IPv6 Fragmentation Loss (potaroo.net)
96 points by oedmarap on April 23, 2021 | hide | past | favorite | 82 comments



The biggest single cause of network engineer hair loss must be MTU and fragmentation. We had some customers over a tunnel (IIRC, it was MPLS) that reduced the MTU from 1500 (default Ethernet) to 1496. This happened when our vendor changed some equipment and forgot to update the MTU on either end to accommodate the extra MPLS header. Of course, there was a misconfigured firewall that wouldn't fragment and wouldn't send "ICMP won't fragment".

The result? Most DNS queries went through. All chat applications worked. SSH generally worked, except when you started a full-screen terminal application. Smaller web pages loaded. Larger web pages loaded only partially.

Imagine non-technical users explaining their issue. "The Internet is half-broken. Please help."

God only knows how much hair I would have today if the world had figured out MTU and fragmentation properly.
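The arithmetic behind that failure mode fits in a few lines. A rough sketch (assuming minimum IPv4 and TCP header sizes with no options; the specific numbers are illustrative, not from the story):

```python
# Rough model of the 1500 -> 1496 MTU mismatch described above.
# Header sizes are the standard minimums (no IP or TCP options).
IP_HEADER = 20
TCP_HEADER = 20

def max_tcp_payload(mtu: int) -> int:
    """Largest TCP payload that fits in one packet at this MTU (the MSS)."""
    return mtu - IP_HEADER - TCP_HEADER

mss_advertised = max_tcp_payload(1500)   # what the endpoints negotiate: 1460
mss_actual = max_tcp_payload(1496)       # what the MPLS path can carry: 1456

# Any segment whose payload falls in this window is silently dropped when
# the firewall neither fragments nor sends ICMP back:
blackhole = range(mss_actual + 1, mss_advertised + 1)
print(f"payloads of {blackhole.start}..{blackhole.stop - 1} bytes vanish")
```

Small DNS replies, chat messages, and most interactive SSH traffic stay under the real limit; a full-screen redraw or a large page does not, hence the "half-broken internet".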


> Of course, there was a misconfigured firewall that wouldn't fragment and wouldn't send "ICMP won't fragment".

Blocked or dropped ICMP has caused me heartburn as well. I am pretty sure my co-workers are used to my "blocking ICMP is Evil Bad and Wrong" rants by now.


AWS blocks ICMP by default, though, and it caused a lot of traceroute problems for me...


> If the maximum transmission unit (MTU) between hosts in your subnets is different, or your instances communicate with peers over the internet, you must add the following network ACL rule, both inbound and outbound.

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network...

If your instances communicate with peers over the internet. So by default, AWS was built to assume your instances do not communicate over the internet, lol.


Some people don't want to deal with ping floods, sweeps, ICMP tunneling issues, and the whole class of ICMP redirect attacks?


There's a difference between blanket blocking all ICMP and selectively blocking or rate-limiting a subset of ICMP messages according to an accurate threat model for your various networks. The latter, properly done, is unlikely to break MTU discovery or your own ability to troubleshoot and monitor your own hosts/networks. The former is what makes network engineers want to eviscerate you and use your gut as patch cables.


IMO, the crux of the issue is that kind-of-necessary and dangerous functionality is commingled in "ICMP", with no trivial way to separate the two.

With usual TCP and UDP, you block by default, allow outgoing, allow replies and your internet works (consumer defaults). If you want, you allow specific incoming ports and it's good enough for most use cases. It's almost trivial to configure a reasonable basic firewall, and you can learn all you need in 5-10 minutes.

With ICMP, there is this long list of types and subtypes, some of which sound dangerous while others sound necessary. Which ones should you block and which ones should you allow?

Do you expect everyone to create an accurate threat model? People have lives. So, some people end up blocking ICMP and internet "works" except .. of course it doesn't.
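For what it's worth, the essential list is shorter than the type table suggests. A sketch of my reading of RFC 4890's recommendations for ICMPv6 transit traffic (type numbers from the RFC; some types are only required for specific codes, so treat this as a starting point, not a policy):

```python
# ICMPv6 message types that RFC 4890 recommends transit firewalls
# not drop, keyed by type number. (For some of these, e.g. Time
# Exceeded and Parameter Problem, the recommendation covers only
# certain codes.)
MUST_NOT_DROP = {
    1: "Destination Unreachable",
    2: "Packet Too Big",        # blocking this breaks PMTUD
    3: "Time Exceeded",
    4: "Parameter Problem",
    128: "Echo Request",
    129: "Echo Reply",
}

def allowed(icmp_type: int) -> bool:
    """Everything else can be dropped or rate-limited per local policy
    without breaking basic connectivity."""
    return icmp_type in MUST_NOT_DROP
```

Six types to allow; the rest is policy. The point stands that nobody should have to dig this out of three blog posts.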


> Do you expect everyone to create an accurate threat model? People have lives. So, some people end up blocking ICMP and internet "works" except .. of course it doesn't.

This is the internet equivalent of dumping your sewage in the lake because it's too difficult to deal with properly.

People that do this externalise the costs of their choices onto everyone else, and it takes a bunch of specialists to identify what's happening and educate the problematic network operator, and/or clean up the mess.


In a way yes, but I think the issue is largely caused by those up the chain that make it hard to deal with.

Doing a search on should I block ICMP, or what ICMP to block, answers are:

[1] No!!; some security issues; a lot of ICMP should be blocked; suggests further research

[2] you should selectively filter; example iptables rule to allow echo; assess evaluate and make your own rules

[3] listing of types and RFC recommendations for transit and local traffic

So I guess I should take "Should Be Dropped" and "Policy Should be Defined" from [3] and plug them into [2]. Why is it so hard?

Why isn't the answer: "No, defaults are safe, no need to block anything", or "Select one of: Endpoint / Site firewall / Internet router; customize if needed"?

IMO, this is a mess. This is the ultimate cause of all the sewage, time spent both debugging, and even more on learning what and how to block, configuring it all. Issues like this even hinder IPv6 adoption, because who is going to deal with all the complexity.

[1] http://shouldiblockicmp.com/

[2] https://blog.securityevaluators.com/icmp-the-good-the-bad-an...

[3] https://serverfault.com/questions/981558/which-ipv4-6-icmp-t...


> In a way yes, but I think the issue is largely caused by those up the chain that make it hard to deal with.

> Doing a search on ... answers are:

Sorry, "Google search results" are not "up the chain". That's your first mistake.

There's a tonne of misinformation in Google search results. It's real hard to identify what's good and what's not if you're not a specialist in the field, so it's easy to fall victim to this and just believe well meaning well written blog posts.

Don't do this. When you don't know something, speak to someone who does know about the subject matter at hand. Not anonymous people on the internet, but someone in reality that you can have a conversation with. If it's a topic that's vaguely within the remit of someone you work with, that's a good place to start.


I always saw ICMP blocking as an easy ability test for the person configuring the firewall. If they block it all, that means they have no clue about the protocols they're configuring, and it's safe to assume that they likely got other things wrong too.


I consider the time I spent implementing RFC 4638 to allow the PPPoE implementation I use to have a 1500 byte MTU to be a good investment.


I wish centurylink fiber supported baby jump frames. :/


Would make more sense for them to just do IP over fiber directly than fiddling with their PPPoE implementation.

I have centurylink DSL with PPPoE and the thing that really bugs me is that if your modem loses the PPPoE password, it can log in with default credentials and ask for the password. CenturyLink clearly knows who I am without needing PPPoE authentication, so what do they get out of it?


My local provider requires PPPoE over fiber but accepts any password. It’s probably some legacy artefact that’s too complicated to remove from the stack.


Dollars to doughnuts the PPPoE implementation is tied into a billing system.


It makes wholesaling your network easy since you can tunnel switch based on the user@domain used during authentication.


baby jumbo frames.


haha. yeah, not sure how jump slipped in there.


The world did, then people came along and started blocking ICMP


Yes. There is a special circle in hell for braindead firewall admins filtering ICMP. Unfortunately happens often enough that "MTU problem" is the first thing that comes to mind at the grandparents description. Next in line would be "broken DNS"...


They are not braindead. ICMP used to be a bit broken and folks got sick of ping floods, sweeps, redirect attacks and tunneling hacks etc. Some pretty major places block ICMP as a result to keep their network reasoning and security analysis simpler.


Just because they are major places doesn't mean they are right or such practice buys any improvement in security worth the operational cost it imposes.


ICMP ping of death was a thing in the 1990s. Those bugs have been fixed for more than 2 decades now.


What do ping floods do that TCP syn floods don't? (I expect you run at least one service on the exposed hosts)

Also what system even accepts ICMP based route redirects by default from the open internet?


And when I am in my less charitable moods, I would claim that said people are unqualified to not just to configure networks, but to have opinions worth considering when it comes to setting network policy.


Unfortunately the only qualification required is for you to want to communicate with their network.


In my utopian world, we wouldn't need "ICMP won't fragment". IPv4/IPv6 would have an MTU of exactly 4096 bytes.

Why not a higher or lower level protocol? Because this is the lowest level protocol which is end-to-end and has a path MTU. Lower level protocols would need to somehow handle the MTU and higher level protocols would need to fit their segments within the MTU.

Why 4096? Because this is a common page size on computers. Multiples of this would likely bring little benefit, due to GRO/GSO.

Also, it's my utopian world, so if you don't like it create your own. :))


4096 can be too much for unstable connections, for example Wi-Fi or mobile.

Also, 4096 bytes at what level? Layer 2? And when you are using a tunnel, you add bytes and thus have to go higher than 4096 or go lower and reduce MTU, or fragment the packet to keep 4096 bytes.

You are just raising the limit, not solving the problem.


Not much of a utopia if all you do is say the same problem is now unsolved at a different layer ;). Nothing special about the layer below IP that allows it to fix the problem in a way IP couldn't be made to - it's just an annoying problem to fix efficiently.


This is the network version of ‘640k should be enough for anyone’.

Don’t fixate this kind of policy in such a fundamental protocol.


The issue is how the networks are glued together: your X MTU will be embedded in a VPN tunnel or PPPoE, and that device needs to tell your device about its MTU


How would this work for IP over IP networks?


what is even more annoying is that there is a perfectly fine solution for this problem called path MTU discovery (PMTUD). sadly many people block the entirety of ICMP to prevent this from working properly. Also, some NAT devices just straight up lie about the path MTU.


Websites half working should set alarm bells ringing that it's an MTU problem. The reason google.com works is because they only support an MTU of 1280.


The minimum size dictated by IPv6, and thus almost always guaranteed to work.


I have an interesting uplink that drops packets of size about 1430-1470, but has no issue with smaller or bigger packets, even 1500.

ICMP is rate-limited, and 1500 byte packets work, so PMTUD doesn't kick in. Hence, the connections hang if you hit the magic size.

Had to decrease MTU. Unfortunately, that doesn't solve remote-side UDP.


Why would you reduce the MTU? I had an issue many years ago playing league of legends where log in would fail until I changed the MTU to be slightly lower. Something like 1486.


Tunnels always have the danger of reducing the MTU because they have to account for the header.

Worse is that if a tunnel doesn't reduce the MTU and its packets are dropped by a router further down the line, the ICMP TOO_FAT response goes back to the tunnel endpoint, not the original host. The tunnel endpoint has zero clue what to do with it and drops it on the ground, leaving the original host in the dark.

Even worse is when overzealous firewall admins start locking down everything they don't understand and that includes ICMP. "Some hackers could ping our networks!" Then you're truly up a creek.

Luckily MTU breakage is fairly easy to spot once you know what to look for. It's easy enough to fix too if your local firewall admins aren't the block everything type. Just fix the MTU on the tunnel endpoint and suddenly everything will start working.

One final note: If as a router or application developer you ever find yourself having to fragment packets, please, for the love of all that is holy, fragment them into two roughly equal sized packets. Don't create one huge packet and one runt from the leftovers. It hurts me to see packets coming in from some heavily tunneled source with a 1400 byte initial packet followed by 3 or 4 tiny fragments. Or worse, 3 or 4 tiny out-of-order fragments followed by the 1400 byte bulk because the MAC prioritized small packets in its queue and sent them first.
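The equal-sized-fragments plea can be sketched as a toy splitter. This is not a real IP stack; it only honors the rule that every fragment's payload except the last must be a multiple of 8 bytes, and it balances sizes instead of filling the first fragment to the brim:

```python
import math

def fragment_sizes(payload_len: int, max_frag: int) -> list[int]:
    """Split payload_len bytes into roughly equal fragment payloads.

    max_frag is the largest payload one fragment may carry. All
    fragments except the last carry a multiple of 8 bytes, per IP
    fragmentation rules.
    """
    if payload_len <= max_frag:
        return [payload_len]
    n = math.ceil(payload_len / max_frag)
    # Aim for payload_len / n bytes each, rounded up to a multiple of 8,
    # but never beyond the largest 8-byte-aligned size that fits.
    size = min(math.ceil(payload_len / n / 8) * 8, max_frag // 8 * 8)
    sizes = []
    remaining = payload_len
    while remaining > size:
        sizes.append(size)
        remaining -= size
    sizes.append(remaining)
    return sizes

# Balanced: two ~736-byte fragments instead of 1400 bytes plus a runt.
print(fragment_sizes(1472, 1400))
```

The naive alternative would emit `[1400, 72]` for the same input; the runt is what hurts.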


> One final note: If as a router or application developer you ever find yourself having to fragment packets,

Ideally, you should not want to fragment IP packets. It is far better to do MSS clamping in TCP to prevent the TCP payload from growing above the maximum MTU in the path. TCP can stitch together the data at the other node just fine, without routers in between having to fragment the IP packets, which would kill your bandwidth in comparison.
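On an endpoint you can do a crude version of this yourself by clamping the MSS on a socket before connecting. A Linux-specific sketch (TCP_MAXSEG is the relevant option; the 1200 value is arbitrary, and router-style clamping instead rewrites the MSS field in SYN packets passing through, e.g. iptables' TCPMSS target):

```python
import socket

# Cap the MSS this socket will advertise in its SYN, so the peer never
# sends segments larger than the clamp. Takes effect at connect() time;
# the kernel may adjust the value slightly.
CLAMPED_MSS = 1200  # chosen for illustration, comfortably under 1280-40-20

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, CLAMPED_MSS)
# ... connect() and use the socket as usual; the kernel carves large
# writes into segments no bigger than the clamp.
sock.close()
```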


The "application" in this case is for example a packet->radio bridge application, or a VPN tunnel endpoint. Something that is chewing on packets mid stream where you don't get much choice in the matter.

One thing I like about IPv6 is the minimum MTU of 1280. That's big enough that if I'm really uncertain about the environment I can just set my MTU to that and avoid future headaches without impacting performance too badly. IPv4's 576 minimum nearly triples the number of packets you generate, which is really noticeable when you're running routers close to their limit. Forwarding speed is often dominated by packets per second, not bytes per second.
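The packets-per-second point is easy to quantify. Assuming minimal headers (40 bytes IPv6, 20 bytes IPv4, 8 bytes UDP) and a bulk UDP transfer, here is what each floor costs (the 1 MB figure is just an example):

```python
import math

def packets_needed(total_bytes: int, mtu: int, header: int) -> int:
    """Packets to move total_bytes when each packet carries mtu - header."""
    return math.ceil(total_bytes / (mtu - header))

MB = 1_000_000
# IPv6 guaranteed minimum MTU: 40-byte IPv6 header + 8-byte UDP header.
v6_min = packets_needed(MB, 1280, 48)
# IPv4's conservative 576-byte floor: 20-byte IPv4 + 8-byte UDP.
v4_576 = packets_needed(MB, 576, 28)
# Plain 1500-byte Ethernet for comparison.
v4_1500 = packets_needed(MB, 1500, 28)

# Dropping to 576 costs roughly 2.7x the packets of plain Ethernet;
# dropping to 1280 costs only about 1.2x.
print(v6_min, v4_576, v4_1500)
```

Which is the parent's point: on a router bound by packets per second, a 1280 floor is cheap insurance and a 576 floor is not.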


>IPv4's 576 minimum nearly triples the number of packets you generate

IPv4 minimum MTU is... 68 bytes.

>Every internet module must be able to forward a datagram of 68 octets without further fragmentation.

https://tools.ietf.org/html/rfc791


1500 bytes is the de facto standard maximum size of an Ethernet frame (not including headers). Therefore, if you want to send data over Ethernet, for example an IP packet, you can typically only send up to 1500 bytes at a time, and if you try to send more it gets dropped. This is called the Maximum Transmission Unit (MTU).

Apparently, their customers were tunneling IP packets through another protocol, meaning that instead of sending an IP packet in an Ethernet frame, they were sending an IP packet in an X packet in an Ethernet frame. Since, like IP and Ethernet packets, the X packets need to contain some information related to the protocol, there was less room for the IP packet. I.e. the MTU was lower.

When you set the MTU on your Operating System (OS), it refers to something slightly different. Instead of "this is the maximum size packet that will fit", it means "assume this is the maximum size packet that will fit". You can use that setting to force your OS to send smaller packets if you know the MTU is lower than your OS thinks.
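The tunnel arithmetic above can be written down. A sketch using common minimum per-layer overheads (real deployments can add options, VLAN tags, etc., so these are illustrative values):

```python
# Common encapsulation overheads in bytes (typical minimums).
OVERHEAD = {
    "mpls": 4,     # one MPLS label (shim header)
    "pppoe": 8,    # PPPoE header (6) + PPP protocol field (2)
    "gre": 24,     # outer IPv4 header (20) + basic GRE header (4)
    "ipip": 20,    # outer IPv4 header
}

def effective_mtu(link_mtu: int, *layers: str) -> int:
    """MTU left for the inner IP packet after each tunnel layer."""
    return link_mtu - sum(OVERHEAD[layer] for layer in layers)

print(effective_mtu(1500, "mpls"))           # the 1496 from the story above
print(effective_mtu(1500, "pppoe"))          # the classic DSL 1492
print(effective_mtu(1500, "gre", "pppoe"))   # layers stack, and it gets worse
```

This is exactly why "set the MTU lower on your own interface" works as a fix: it makes the OS's assumption match what the path can actually carry.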


1500 payload bytes is not de facto, it's part of the actual 802.3 standard. Any other payload MTU is actually non-standard (and the IEEE has gone out of its way to avoid standardizing other payload sizes. Frame size had a bump with 802.3as but still expressly left the payload at 1500).

MTU doesn't refer to anything different in that case; MTU always refers to the maximum size frame "this" node can handle. It also doesn't always mean the OS assumes it's the maximum size packet that will fit on a path; it's the maximum size Ethernet frame the OS knows will fit on the NIC. The OS has other methods for assuming things about a path. Forcing the MTU lower does force the OS to assume any path is never more than the MTU, though, which is why it works as a fix.


MTU mismatches are hard to solve too. This happens when you have nodes and/or switches that are either misconfigured or don't all support autodiscovery and then have divergent defaults.

A typical symptom is that things (e.g., ssh) hang. This typically happens when you use a protocol that uses Kerberos in an Active Directory domain with very large Kerberos tickets. PQ crypto would do this too.


Couple of years ago I wrote a tool to check if end-hosts are complying:

http://icmpcheckv6.popcount.org/

(v4 version http://icmpcheck.popcount.org/ )

it answers:

- can fragments reach you

- can PTB ICMP reach you

hope it's useful. Prose: https://blog.cloudflare.com/ip-fragmentation-is-broken/

Notice: it's easy to run the tests headless with curl if you need to see if your server is configured fine.

Fun fact is that it's very much not easy to accept/send fragmented packets from linux. I learned the hard way what `IP_NODEFRAG` is about.


Wow this is great, thanks for hosting this!


Thank you very much for this! One small thing I noticed: the curl examples on the v6 page call the v4 version.


Woah, I used this tool all the time while debugging hoppy.network! Do you still work at Cloudflare?


The article conclusion is basically that IPv6 extension headers (fragmentation is one of them) are useless on the Internet, which seems pretty reasonable to me. They're basically a research tool.

The lack of on-path fragmentation in IPv6 is definitely on purpose. It was a mistake in IPv4 and would be silly to replicate in IPv6. The fragmentation header in IPv6 is effectively useless. It can only be done at the endpoints, and if that's the case the application should be doing it, not the stack. Instead IPv6 mandates path MTU discovery, which is the correct solution.


Based on how fragmentation is handled (generally poorly, often because there's little choice), I would have preferred truncation with an in-band signal. For TCP, truncation is a clear win; you get some of the packet, and can signal back to the other end that things are missing, and hopefully the other end adapts to stop sending packets that get truncated. (Of course, when a middle box uses large receive offload to combine the packets and then complains that they're too big to forward, it's hard to fix as an endpoint).

For UDP, it's not so simple; IP fragmentation does allow for large data, all or nothing processing, without needing application level handling, but the cost of fragmentation is high.

The out of band signalling when sending packets that are too large is too easy to break, and too many systems are still not setup to probe for path mtu blackholes (the biggest one for me is Android), and the workarounds are meh, too.

Another option would be for IP fragments to have the protocol level header, so fragments could be grouped by the full 5-tuple (protocol, source ip, dest ip, source port, dest port) and kept if useful or dropped if not, without having to wait for all the fragments to appear.


Truncation creates even more work actually. IP routers don't need to understand deeper protocols or do anything with them to fragment, simply split the IP packet into fragments and recalculate the IP checksum (and in v6 they don't even need to do that). To do this with truncation you have to know how to parse the inner protocol headers and modify things like length or checksums in those as well.

You get the same issue putting the information on the fragments. Now there is no "IP layer"; there is just "well, we're using IP+UDP today and how that works right now should forever be baked into this hardware that will be here for 20 years", which is exactly the problem that led Google/IETF to push headers deeper with HTTP/3 to get out of that mess.

You also can get an in-band signal that you're being fragmented in the middle without changing IP. E.g. TCP already negotiates an MSS, if IP fragments at the start of a group come in smaller than that you know there is something fragmenting in the middle.


Minimum work for routers would be to mark the IP packet as truncated and adjust the IP checksum; the upper level protocol can suck it. This is, of course, more work than dropping it on the floor and pretending you'll send an ICMP, but much less work than sending two (or occasionally more) new packets.

In the middle fragmentation is not really something that happens very often. IPv6 prohibited it, but in IPv4, nearly all packets are marked do not fragment, because IP routers weren't fragmenting much anyway; I think it's more likely to get an ICMP needs fragmentation packet on a too big packet with Don't Fragment, than to actually get fragmented delivery.

Also, MSS is mostly not a negotiation; most stacks send what they think they can receive in the syn and the syn+ack. The only popular stack that sent min(received MSS, MSS from routing) was FreeBSD, but they changed to the common way in 12 IIRC; which in my opinion is a mistake, but I don't have enough data to show it... actually what seems best is to send back min (received MSS - X, MSS from routing), where X is 8 or 20, depending on if you more of your users are on misconfigured PPPoE or behind misconfigured IPIP tunnels.


At that point it pretty much amounts to "send a message in a raw IP header to the destination rather than a message in an IP+ICMP header to the source". The extra step of truncation of the original payload doesn't gain you anything except still doing more work on the router than v6 would have it do to gain a lot of pain for the upper layers/endpoint stacks all so the inner session doesn't have to resend the first part of the first packets of a conversation.

The vast majority of IPv4 traffic does not have the DNF bit set. Your logic of why they would doesn't even make sense, as setting DNF only means it'll drop on routers that would have fragmented, not improve the situation with the ones that wouldn't have.

MSS is definitely a negotiation, but a negotiation just between the TCP-aware nodes, not along the whole IP path, which is why I say endpoint stacks can use it to detect if the IP path is fragmenting by comparing incoming IP fragment sizes to the MSS.


> The vast majority of IPv4 traffic does not have the DNF bit set.

I don't think that's correct. Windows, Linux (including Android), FreeBSD, macOs and (Apple) iOS all default to sending Don't Frag. Together, they form the overwhelming majority of IPv4 traffic.

Nobody actually wants fragments, and very few fragments are seen in regular activity. When I ran high traffic servers, I would normally see a few fragments a minute per server, except for when we were under a chargen reflection DDoS attack; Microsoft Services for Unix sends back a hunormous UDP reply which is of course fragmented, and that caused some trouble with fragment reassembly buffers (dropping the buffer size to the minimum solved that, although during a DDoS the couple of people sending fragments would have a bad time; can't win them all).

I think sometimes there's fragments in large UDP DNS replies; but it's generally best to avoid large UDP DNS replies because they often get dropped by poor decisions in network design and software, and there's probably a better way to do what you need to do.

Stacks could use MSS and fragment size to do something cool, but they don't; in part, because nobody sends fragments anymore, and in part, because there's a lot of other cool stuff to do so path MTU gets left behind in a lot of places, like Android. :(

Edit to add:

> At that point it pretty much amounts to "send a message in a raw IP header to the destination rather than a message in an IP+ICMP header to the source".

The thing is, we know that the router that's in a position to truncate the packet can probably get a packet to the destination (otherwise, it wouldn't have this packet). We don't have anything to indicate that it can get a packet to the source; and it turns out that often it can't. An in-band indication would be so much more useful than an out of band (which doesn't work), or fragmentation (which people don't want). Yes, the indication goes to the wrong party, the destination will get the information, but the source needs it; but most interactions on IPv4 are two way, so presumably the destination can tell the source that it needs to send smaller packets, again in-band with the communication it already has, that presumably works.


Sorry you're right the default stacks do set DF on v4, my bad. Otherwise they wouldn't be able to get the PMTUD message so they know to change MSS as when fragmentation is enabled v4 won't send PMTUD ICMP back (and PMTUD working out of the box most of the time is preferred to fragmentation all the time). Had that backwards in my head.

Yeah no stacks do it that I know of, just a theoretical way one could without having to rely on in-path devices sending messages or dropping traffic. I think the v6 path of drop it and send it back was better though. If it's blocked that's their fault :p.

Regarding sending the "message too long" note via an in-band channel instead of its own ICMP message, I actually think that's a great angle, as it'll get through more FWs. I'm just hung up on trying to do something with the truncated payload once it arrives... but I guess that doesn't really matter from a protocol perspective - the important bit is that the note on the MTU issue arrives in band, and the OS can decide what to do with any remaining payload for inner protocols after that, as it's not IP's problem. v6 already got rid of the IP checksum, so the only other things that might need to be updated are NICs/FWs/NAT boxes that check TCP/UDP/protocol checksums, to ignore them if the header had that note.


> The thing is, we know that the router that's in a position to truncate the packet can probably get a packet to the destination (otherwise, it wouldn't have this packet). We don't have anything to indicate that it can get a packet to the source; and it turns out that often it can't.

Another point to add to this is that many routers will rate limit control plane traffic (ICMP, BGP etc), or prioritize data plane traffic ahead of it, so the ICMP too large message is more likely to be lost than a hypothetical in-band indication. It's the same reason you sometimes see packet loss in the middle of a traceroute, but not the end.


Part of the problem with IPv6 extension headers is that there can be an unbounded number of them. No hardware designer wants to deal with that, so just dropping all of them is far easier.


Or we should stop thinking about processing packets with hardware. Does it still make a difference these days?

Designing hardware for a single purpose is, most of the time, not a good idea. It means you are stuck with the implementation that was baked into the hardware for a very long period. Also, a hardware implementation can't take into account (and handle) all particular cases, for example malformed packets.

IP was designed to be flexible: you could in theory have used whatever L2 protocol you wanted, and whatever L4 protocol you wanted. Thanks to hardware implementations that considered only Ethernet as the L2 protocol and TCP or UDP as the possible L4 protocols, nothing was innovated.

The reason that we are so slow to adopt IPv6? Same reason. What would it have taken to adopt it if we didn't have hardware implementations of IPv4? A software update and you are done.

The problem is that in the world there are many not so good programmers that write inefficient code and thus people think that they need to implement things in specialized hardware to make them faster. You don't, at least in most cases.


For $10,000 you can get a box which can route 12.8 Tbps (25.6 Tbps bidirectionally) at 256 byte packet size. Even putting all of that money into CPUs alone (i.e. not the rest of a server or interfaces) you can't route at that performance, let alone match the feature set of the hardware, before talking about extra things one could add in software that would slow it down even more.


what box is that?



Software packet processing simply does not scale to the levels that hardware packet processing does, and that's not even mentioning the cost issues involved in provisioning and operating enough CPUs to do it. Maybe in a few years now that we're seeing higher core counts become common in non-server hardware...


Without knowing how hardware forwarding engines work at any kind of detail, I would be wary of throwing more cores at this kind of problem, because you would need to decide how to divide up your work between cores. You could use a 5 or 3-tuple flow hash, but then you're limiting the speed of a single connection to what a single core can do. And if you use any other way to divide up work, you're bound to get out-of-order packets which kills TCP performance.
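The flow-hash idea described here is what NICs do in hardware as RSS (receive-side scaling), typically with a Toeplitz hash. A toy version, with the endpoints sorted so the hash is symmetric and both directions of a connection land on the same core:

```python
def core_for_packet(src_ip: str, dst_ip: str, proto: int,
                    src_port: int, dst_port: int, num_cores: int) -> int:
    """Pick a core by hashing the 5-tuple.

    Sorting the (ip, port) endpoint pairs makes the hash symmetric, so
    A->B and B->A map to the same core and packets within a single
    connection are never reordered across cores.
    """
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    flow = (proto,) + tuple(sorted([a, b]))
    return hash(flow) % num_cores

# Both directions of the same TCP connection land on the same core:
fwd = core_for_packet("10.0.0.1", "10.0.0.2", 6, 41000, 443, 8)
rev = core_for_packet("10.0.0.2", "10.0.0.1", 6, 443, 41000, 8)
print(fwd == rev)
```

(Python's `hash` is randomized per process, so this is only stable within one run; real hardware uses a keyed but deterministic hash. The single-flow bottleneck the parent mentions is visible here: one 5-tuple always maps to exactly one core.)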


> Or we should stop thinking about processing packets with hardware. Does it still make a difference these days?

Yes, yes it does. CPUs will never catch purpose built chips for networking.

Networking chips are getting more programmable, though. The most known effort is P4 (https://p4.org). Even for vendors that don't support P4, they are still internally making their hardware more fungible, which is good.

> The reason that we are so slow to adopt IPv6, same reason.

IPv6 has been slow to adopt because it solves a problem most people have not had, in an overly complicated way.


I could be wrong, but I would think high packet rates with large routing tables (default-free zone) would still need a TCAM.


Also, hardware based offloading is a godsend for reliability as well.

You can upgrade the control plane of a router with traffic still flowing thanks to the data plane being separate. You can even make another router do control plane for the data plane in another router. Not to mention the massive performance benefits.


> Does it still make a difference these days?

Have fun routing several Tb/s of traffic without dedicated ASICs....


There are "only" 256 extension header IDs, which gives an upper bound. Realistically you're never going to be able to see more than 20ish to process on a packet, because after that you're likely requiring the packet be >1280 bytes, which would create problems anyway - unless all of a sudden they start pumping out 8 byte extension headers like candy.


Regarding the half-working internet, it comes to my mind: we have remote Linux boxes accessible via ssh tunnels. Very rarely, but repeatedly, the sudo password is not accepted. You type it extra carefully one character at a time, you copy-paste it, it just doesn't work. You angrily complain to the team, asking whether anybody has changed the password on that box. Nobody has. Typically others say it works for them; in rare cases they can even confirm it at the same time. After a while, 10 or 20 minutes, it just works again. It does not happen often, but in the meantime everyone in the team who initially claimed this cannot possibly happen has experienced it.


TCP doesn't lose traffic, it blocks and retransmits (until the connection itself breaks). it cannot lose just the password bytes entered to sudo and still transmit the rest

so unless the connection didn't respond to anything at all after it failed to accept your password, your problem is not with the ssh tunnels or due to path/mtu issues themselves.


> TCP doesn't lose traffic,

I know. And if it did you would see it in commands you type, too. Not just in invisible passwords.

I just said this mystery came to my mind when reading the story of being able to communicate with some web servers but not others. Not trying to claim we have that same MTU problem.

Actually now that I type it, I had the problem of half-broken internet myself in the past when roaming internationally with my phone. I needed to use a VPN to contact certain web servers. Need to keep this in mind when roaming becomes relevant again some day... (This happened in the UK and somewhere in the EU and affected e.g. my online banking and a Linux Foundation conference web site. I don't think censorship was the culprit.)


It seems with all "standards" there is the published standard as written, and the standard "as implemented" - and many of the unused corners quickly become "here be dragons".


"IPv6 was intended to be "just like IPv4, but with 128-bit source and address fields". Just make IP addresses 96 bits longer and add no more magic. But it didn't turn out that way. "

Where does that quoted portion come from? The mind of the OP author, or someone else?

If IPv6 was just a 128-bit version of IPv4, I would be an IPv6 user.

As long as it continues to work, on the networks I control, I will prefer the relative simplicity of IPv4.

Relative to IPv4, IPv6 is more complex.


v6 actually is pretty close to a wider v4. There are very few new things in it, the socket API is the same for both, and parts like addressing and routing work exactly the same. It could've been very different, but it's not.

Running v6 is similar in complexity to v4, but a lot simpler than v4+NAT. Since NAT is more or less a necessity in v4 these days, v6 ends up being simpler in practice.
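The "same socket API" point is visible in code: with getaddrinfo, a client doesn't have to care which protocol it ends up on. A minimal sketch (`connect_any` is just an illustrative name; real clients usually add Happy Eyeballs, racing v6 and v4 attempts):

```python
import socket

def connect_any(host: str, port: int) -> socket.socket:
    """Connect over IPv6 or IPv4, whichever getaddrinfo resolves first.

    The only v4/v6-specific value is the address family returned by
    getaddrinfo; everything after socket() is identical for both.
    """
    last_err = None
    for family, type_, proto, _name, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, type_, proto)
            s.connect(addr)
            return s
        except OSError as e:
            last_err = e
    raise last_err or OSError("no addresses for " + host)
```

Code written this way a decade ago picks up v6 the day the network offers it, which is part of why the application layer was rarely the adoption bottleneck.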


Meanwhile the IPv6 adoption rate[1] has slowed down, and apart from a few countries pushing the migration, most simply don't care, it seems. You would expect the global pandemic and the shift to remote work to speed up the change, but for some reason it didn't.

[1] https://www.google.com/intl/en/ipv6/statistics.html


The initial rush was mostly due to mobile phone carriers, some even going as far as IPv6-only. Now that that's pretty much in place, with pre-4G networks being turned off, we're back to waiting on existing networks to actually convert to move the dial, which has been a much slower process.


Real-life advice: disable ipV6 whenever possible. It is nothing but a deadly cancer.


Without informed elaboration, it is incredibly difficult to take such a ridiculous comment seriously.


From the perspective of a working programmer who wants to get things done, ipv4 does everything I need. ipv6 is nothing but a headache for no apparent advantage. It is poorly supported in the real world. I once had a situation where my internet provider appeared to support ipv6 but it was so buggy I could not successfully get apt to work.

After similar problems with ssh, I routinely add "-4" to any ssh invocation. I just cannot be bothered with a protocol that adds nothing to my life but problems and headaches.

I'll deal with ipv6 when I hear that ipv4 is being deprecated. As in, most likely never.


First of all, it is not reasonable to blame the protocol for your ISP's inability to configure it properly.

Secondly, I don't get this mentality that IPv6 is some totally inferior thing and I really don't get people advising others to disable IPv6 just because they don't understand it.

Statements like "IPv4 does everything I need!" are, ultimately, totally missing the point. The fact is that we have hit the scale ceiling of the IPv4 address space and other cracks are showing and now the entire world needs something that will scale for the next few hundred billion devices.

IPv6 is the protocol that will lift the scale ceiling higher, not just for you and your needs, but for everyone. It won't really change anything performance-wise, nor will it change how higher-layer protocols like TCP and UDP work, but that's intentional.

You can hold out on principle if you like but it won't gain you anything. The world will just migrate around you eventually.


I suspect you'll have a new perspective when you try and obtain a block of IP addresses and compare the prices...



