I have occasionally wondered about this, specifically from a human factors point of view. I recall an exercise a professor conducted in my intro to psych course as an undergraduate. Try to remember and recall this sequence of numbers:
And then try to remember and recall this sequence:
1776...1812...1865...1918...1945
Pretty much everyone finds the latter easier even though they are effectively equivalent. This was done as a demonstration of Miller's 7±2 model of working memory.
I sometimes wonder if the reason IPv4 continues to stick around and IPv6 hasn't gotten the uptake we need is because the former fits into the memory models needed by the end users (developers) whereas a space of 2^128 instead of 2^32 starts pushing the boundary of what a human operator can easily keep in memory.
I mean, those numbers would be significant to an American with some grasp of history. That doesn't seem like the same thing as memorizing 5 random 4 digit numbers.
I agree though that IPv6 has some human factor problems. You can actually use IPv4 addresses in it like ::10.0.0.1 - that's valid IPv6 if you wanted to use it on your own networks at least.
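For what it's worth, Python's stdlib `ipaddress` module (used here purely as an illustration) accepts both the embedded-IPv4 notation above and the more common IPv4-mapped form `::ffff:10.0.0.1`:

```python
import ipaddress

# "::10.0.0.1" parses as valid IPv6 (the deprecated "IPv4-compatible" form);
# the last 32 bits are the IPv4 address:
compat = ipaddress.ip_address("::10.0.0.1")

# The IPv4-mapped form ::ffff:a.b.c.d is what you usually see in practice,
# e.g. when an IPv4 client connects to a dual-stack socket:
mapped = ipaddress.ip_address("::ffff:10.0.0.1")
print(mapped.ipv4_mapped)  # the embedded IPv4 address, 10.0.0.1
```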
Sure, this was a class at an American university, so the context of those particular numbers matters as an example, but the exercise likely transfers to other relevant dates in other countries. At the core of it, though, is that arguably there are IPs (numbers) in any org worth committing to memory, because systems can and do fail, and for humans, encoding IPv4 seems inherently easier than encoding IPv6. My (untested) hypothesis is that this may be at least somewhat of a contributing factor to the slowness of IPv6 adoption (IMO the other, more prevalent one being market factors around IPv4 scarcity). I just thought it was an interesting parallel to share.
Interesting reference. I'm not sure, though, how many people actually keep IP addresses in their memory. Even with IPv4 I usually copy-paste the entire address unless I'm sure only one of the octets has changed. I wonder if others do the same.
For IPv4, I will often memorize an address for a short period of time, to type it in somewhere else, or to be able to recognize a particular address in logs or something.
Once you make any change, even just "add another octet", you're into the chicken and egg problem IPv6 faced until recent years forced the hands of some ISPs. Given that IPv6 now has something like 20% adoption, there's little point starting over with something people view as simpler because of smaller changes to the written representation of addresses.
There are a lot more changes than just an expanded address space with IPv6.
So while you may have a similar problem, it would be FAR FAR FAR FAR easier for organizations to adopt an addressing system that works EXACTLY the same as IPv4, just with a larger address space,
than all the other "improvements" being forced, unwanted, onto everyone's network with IPv6.
The IPv6 spec caused all these problems by biting off more than was needed to solve the problem.
Just start with a fixed 1::a: prefix and duplicate the entire IPv4 range into the remaining groups, one octet per group (i.e. 8.8.8.8 would become 1::a:8:8:8:8). That would still leave a huge amount of space open, since the digits a-f remain technically available in each group (i.e. 1::a:8:8:8:f would be a valid IPv6 address).
Then, as people stop using NAT, let them have access to the 1::b: section, and if that section ever fills up, 1::c:, and so on. Once NAT is sorted, roll out routers that NAT IPv4 to 1::b: addresses, so hosts are in both locations at the same time.
It's easy enough to mentally switch to using colons instead of periods, and everyone could also easily memorize their opening "street sign" or whatever it is called.
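A minimal sketch of this hypothetical mapping (the `1::a:` scheme is just the proposal above, not any real transition mechanism, and `to_proposed_v6` is a made-up name):

```python
import ipaddress

def to_proposed_v6(v4: str, section: str = "a") -> str:
    """Embed a dotted-quad IPv4 address into the hypothetical
    1::<section>: scheme, one hextet per original octet."""
    octets = ipaddress.IPv4Address(v4).packed  # the 4 raw bytes
    tail = ":".join(format(b, "x") for b in octets)
    return f"1::{section}:{tail}"

addr = to_proposed_v6("8.8.8.8")   # "1::a:8:8:8:8"
ipaddress.ip_address(addr)         # parses fine: it is valid IPv6
```

Note this wastes most of each 16-bit group (only values 0-255 ever appear under the mapping), which is exactly why the a-f "digits" stay free as extra identifier space in the proposal.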
Phase one has us use something in TCP headers, maybe the reserved area, to set a number. It's OK for that to be small, maybe just 1 to 4, depending on bit-size requirements.
Since old systems don't use that reserved header space to determine anything, they'll ignore it. And new systems will exclusively use unroutable address space, like the 240/4 being discussed here, for routing.
So old systems will drop the 240/4 address space, but new systems will route it, as it will be 2.240.x.x.x. Only compliant systems will see the new address space; the rest won't.
It won't help old systems, but it means new systems using only the old address space can still speak to old systems without breaking them.
Everyone loves sensible change, so unlike ipv6, ipv8 will only take 20 years! (What's ipv6 been out for, 25 years and not adopted yet?)
Rather than changing TCP, we could just make web browsers query SRV records to see what port to connect to. Then you could host 65535 websites on the same IP address, without requiring any software in the middle to check Host/ALPN/SNI. (The web is moving to UDP anyway with HTTP/3. Not saying the web is the only important part of the Internet, but it's a big one.)
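For illustration, discovery could look like the hypothetical SRV record below (zone-file syntax; the names and port are made up). XMPP and SIP clients already do exactly this kind of lookup:

```
; _service._proto.name.      TTL  class SRV priority weight port target
_https._tcp.example.com.     3600 IN    SRV 10       5      8443 web1.example.com.
```

A browser that honored it would connect to web1.example.com on port 8443 instead of assuming 443 on example.com.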
IPv6 is kind of over the major adoption hurdle of rewriting all software to understand it. Nearly all software understands it. I'm not sure anyone really has the appetite for that again.
People like to make fun of IPv6 adoption rates, but I'm pretty sure we'll all be dead and dust before SRV becomes much more than a niche curiosity. It has similar chicken-and-egg problems but much less financial motivation behind it.
Honestly, if IPv6 had been v4 but with 2 extra octets, I believe we would be at close to 100% adoption. And we also wouldn't run out of IPs anytime soon.
They haven't made many of the changes for fun, but because they make some things work better, e.g. using multicast instead of broadcast in many places compared to IPv4; enabling autoconfiguration and making it a lot more integral; making the smallest standard prefix a /64 (with point-to-point links being /127), eliminating basically all thinking about how many hosts can fit in a subnet.
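The /64-everywhere point can be seen with the stdlib `ipaddress` module (illustrative only; the prefixes come from the 2001:db8::/32 documentation range):

```python
import ipaddress

lan = ipaddress.ip_network("2001:db8:0:1::/64")    # one standard LAN segment
p2p = ipaddress.ip_network("2001:db8:ffff::/127")  # a point-to-point link

print(lan.num_addresses)  # 2**64: never again ask "will the hosts fit?"
print(p2p.num_addresses)  # 2: exactly the two link endpoints
```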
There are things with MTU and fragmentation that are partly more complicated, but in a sense also simpler than IPv4.
I've been thinking, we really just need lots and lots and lots of IP addresses, far more than we realise right now. ipv8? Should probably be ipv32 or some such.
An example: when we start throwing smart nano on the grass, to see how each blade of grass is doing on your lawn. Well, each nano is going to need an IP address, and so will all in the neighbourhood. And that's just the grass.
What about when some scientist wants to track sand dispersal patterns, on a beach, after a hurricane? And wants to have each grain of sand tagged with its own nano hitchhiker? Just think of the IP addresses we'll need then!
And even medicine. What if we want to internally tag each cell in our bodies with nano? What if we want to bioengineer our bodies, so that each cell has its own IP address? And can report health/condition?
No, we need more, more more IP addresses! So many more.
> What if we want to internally tag each cell in our bodies with nano?
There will need to be multiple addresses for all the nano-services running inside each of those nanobots. Solution: install a k8s cluster with each nano...
It's something like a /48 for every second square metre of Earth's land surface. Single addresses are not a useful metric, as IPv6 works a lot more with prefixes (groups of addresses, to simplify management and allow things like auto-configuration). The smallest prefix that you would assign to a VLAN with auto-configuration is /64. So a /48 (i.e. 65,536 subnets) in IPv6 is roughly the equivalent of a /16 (like 192.168.X.X, but global unicast) in IPv4. That is what normal businesses usually get. Home customers should get a /56, which still leaves space for 256 subnets with basically unlimited hosts in them. Not too bad, I think.
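A quick sketch of that arithmetic with `ipaddress` (the prefixes here are from the documentation range, purely illustrative):

```python
import ipaddress

site = ipaddress.ip_network("2001:db8::/48")      # a typical business allocation
home = ipaddress.ip_network("2001:db8:100::/56")  # a typical home allocation

# Number of /64 VLAN-sized subnets each allocation can carve out:
print(2 ** (64 - site.prefixlen))  # 65536
print(2 ** (64 - home.prefixlen))  # 256
```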
I never got why they had to make it that large. I doubt we'll ever want that many devices each with a unique IP. So why not have a simpler system that makes switching easier? Even one octet more in IPv4 would have sufficed for a very long time; two would probably have been enough forever.
The first is, they were working on this before the Internet was widely used. Back in the dialup days, pre-2000: the draft was presented in 1998, and work was surely going on in 1995 and before.
Compared to today's scope and size, this was nascent/early style change, in something which was constantly changing.
They didn't see any contention likely at the time. Why not have all the universities, government departments, and research bodies switch? This was an entirely different landscape compared to today.
So in their eyes, why not make the change? It wasn't a big deal; hell, back in the early 90s people were still using token ring adapters/networking in many offices!!
The second thing is, NAT wasn't really a thing back then. Computational power was a limiter for large-scale usage.
The first RFC for NAT (RFC 1631) came out in 1994, and Linux had ipmasq, but that was still brand new in the mid-90s.
Basically, ipv6 was crafted before anyone had any idea we'd be where we are today.
In fact, the worry about address space exhaustion in the early 90s was due to a lack of NAT, or even the idea that it could be deployed everywhere at scale.
Where would all the compute power for NAT at scale come from?!
So basically, it was crafted with a different viewpoint.
ipv6 is Windows ME. Skip over it. Wait for ipv8.
edit: Oh! We could make ipv8 use 16-bit groups instead of octets!