I'm guessing everyone downvotes you for the very strange implication that most software stores IP addresses in ASCII. All networking APIs I'm aware of expect IPv4 addresses as a DWORD.
That's the point: instead of rewriting the full stack, I would rather change the prototypes of these APIs.
To store 999.999.999.999 you are totally fine with a 64-bit INT (QWORD), and there is no backward-compatibility struggle in storing a 32-bit INT (DWORD) inside it.
It's more a matter of doing #ifdef IPV4_EXTENDED #define DWORD QWORD #endif,
adding an extra IP field inside the IP packet header itself that says "this is the IPV4_EXTENDED DESTINATION, a 5-byte IP", and marking the previous field as legacy/deprecated.
In fact, it's quite convenient: since we're all on 64-bit now, sockaddr_in would largely fit in an INT64, both the IP itself and the other elements of the struct.
5 bytes for the sin_addr field is enough to store addresses up to 999.999.999.999.
That leaves you 3 bytes to store the port, etc.
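A minimal sketch of that layout in C, assuming a hypothetical IPV4_EXTENDED build flag; ipx_addr_t and the pack/unpack helpers are illustrative names, not an existing API:

    #include <stdint.h>

    #ifdef IPV4_EXTENDED
    typedef uint64_t ipx_addr_t;   /* QWORD: room for 5 address bytes */
    #else
    typedef uint32_t ipx_addr_t;   /* DWORD: classic 32-bit IPv4 */
    #endif

    /* Pack a 5-byte extended address and a 16-bit port into one int64:
       bits 0..39 = address, bits 40..55 = port, bits 56..63 spare. */
    static inline uint64_t ipx_pack(uint64_t addr40, uint16_t port)
    {
        return (addr40 & 0xFFFFFFFFFFULL) | ((uint64_t)port << 40);
    }

    static inline uint64_t ipx_addr(uint64_t packed) { return packed & 0xFFFFFFFFFFULL; }
    static inline uint16_t ipx_port(uint64_t packed) { return (uint16_t)(packed >> 40); }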
The networking-API guys could be drinking cocktails at the bar by now if they had just changed these types.
You get backward compatibility with a small effort for a great impact, and this is beautiful.
It's actually beneficial for the majority of developers.
From the developer of Windows, to the developer of Age of Empires, to the developer of a CRUD web app (who stores IP addresses as a string or as an int), nobody would struggle much to port to int64.
Less than having to build a whole new IPv6 experience, anyway.
In practice, for client apps, at the time you open a new socket it doesn't matter to the developer whether the lib wants an INT32 or an INT64, since the type is automatically cast.
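To illustrate that claim (net_connect here stands in for the library call, it's not a real API), the same call site compiles against either prototype thanks to C's implicit integer conversions:

    #include <stdint.h>
    #include <stdio.h>

    #ifdef IPV4_EXTENDED
    typedef uint64_t net_addr_t;   /* widened prototype */
    #else
    typedef uint32_t net_addr_t;   /* legacy prototype */
    #endif

    static int net_connect(net_addr_t addr, uint16_t port)
    {
        printf("connect to %llx:%u (sizeof addr = %zu)\n",
               (unsigned long long)addr, (unsigned)port, sizeof(net_addr_t));
        return 0;
    }

    int main(void)
    {
        uint32_t legacy = 0x05060708;    /* 5.6.7.8 as a DWORD */
        return net_connect(legacy, 80);  /* widens implicitly under -DIPV4_EXTENDED */
    }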
time() had a similar situation: we migrated by adding new bytes; we didn't redefine the concept of time.
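You can see that precedent on any modern 64-bit Linux, where time_t grew under the typedef and old call sites recompile unchanged:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);  /* same call as in 1990 */
        printf("sizeof(time_t) = %zu\n", sizeof(time_t));  /* 8 on 64-bit glibc */
        printf("now = %lld\n", (long long)now);
        return 0;
    }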
From a developer-experience standpoint it's: "link to the latest version of the network library; oh, by the way, connect() accepts an int64", and remove the UI restriction of 255.
It could even be possible to give compatibility to very old software whose source code we've lost, by overriding the network layer with LD_PRELOAD or equivalent, and patching those binaries by manually NOPing the right JGE instruction (the asm for ">=") that checks whether we are over 255.
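The LD_PRELOAD half of that is a well-known technique; here's a sketch of an interposer that wraps connect(). The rewrite policy itself, i.e. what to put in sin_addr, is the hypothetical part, and the JGE/NOP binary patching is not shown:

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    typedef int (*connect_fn)(int, const struct sockaddr *, socklen_t);

    int connect(int fd, const struct sockaddr *sa, socklen_t len)
    {
        static connect_fn real_connect;
        if (!real_connect)
            real_connect = (connect_fn)dlsym(RTLD_NEXT, "connect");

        if (sa->sa_family == AF_INET && len >= sizeof(struct sockaddr_in)) {
            struct sockaddr_in patched = *(const struct sockaddr_in *)sa;
            /* ...rewrite patched.sin_addr to a bridge address here... */
            return real_connect(fd, (struct sockaddr *)&patched, sizeof(patched));
        }
        return real_connect(fd, sa, len);
    }

Build with gcc -shared -fPIC -o hook.so hook.c -ldl and run LD_PRELOAD=./hook.so ./legacy_app.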
So you need to send a message from your host 5.6.7.8 to one of these newly enabled hosts, say 500.600.700.800. You update the software on your host, your target's ISP is updated, your target updates, and we'll even hand-wave and assume your ISP is updated, despite it apparently having enough legacy addresses to allocate you one.
The message goes out to your ISP's router, which sends it to their upstream ISP, who looks at the IP packet, doesn't understand whatever header you've shoved the extended address into, and discards it. Then what's in your standard, backwards-compatible 32-bit field? The start of the address? Does your packet go to some other random host on the internet? A placeholder address like all 0s? Does your message get discarded?
How do you convince that middleman to update their hardware? They get no benefit from it. This is the situation IPv6 was in for decades, until there literally were not enough IPv4 addresses, which finally lit a fire under companies to start enabling it.
(I'm not pushing this idea to the max; I mean, IPv6 is here now, so we'll just go with it. This is for the mental and engineering exercise.)
To answer your question: in my model, the legacy IPv4 field contains the IP addresses of "IPv4 to IPv4 Extended" bridges.
Let's imagine you want to connect to [example.com]:
Clients who speak IPv4 Extended and whose ISP is compatible get the IPv4 Extended answer:
425.223.231.123 A+ example.com
and connect directly to it.
Clients who speak IPv4 Extended but don't have a compatible ISP add that extra IPv4 Extended header and talk to the bridges:
425.223.231.123 A+ example.com
34.23.12.2 BR example.com (the bridge)
Clients who speak only legacy IPv4 don't have to think about IPv4 Extended at all, since they will go through the usual layer-7 (typically HTTP) reverse proxy, or routing based on rules (ip/port pairs). A sketch of the client-side choice follows below.
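Here's that sketch: how a client might pick its destinations under this scheme. The A+/BR lookups, the 10-bits-per-group address encoding, and the all-zeroes placeholder on the native path are my assumptions, not anything specified above:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t ipx_addr_t;   /* 40-bit extended address in an int64 */

    /* Stand-ins for lookups of the invented A+ and BR record types. */
    static bool isp_speaks_extended(void) { return false; }   /* demo value */
    static ipx_addr_t resolve_a_plus(const char *host)
    {
        (void)host;   /* 425.223.231.123, assuming 10 bits per group */
        return ((uint64_t)425 << 30) | ((uint64_t)223 << 20)
             | ((uint64_t)231 << 10) | 123;
    }
    static uint32_t resolve_bridge(const char *host)
    {
        (void)host;   /* 34.23.12.2 as a classic DWORD */
        return (34u << 24) | (23u << 16) | (12u << 8) | 2u;
    }

    /* Picks what goes in the legacy 32-bit destination field; the
       extended destination always rides in the new header field. */
    static uint32_t pick_legacy_destination(const char *host, ipx_addr_t *ext)
    {
        *ext = resolve_a_plus(host);
        if (isp_speaks_extended())
            return 0;                 /* native path: legacy field unused */
        return resolve_bridge(host);  /* legacy field points at the bridge */
    }

    int main(void)
    {
        ipx_addr_t ext;
        uint32_t legacy = pick_legacy_destination("example.com", &ext);
        printf("legacy dst = 0x%08x, extended dst = 0x%010llx\n",
               legacy, (unsigned long long)ext);
        return 0;
    }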
Cloudflare does such large-scale reverse proxying; it works fine in practice.
If someone has an incentive to run such bridges or reverse-proxy solutions, first of all it's you, to save your precious IPv4 addresses.
To the end user, the promise is: "you will connect faster to the internet if you are on native IPv4 Extended" (because you skip these intermediate bridges).
We actually have a nice mechanism we could reuse for knowing which bridges to use: reverse DNS lookup.
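The lookup side of that already exists; getnameinfo() is the standard reverse-lookup call, and only the convention of publishing bridge names in PTR records would be new:

    #include <stdio.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        inet_pton(AF_INET, "34.23.12.2", &sa.sin_addr);  /* the bridge IP */

        char host[1025];
        if (getnameinfo((struct sockaddr *)&sa, sizeof(sa),
                        host, sizeof(host), NULL, 0, NI_NAMEREQD) == 0)
            printf("PTR answer: %s\n", host);  /* e.g. bridge.example.com */
        return 0;
    }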
> In practice, for client apps, at the time you open a new socket it doesn't matter to the developer whether the lib wants an INT32 or an INT64, since the type is automatically cast.
A lot of networking gear is far closer to an ASIC than a general-purpose CPU, so you can't "just change it to int64". They were built to process 32-bit addresses, and are unlikely to be able to swap to 64-bit without enormous performance penalties.
E.g. routing tables would balloon in size, which in practice means that you can store far fewer routes. Ignoring changes in the size of the netmask, 64-bit addresses take twice the space of 32-bit ones, so your route tables hold half the routes they used to.
The hardware refresh requirements are a big part of the reason why IPv6 rollout is so slow, and your proposal doesn't avoid that. Getting the software side of things to play nice has always been the easy part of this, even in IPv6.
> It could even be possible to give compatibility to very old software whose source code we've lost, by overriding the network layer with LD_PRELOAD or equivalent, and patching those binaries by manually NOPing the right JGE instruction (the asm for ">=") that checks whether we are over 255.
In IPv6 land, you just encapsulate IPv4 in IPv6 [1]. It's a lot cleaner than jankily trying to override instructions, especially when the relevant code may run on your NIC rather than your CPU and require kernel patches (or, god forbid, custom NIC firmware) to implement.
And what about the protocol bytes that go over the wire - you know, the most important and hardest-to-change part?
There've been several proposals to make "IPv4 but bigger addresses". All of them are just as hard to deploy as IPv6. You still need to upgrade all your routers and you still need to run two parallel networks.
If it's going in the same spot in the packet header as the current IPv4 address, how do you make sure that the 20-30 routers, owned by 3 different companies, that are likely to sit between your computer and the destination all behave in a way consistent with moving the packet closer to the destination?
(If they don't, you've just made a version of IPv6 that is worse: it's missing the last 30 years of IPv6 implementation.)
It's written above: the bridge's address goes in the "legacy" IPv4 destination field, and that bridge can be figured out by looking up the reverse DNS entries on an IPv4 Extended IP, until the user is natively on an IPv4 Extended network.
This brings the packet closer to the destination.
The new address goes into the Options field; you can store a lot of data there (the IPv4 header can grow to 60 bytes, which leaves up to 40 bytes of options, and we actually need only 1 or 2 bytes).
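For concreteness, such an option could look like this (the option number and layout are invented, not IANA-assigned):

    #include <stdint.h>

    /* Hypothetical IPv4 option carrying the extended destination.
       IPv4 allows at most 40 bytes of options (60-byte max header
       minus the 20-byte fixed part), so 7 bytes fits easily. */
    struct ipx_ext_dst_option {
        uint8_t type;      /* invented option number, e.g. 0x9E */
        uint8_t length;    /* 7 = type + length + 5 address bytes */
        uint8_t addr[5];   /* the 40-bit IPv4 Extended destination */
    } __attribute__((packed));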
Reminder: the goal is to add one byte to get more IP addresses, not to rewrite the whole internet.
Here it looks like the guys wanted to fix the IP allocation problem, and then they went all-in and decided to rewrite everything at the same time.
That's OK, and even a good idea in theory, but network administrators don't like being pressured "in an emergency" into upgrading to such a solution.
Practice shows that people would rather do NAT than IPv6.