Okay, if we're really starting to do this, then I guess I have to figure out how to upgrade all my code to deal with IPv6. I use the BSD sockets interface for most things.
I assume we'll be living in a mixed IPv4/IPv6 world for a while. Who's got a tutorial for making a connection to a host using IPv6 if available, but falling back to IPv4 if it isn't?
See the getaddrinfo(3) man page; that should be almost all you need. Writing mixed v4/v6 code that way actually seems easier than a v4-only solution with gethostbyname(3) and building a struct sockaddr by hand.
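Roughly, it looks like the sketch below - not production code, and "example.com"/"80" are just placeholders for whatever host and service you actually want. getaddrinfo() returns a list of addresses ordered by the system's address-selection policy (RFC 3484/6724), so IPv6 entries normally come first when you have working IPv6, and walking the list gives you the IPv4 fallback for free:

    /* Sketch: connect to a host over IPv6 or IPv4, whichever works first. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int connect_to(const char *host, const char *service)
    {
        struct addrinfo hints, *res, *ai;
        int fd = -1, err;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family   = AF_UNSPEC;    /* accept both AF_INET6 and AF_INET */
        hints.ai_socktype = SOCK_STREAM;  /* TCP */

        err = getaddrinfo(host, service, &hints, &res);
        if (err != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return -1;
        }

        /* Try each returned address in order until one connects. */
        for (ai = res; ai != NULL; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd == -1)
                continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
                break;                    /* connected */
            close(fd);
            fd = -1;
        }

        freeaddrinfo(res);
        return fd;                        /* -1 if every address failed */
    }

    int main(void)
    {
        int fd = connect_to("example.com", "80");
        if (fd == -1)
            return 1;
        /* ... read/write on fd ... */
        close(fd);
        return 0;
    }

The same code handles v4-only, v6-only, and dual-stack hosts without any #ifdefs, which is the main win over the old gethostbyname() approach.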
The IPv6 addressing architecture (RFC4291) requires all unicast addresses always have a prefix length of 64 bits. Using something other than a /64 will break a number of IPv6 features such as neighbor discovery, secure neighbor discovery (SEND), privacy extensions, mobile IPv6, embedded-RP (multicast), etc.
The current IPv6 address assignment guidelines call for allocating a /64 only when it is known that one and only one subnet is needed, otherwise a /56 should be allocated to small sites that are "expected to need only a few subnets over the next 5 years", or a /48 for larger sites. (ARIN policy 6.5.4.1)
--- end quote ---
This is why all those purely bit-counting "number of atoms in the universe" and "79 octillion times" comparisons are wrong. The honest comparison is a single IPv4 address against something ranging from a /64 to a /48 IPv6 allocation - a /64 is the minimum per L2 broadcast segment, so it's the minimum per-customer allocation (unless the ISP is really greedy). Also, I don't know the reasoning, but RIPE NCC itself treats a /32 IPv6 block as roughly equivalent to a /21 IPv4 one when calculating the LIR billing score - http://ripe.net/membership/billing/calculation.html
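To spell out the arithmetic: IPv4 has 2^32 (about 4.3 billion) addresses in total, while IPv6 has 2^64 (about 1.8 x 10^19) possible /64 networks and 2^48 (about 2.8 x 10^14) possible /48 sites. Still astronomically more than IPv4, but the fair comparison is networks against addresses, not 2^128 against 2^32.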
Well, we ran out of 4 billion, and much of the world isn't really online yet. Then factor in everyone having a laptop, a smartphone, a gaming console, an HTPC, etc. There are more servers and infrastructure too.
It's not that much overkill either, only 4x more bits.
(recycling a post I just made on the same topic on /.)
In short:
1. An ISP would be allocated a /32. This sure sounds like a lot (2^96 addresses!), but it's only the same burden on the address space as a single address in IPv4! Even if there were millions of ISPs, this would be no strain on the address space at all. We'd lose the ability to route before we'd exhaust the addresses.
2. Then an ISP can assign a /64 to each customer. Again, this sounds like a bit much, but it's a tiny fraction of the ISP's space. Even if the ISP had a billion customers like this, it would be completely fine.
3. This leaves a huge amount of space available for the customer to segment however they please.
4. In the simplest case you'd use a single network with the 48-bit Ethernet MAC address embedded in the host part, removing the need to do any address assignment (see the sketch after this list). Alternatively you could segment the space a bit more and run a transnational corporation's network inside the /64. Up to you!
2^128 is a really REALLY big number. No reason not to spread out a little bit.
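If anyone's curious how the MAC ends up in the address (point 4 above), it's the modified EUI-64 scheme from RFC 4291 Appendix A, which is what stateless autoconfiguration traditionally uses. A rough sketch, with a made-up MAC and the 2001:db8::/32 documentation prefix:

    /* Build a 64-bit interface ID from a 48-bit MAC (modified EUI-64). */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t mac[6] = { 0x00, 0x1a, 0x2b, 0x3c, 0x4d, 0x5e };  /* example MAC */
        uint8_t iid[8];

        iid[0] = mac[0] ^ 0x02;   /* flip the universal/local bit */
        iid[1] = mac[1];
        iid[2] = mac[2];
        iid[3] = 0xff;            /* 0xFFFE wedged into the middle */
        iid[4] = 0xfe;
        iid[5] = mac[3];
        iid[6] = mac[4];
        iid[7] = mac[5];

        /* Append this to the /64 prefix the ISP handed out, e.g.
         * 2001:db8:1234:5678::/64 -> 2001:db8:1234:5678:21a:2bff:fe3c:4d5e */
        printf("interface id: %02x%02x:%02x%02x:%02x%02x:%02x%02x\n",
               iid[0], iid[1], iid[2], iid[3], iid[4], iid[5], iid[6], iid[7]);
        return 0;
    }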
I know you're joking, but I think the point is that the numbers are so large that we really don't have the tech right now to exhaust the space. I remember hearing in the past that 2^128 is close to or more than the number of molecules (or was it atoms?) in the known universe - it isn't, not even close, but at roughly 3.4 x 10^38 it's still far beyond anything we can actually use. We're not at the point where we can assign an address to every atom and make use of it... yet, at least.
Fun fact: the square root of the number of available IPv6 addresses is about 18 quintillion (sqrt(2^128) = 2^64, which is about 1.8 x 10^19). In other words, there are 18 quintillion blocks of 18 quintillion addresses each. We're not going to run out anytime soon.
Not sure why the down-votes. I was surprised that they were allocating 64 bits of address space to each user as well, but I don't know the details of IPv6. I think I'd be OK with 16 bits myself. :)
Effectively unlimited is better than limited. It turns out that the difficulty of supporting an 18-quintillion-entry address space is not significantly different from the difficulty of supporting a 4-billion-entry one. NTFS5, for example, can in theory support file sizes up to 18 quintillion bytes, and it's a decade old.
> NTFS5, for example, can support file sizes up to 18 quintillion
> bytes, and it's a decade old.
IIRC, NTFS has loads of advanced features, but Microsoft is slowly "turning them on" by adding support for them in Windows. I'm sure Windows would blow up if you got anywhere near that limit.
Is anyone else surprised that Comcast has actually gotten off their ass and decided to start going ahead with this? I guess my expectation was that Verizon would be headlining IPv6 and not Comcast.
FWIW, we hit 256 GB and got a call threatening that if we passed it again within 6 months we would be banned from Comcast for 12 months. (Someone in the house accidentally left a torrent seeding for 3 weeks.)
Perhaps whether they call you after you hit the limit depends on why you went over. It should be easy for them to determine whether the reason you went over is a torrent (even if they can't figure out which torrent), video streaming, or something else.