
A subtle detail from the article is that address prices peaked in 2021 at $60 and have steadily decreased to $35. Where do they go from here? Is this a proxy for the tech correction?



Ideally the long-term value is $0 once IPv4 becomes irrelevant, but how long that takes, and the prices along the way, are anyone's guess. Even explaining past prices is akin to stock market voodoo.


Not a fan of IPv6 evangelizing, least of all at this point. Just give it up.


What is your solution?


> What is your solution?

Understand very little about the problem space and complain about the best-compromise solution that the people who do know what they're talking about came up with. It's a very comfortable position to be in, I recommend it to everyone.


I mean, there are several existing solutions: NAT, IPv4 rationing, IP leasing, coexisting with IPv6.

Things look fine to me. The reality is dual stack; a full IPv6 transition is idealistic and pointless.

I'm certainly not the first IPv6 critic, and you may notice the nuance that I didn't advocate for not using it; I just don't advocate dropping IPv4. Furthermore, it doesn't matter whether I advocate it or not: like a train, IPv4 keeps going, and it's only IPv6 that has to be advocated for.


Not OP nor a recognised network expert but here is my suggestion:

IPv10: the next unassigned IP version number, which is conveniently 4+6.

Basically something that doesn't break compatibility with IPv4 and doesn't require that dual-network-stack nonsense.


How do you not break compatibility with IPv4 while also getting more bytes in the address?


Maybe just don't? Let it be IPv4 with more bits. The software is already there, so dual stacking isn't so bad, and adoption might actually be quick if people didn't have to learn much to implement it.


>Let it be IPv4 with more bits

Then it's not IPv4 and is not compatible with IPv4.


It'd be compatible (or trivially close) with most of the software required to make it work. Which is to say, the cost of implementation would be low - not the case with IPv6.


country codes </s>


Well, again, I'm not a network expert, but perhaps we could look at the 240.0.0.0/4 "reserved for future use" block and add more address bytes in the payload or something. It's not going to be elegant, but IPv6 is kinda elegant and it failed.


This is the most hilarious "I don't understand anything about the problem, therefore I don't understand how it's hard" comment I've seen this week.

> perhaps we could look at the 240.0.0.0/4 reserved for future use block

What's the current rate of v4 address space consumption? How long will this block last?

> and add more address bytes in the payload or something.

This is, by definition, not backward compatible.
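
For a sense of scale, a back-of-envelope sketch (the ~200M addresses/year burn rate is my own rough assumption about what the RIRs were allocating just before exhaustion, not a figure from the article):

    import ipaddress

    block = ipaddress.ip_network("240.0.0.0/4")
    print(f"{block.num_addresses:,}")         # 268,435,456 addresses in 240.0.0.0/4

    # Assumed burn rate: on the order of 200 million addresses/year,
    # roughly what the RIRs were handing out in the years before exhaustion.
    yearly_burn = 200_000_000
    print(block.num_addresses / yearly_burn)  # ~1.3 years of headroom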


I may not have been clear enough in my suggestion. The idea would be to use this unused block as a special block, not to fill it up with normal IPv4 allocations.

Think of my suggestion as some kind of NAT-PT at scale, with a better marketing name and user experience.

The problem is indeed hard: no one has managed to find a solution at scale in three decades.


No matter what change you make, or how you make it, if you are making more than 4B addresses routable, then any existing IPv4 device will not be able to route some addresses, so you will have caused a split in the internet.

This is a fundamental and unresolvable problem with "making it backwards compatible"
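
To make that concrete: the source/destination fields in an IPv4 header are fixed 32-bit integers, so an address beyond 2^32 - 1 simply has no encoding an unmodified IPv4 host or router could parse. A minimal sketch:

    import struct

    # An IPv4 header's source/destination fields are exactly 32 bits wide,
    # so only 2**32 distinct addresses can ever appear on the wire.
    MAX_V4 = 2**32 - 1

    print(struct.pack("!I", MAX_V4).hex())   # 'ffffffff' - fits

    try:
        struct.pack("!I", MAX_V4 + 1)        # one address past the IPv4 ceiling
    except struct.error as err:
        print("not encodable in an IPv4 header:", err)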


Wouldn't NAT be an existing and well-used solution to this problem?


Even if we accept that NAT is the right solution, it's still pretty limited in how far it can extend the address space, since port numbers only give you two extra bytes of address space. And there are no further extra bytes to stuff somewhere else in a TCP or UDP packet header.
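
Back-of-envelope, and being generous (this sketch ignores reserved ports and the fact that a port identifies a concurrent flow rather than a host), NAT plus ports tops out around 48 bits:

    # Rough upper bound on NAT's reach: a 32-bit address plus a 16-bit
    # TCP/UDP port is at most 48 bits of "address", and each port names
    # a concurrent flow, not a permanently addressable endpoint.
    addresses = 2**32
    ports_per_address = 2**16

    print(addresses * ports_per_address)  # 2**48 ~ 2.8e14
    print(2**128)                         # IPv6 address space, for comparison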

Of course, we could extend the address space by further breaking the layering of routes, and baking in support for higher layer protocols into routers. We can certainly stuff more address information in HTTP headers, so the web could be extended to essentially arbitrary size by simply requiring routers to look not just at source and destination IPs and source/dest TCP/UDP port numbers, but also client and server HTTP headers. SIP looks a lot like HTTP, so the same solution could work there. TLS already has support for additional headers, so we could also do extra NAT at that layer.

Hell, AWS could then use a single IPv4 address and just rely on HTTP/SIP headers or TLS extension headers to know the actual destination! Of course, if you want to run another L7 protocol, tough luck - tunneling it is for you.


Yes I agree you would need to tunnel because the headers aren’t big enough.

If I had to guess the future, the industry will most likely go towards something like a few expensive IPv4 addresses owned by major cloud and internet providers, plus crazy recursive NAT setups everywhere. Because that works without breaking stuff.


NAT is the problem that IPv6 fixes. Think about the parent comment:

>if you are making more than 4B addresses routable then any existing IPv4 device will not be able to route some addresses, so you will have caused a split in the internet

This has basically already happened. We've massively extended IPv4 by stuffing extra address bits into TCP/UDP port numbers, and it means that any two devices behind NATs can't directly route to each other.
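
A toy sketch of that port-stuffing (addresses made up, purely for illustration), just to show why outbound works but two NATed hosts can't reach each other:

    # Toy NAT: multiplex many private (address, port) pairs behind one
    # public address by rewriting the source port on the way out.
    nat_table = {}               # (private_ip, private_port) -> public_port
    public_ip = "203.0.113.7"    # hypothetical public address (TEST-NET-3)
    next_port = 49152

    def translate_outbound(private_ip, private_port):
        global next_port
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return public_ip, nat_table[key]

    print(translate_outbound("192.168.1.10", 51000))  # ('203.0.113.7', 49152)
    print(translate_outbound("192.168.1.11", 51000))  # ('203.0.113.7', 49153)
    # An outside host only ever sees 203.0.113.7 and cannot initiate a
    # connection to either private host - the "can't route to each other"
    # problem described above.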


NAT has been more successful than IPv6 at fixing the same issue, the shortage of IPv4 addresses, but without breaking compatibility (well, at the cost of crazy hacks for weird protocols such as FTP).

Not being able to route directly doesn’t seem to be a major issue to me. It certainly requires more computing power in routers, but it also adds some safety and privacy by design.


> Not being able to route directly doesn’t seem to be a major issue to me.

Look at the bigger world around you.

I am, right now, involved in a major cloud migration. Having overlapping, constrained RFC1918 space and also having to NAT everything presents an enormous set of constraints and risks. It adds literally zero benefit.

Life would be infinitely easier, and we could provide so many more capabilities if everything could just have a routable IP address. Unfortunately, I'm not in charge of our addressing policy.

NAT is an awful, short-sighted hack that causes many more problems than it solves.


NAT doesn't fix the issue, it works around it. It means that hosting a home server costs extra money to get a static IP.


> The problem is indeed hard: no one has managed to find a solution at scale in three decades.

The problem is hard because, despite everyone's wishes, it's got nothing to do with technology. All migrations are about economics and incentives; IPv6's qualities as a design (it's a long, long way from perfect, but I'd argue it's good enough) are irrelevant.


Yeah, it certainly seems like that may have been peak pricing. This write-up has some more data on historical pricing (https://www.ipxo.com/blog/ipv4-price-history/). I've also heard folks pay quite a bit over the average price for novelty IP addresses, so perhaps that skewed the data? I'd love to be able to buy 2.2.2.0/23, or my favorite, 42.42.42.0/24.


Yeah, one example is Cloudflare and 1.1.1.1, though the story behind that is less about money and far more interesting. Apparently, APNIC had owned 1.1.1.1 for, basically, forever, but was never able to actually use it for anything because it caught so much garbage traffic. Cloudflare is one of only a handful of service providers that could announce the IP and handle the traffic, so in exchange for helping APNIC's research group sort through the trash traffic, Cloudflare hosts their DNS resolver there.

https://blog.cloudflare.com/announcing-1111/


I would really like to see the results of this research to understand what is going on there.



That’s pretty cool. I’d never thought about bogons and debogonizing before; it’s like chasing all the squatters off your property while more keep coming. You need some fat pipes and beefy servers to handle all the bogus traffic from machines trying to hit your address, and still be able to actually fulfill your purpose.

Makes sense now why Cloudflare would be one of the only companies that could handle it!


They only had a 10 Mbit link. Apparently 50 Mbit/s was the amount of traffic they received.

Just about anyone could have handled that, not just Cloudflare.


Cloudflare reported ~10 Gb/s when they first switched it on. The 10 Mb/s link was deliberately limited.


    The last public analysis was done in 2010 by RIPE and APNIC. At the time, 1.1.1.0/24 was 100 to 200Mb/s of traffic, most of it being audio traffic. In March, when Cloudflare announced 1.0.0.0/24 and 1.1.1.0/24, ~10Gbps of unsolicited background traffic appeared on our interfaces.
https://blog.cloudflare.com/fixing-reachability-to-1-1-1-1-g...


So, what happened to everything that expected 1.1.1.1 to error out and now is getting something?

(not worried about them, just curious)


Yeah it broke my use case. I used to run `curl --retry 9999 http://1.1.1.1` and since it didn't exit, the heat generated by the running curl process kept me warm in the winter. But now http://1.1.1.1 returns immediately, so I'm freezing!


You're obviously a fellow fan of 1172.[0]

[0] https://m.xkcd.com/1172/


I mean, for smaller routers that had static routes set for that subnet, it would probably just keep working - the issue being that trying to get to real addresses in the 1.0.0.0/8 network (or parts of it) wouldn't work.

If you were BGP peering then you'd probably get a real route into your local table though.

So yeah, some stuff would probably have just broken, but that's the risk you take using parts of the IP space you shouldn't be using!
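
Roughly speaking, the failure mode is just longest-prefix matching; here's a hypothetical table, purely for illustration:

    import ipaddress

    # Hypothetical routing table: a default route plus a stale static route
    # someone set because 1.0.0.0/8 "would never be used".
    routes = {
        ipaddress.ip_network("0.0.0.0/0"): "upstream ISP",
        ipaddress.ip_network("1.0.0.0/8"): "Null0 (blackholed as bogon)",
    }

    def lookup(dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        matching = [net for net in routes if addr in net]
        best = max(matching, key=lambda net: net.prefixlen)  # longest prefix wins
        return routes[best]

    print(lookup("1.1.1.1"))  # 'Null0 (blackholed as bogon)' - resolver unreachable
    print(lookup("8.8.8.8"))  # 'upstream ISP'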


Well, a lot of Cisco wireless engineers had to reconfigure their guest wifi captive portals.


Heh :)

To be honest, I feel as bad for them as I do for Hamachi, whose (otherwise quite nice, in that it was a spiritual predecessor to Tailscale!) overlay VPN service fell apart once 5.0.0.0/8 became publicly assigned.


Which is funny because half the time I still end up manually typing 1.1.1.1 and praying for a redirect...


They switched to 2.2.2.2


IPv1 is a cozy place


Possibly. It might also have been a bit of a panic because RIPE ran out of IPv4 addresses around that time and it was unclear back then how liquid the transfer market would be.


RIPE ran out in 2012, just after APNIC in 2011.

RIPE had a policy of severely rate-limiting allocations from their last /8, which is how they were able to continue allocating for an extra seven years. The other RIRs had no such policy.


IPv6 usage also went up 10% in that time

https://www.google.com/intl/en/ipv6/statistics.html



