Reverse DNS IPv4 Map (reversedns.space)
202 points by elisaado 5 months ago | 50 comments



Thank you for posting!

There is also a presentation about it: https://presentations.clickhouse.com/meetup85/app/


Cool display!

Small feature idea: "find my ip" which zooms to/selects the apparent ip of the current visitor.


Ha, boring :) I just tried to find a couple of addresses I use/know on the map and it was a nice challenge.

For octets closer to the center of their highest-order square it was quite easy by just trial and error. But for octets at the edge of their square, e.g. 149, I admit to having used pen and paper...


On this slide: https://presentations.clickhouse.com/meetup85/app/#32

It says the intent was to make it similar to https://xkcd.com/195/ , i.e. with a space-filling curve that preserves grouping, but the actual implementation doesn't seem to do that. For example, the upper-left squares are:

    0 1
    2 3
As opposed to

    0 1
    3 2
Also, 127.0.0.1 is near the right center edge of the map, while 128.0.0.1 is next row down near the left center edge.
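
For the curious, here is a minimal Python sketch of the difference (illustrative only, not the site's actual code; the function names are mine). Z-order deinterleaves the bits of the index and produces the "0 1 / 2 3" layout above; a Hilbert curve rotates sub-squares so that consecutive indices stay adjacent:

    def morton_d2xy(order, d):
        # Z-order: deinterleave the bits of d into x and y.
        x = y = 0
        for i in range(order):
            x |= ((d >> (2 * i)) & 1) << i
            y |= ((d >> (2 * i + 1)) & 1) << i
        return x, y

    def hilbert_d2xy(order, d):
        # Classic Hilbert-curve index-to-coordinate walk.
        x = y = 0
        s = 1
        while s < (1 << order):
            rx = 1 & (d // 2)
            ry = 1 & (d ^ rx)
            if ry == 0:              # rotate the quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x, y = x + s * rx, y + s * ry
            d //= 4
            s *= 2
        return x, y

    # Z-order visits (0,0) (1,0) (0,1) (1,1): the "0 1 / 2 3" layout.
    # Hilbert visits (0,0) (0,1) (1,1) (1,0): every step stays adjacent.
    print([morton_d2xy(1, d) for d in range(4)])
    print([hilbert_d2xy(1, d) for d in range(4)])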


For the visual patterns on the map, that does not make a major difference.

* Obviously, if all 4 are the same color, it makes no difference.

* If all 4 have different colors, the pattern still does not change; it's just randomly colored.

* In the case of 2 colors, it's highly likely that the first half will have one color and the second half another. Nothing changes.

* In the less likely case that 3 have one color and the remaining one is different, the resulting pattern is a triangle. So the direction of the diagonal will flip, but I don't think the overall impression will change.

In higher-level squares, replace "color" with "pattern"; still no significant change in appearance. Of course, the two lower squares always swap their locations.


Maybe the image would look just as random if a space filling curve were used, but I was hoping it would resemble the XKCD image somewhat.

I have tried to rearrange the pixels to see if there is any resemblance, here:

https://gist.github.com/uguu-org/5dadec83394d9ba5ea88e0f6e25...


I accidentally stumbled on your project just yesterday evening after reading your comment here https://news.ycombinator.com/item?id=39792381 (Hivekit, hexagons and Hilbert spaces). I posted it on a network operator IRC channel, and now it's on HN again, nice to see how that works.

Something to add, perhaps, is some sort of map marker and compass per zoom level? What part of the IPv4 space am I seeing, and what are the neighbours? Perhaps you've seen https://map.bgp.tools/ as well?


I want to make it updateable, so we can see how the DNS records change over time. Also, I want to map various scans, similar to https://blog.benjojo.co.uk/post/scan-ping-the-internet-hilbe...


Thanks for the pointer to https://opendata.rapid7.com/; it has all kinds of fun things in it!

> Total size: 77.7 TB

biiiig daaaaaata


I can see my house from there!


Nice! A public IPv4 is basically the equivalent of having a swimming pool in Internet real estate these days :)


MIT's block is bigger than most countries' and looks interesting: https://uwe.iki.fi/public/mit.png


Also goes to show how much is taken up by AWS... but I see a distinct lack of cloudflare.


Sure. But AWS has several "smaller" massive blocks in several places, without the kind of regular patterns MIT shows in a single network. With a few exceptions for those adding their own reverse entries, and some smaller CloudFront blocks.


It would be cool if there were a blog post on it.


I clicked on a random one, and it happened to be the website of a business a mile away. What are the chances?


What (if anything) do the colours of the non-black squares represent (or is there no scheme)?



Oh, what a great way to connect burner mining drills to coal in Factorio!



Is SoftBank buying up IPs as an investment?


One could draw art onto this, by using different domains to color it. Would be expensive though.


It appears some blocks near (but not in) RFC 1918 space are missing?


I was kinda hoping that the biggest patch would be AWS.


Why is there a giant blackout after 224.?


multicast, then experimental blocks


Yes, I noticed that recently when writing a unit test with randomly created IP addresses. A significant portion were deemed non-routable by the code under test, so I had to limit the first octet to < 224.

But is that huge allocation really extensively used in real life? How?

Or could a significant part just be reallocated for new unicast usage?
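
For reference, Python's standard ipaddress module already encodes these carve-outs (a small illustration, not the actual test code):

    import ipaddress

    # The stdlib knows which blocks are special-purpose:
    for ip in ["8.8.8.8", "192.168.1.1", "224.0.0.1", "240.0.0.1"]:
        a = ipaddress.ip_address(ip)
        print(f"{ip:>13}  global={a.is_global}  private={a.is_private}  "
              f"multicast={a.is_multicast}  reserved={a.is_reserved}")

    # 224.0.0.1 is multicast and 240.0.0.1 is in the reserved
    # "future use" block -- neither is routable as ordinary unicast.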


The 224.0.0.0/4 multicast space could probably have been made smaller, at least down to a /8, if follow-on standards had been written with that kind of size in mind from the beginning. But at this point that'd be like saying "We're going to change 10.0.0.0/8 into 10.0.0.0/16 to free up space everyone, let me know when you've all stopped using it. Thanks!". The space is already in sparse use in corporate networks around the world; you're not going to get everyone to just up and change internal networks to fit less sparsely. If it were that easy, IPv6 adoption would be 100% instead of 45%.

240.0.0.0/4 could conceivably be assigned. It's not really in use, as it was actually reserved for "future use" from the beginning. That said, if you want to use that space publicly in any reliably usable form, you've still got to convince nearly the entire internet to update their stuff to support/allow it. On this front I'm actually kind of against opening it up even just for internal use, as it'd just create another headache to check for and not be particularly reliable. For "extra internal space", 0.0.0.0/8 was in a similar situation and has already been opened up. If that's not enough for you, then you desperately need to move on from IPv4 already.


Well, that's true. There are probably users using it in violation of the spec, relying on the assumption that it would do no harm.

Was it 1.1.1.1 that had quite a few problems at the beginning of its operation, or some similar address? I vaguely remember reading a blog post at the time.


For 240.0.0.0/4 it's not so much existing users violating the spec as existing in-spec software and hardware not allowing it. E.g. even if you patched your Linux box and DHCP server to support 240.0.0.0, your hardware router might not forward the packet between zones, and your Windows clients might not accept the assignment. In the public case, your ISP might not accept it in their router hardware or filters, and even if they did, it doesn't mean the other 100,000 entities on the internet you're trying to talk to/through do. The same is all true of 224.0.0.0/4 as well, plus the fact that there is existing in-spec use for multicast.

1.1.1.1 was never reserved, but it was unassigned until 2010. By that point it had been used improperly so much that it received massive amounts of garbage data when advertised (and still does to this day). It's just that "massive" turns into "quite tiny" in the context of a giant CDN like Cloudflare, so they were able to salvage it.


Yes, it was 1.1.1.1. I remember the initial blog post. Before even turning DNS on, they just monitored traffic patterns and types to make sure they could handle it.


That's actually exactly the reason why Cloudflare got it. They were the only ones at that point who could handle all the garbage that was sent to it, and were willing to deal with it at their own expense.


> could probably have been made smaller

A lot of reserved ranges could've been made smaller. 127.0.0.0/8 is JUST loopback; that's over 16 million IPs just for loopback! 0.0.0.0/8 is also just absurd.

224.0.0.0/4 and 240.0.0.0/4 are also crazy... over 500 million IPs.

I probably wouldn't care about it if we didn't have IPv4 exhaustion (which, in my opinion, is at least partially the US govt's fault, because they're hoarding 200+ million IPs).
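
The sizes are easy to check (a quick sketch with Python's ipaddress module):

    import ipaddress

    # Exact sizes of the reserved blocks mentioned above:
    for net in ["127.0.0.0/8", "0.0.0.0/8", "224.0.0.0/4", "240.0.0.0/4"]:
        n = ipaddress.ip_network(net)
        print(f"{net:>12}: {n.num_addresses:>11,} addresses")

    # 127.0.0.0/8 alone is 16,777,216 loopback addresses; the two /4
    # blocks together are 536,870,912, about 1/8 of the IPv4 space.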


More than 40 years later it can seem so. But with these, you were basically able to just check the first byte; perhaps it was an optimization of some sort. Reserving ~1/8 of the available address space for other use cases or future needs is reasonable and not out of line. ASCII also uses just 7/8 of a byte to encode letters, and thanks to this we were able to make UTF-8 compatible enough. IPv4 could have been 48-bit just like MAC addresses, and we wouldn't be having this conversation. Nowadays a /48 prefix (in IPv6) is basically the smallest we hand out.

Of course, we could've used /80 as the smallest possible prefix for auto-negotiation, leaving more room for playing with prefixes, but it is what it is. Nobody sane will wait another 20-25 years before any kind of change is widespread in all the stacks. Even fewer people will care about the reserved/multicast space, 0/32, or 127.0.0.0/24, because it is so much work for so little benefit: it will never be supported by all the legacy systems that care about IPv4, since for them there is no IPv6. We should all concentrate on IPv6 and get on with the colossal migration. Even HN supports IPv6 as of this year!


Thanks for creating an account just to answer my question, I guess.


The solution to IPv4 exhaustion: it is basically unused multicast and "reserved for future use" space.

The only thing standing in our way is the IPv6 proponents, who know address space exhaustion is the only thing that will drive adoption of an otherwise shitty idea.


Absolutely not the solution, a temporary relief at best. Better to bury this idea.


This is the "force exhaustion to drive adoption" approach.

If your family is starving, do you look over at a plate of sandwiches and say "we'd better not eat those, it's just temporary relief", or do you use the available resources to solve the problem?


Can you elaborate on why IPv6 is so shitty?


I'd assume legacy support isn't there for IPv6. At least for our environments, it wasn't feasible.


Could you please elaborate? What were some of the major blockers for you?

In some cases, you can "just" add a reverse proxy in front of the legacy services, or use some kind of NAT64/DNS64 setup on the server side. Internal systems may expect IPv4-only addresses for some kind of accounting, configuration, etc. But internal systems that you cannot evolve to support current requirements are a burden anyway. There might be other debt that keeps getting postponed because of these things.

I have had a client tell me that some services cannot get a different IPv4 because they don't know about all the things that only know it by its IP instead of its DNS name. I am pretty sure that, as a man-made software system, their system could switch to a different address with some preparation, but I am not going to push for it too hard. (It would allow improving the segmentation of their network, in turn making it easier to firewall things in a simpler manner. Also, with L3 switches now commonplace, it is no longer a question of performance.)


Now do IPv6


Yeah, just go ahead, scan PTR records for a mere 2^120 addresses.

Even if we scan just the first address of each /64, it's still about 2^56 probes. It's unlikely anyone is ever going to do it.

This is another thing I like about IPv6: it makes mass address scanning completely useless.
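
Back-of-the-envelope (using the 2^56 estimate above and an assumed probe rate):

    # One probe per /64 across ~2^56 candidate prefixes, at an
    # (assumed) aggregate rate of 10 million probes per second:
    probes = 2 ** 56
    rate = 10_000_000                         # probes/second, assumed
    years = probes / rate / (3600 * 24 * 365)
    print(f"~{years:,.0f} years")             # roughly 228 years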


I'm currently mass-scanning IPv6, and so are others. v6 results have been on Shodan for, I think, 7 or 8 years at least?


Are you aware of what SLAAC does? For the most part, your scan results are going to be useless in <24h.


https://www.rfc-editor.org/rfc/rfc7707#section-4

But in general, devices using SLAAC are typically not the things you are looking for when scanning.


Hosts with randomized addresses are likely to have auto-generated PTR records, or none at all, so for the purpose of rDNS resolution those are not a big issue.

And that's a detail, but SLAAC as in RFC 4862 is deterministic. The randomization is introduced by the privacy extensions in RFC 4941.
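
Concretely, the deterministic form derives the interface identifier from the MAC address (modified EUI-64, RFC 4291 Appendix A). A minimal sketch, with a made-up example MAC:

    def eui64_iid(mac: str) -> str:
        # Modified EUI-64: flip the universal/local bit of the first
        # byte and insert ff:fe between the two halves of the MAC.
        b = bytes(int(x, 16) for x in mac.split(":"))
        eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
        return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

    # A host with MAC 00:11:22:33:44:55 always gets the same suffix:
    print("fe80::" + eui64_iid("00:11:22:33:44:55"))
    # -> fe80::211:22ff:fe33:4455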


How large of a prefix are you scanning and are you preseeding your scans?


Neeeerds


You are on the wrong website if you think this is nerdy.



