The variant found by OP is apparently the very last option that my tool (https://lucb1e.com/randomprojects/php/funnip.php?ip=8.8.8.8) generates. These days, Firefox is a bit boring (okay, okay, I'll admit it's a good choice for security) and translates these at the first opportunity. Even hyperlinks are translated on hover in the 'status bar' (if we can still call it that). For mobile users, this is what it shows when you paste one of those addresses in Firefox: https://snipboard.io/kbLTso.jpg
Octal is a great way to mislead both human beings and software, and I kind of hope it gets removed by browsers as a result of this new attention. It’s one of those things that isn’t productive or useful in any way in our modern era and serves only to complicate with no benefit in return.
I would refuse any PR that used octal unnecessarily, and I generally ask people to use chmod's symbolic modes (e.g. u+w) rather than octal where possible, so that there's a lower risk of math errors and a higher chance of good review.
This is funny for me to read, because I converted a bunch of file-permission constants in the Linux kernel to octal values precisely because the symbolic constants are harder to read and there are multiple ways of arriving at the same value.
You wanna look up what a constant is and you find
#define S_IRUGO (S_IRUSR|S_IRGRP|S_IROTH)
or you could see 0444 and immediately know what it means.
I agree in cases where you're modifying existing permissions it's much better to do a `chmod u+w` than to replace the whole thing with octal. When you're defining permissions at the time of creation though, everyone can parse 0644 at a glance.
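To make that concrete: a minimal C sketch (mine, not kernel code) showing that the symbolic spelling and the octal literal are the same bits:

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        /* rw-r--r--, spelled two ways */
        mode_t symbolic = S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH;
        mode_t octal    = 0644;

        printf("%o == %o\n", (unsigned)symbolic, (unsigned)octal); /* 644 == 644 */
        return symbolic != octal; /* exit 0 iff they match */
    }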
a=rX,u+w isn't so bad (capital X sets execute only on directories and on files that already have an execute bit), but I don't know if I'd prefer it personally. In code where you can't use chmod syntax, I definitely prefer octal.
Oh, I didn't know what X did and made assumptions.
If you split file permission handling into two halves - defining initial permissions versus modifying something that already exists - I can see how it makes sense to do both with the same syntax, since modifying existing permissions is worse with octal.
Doesn't this just highlight that the PHP development ecosystem doesn't value quality much? What even is a "file" in the context of a web request? What about dependencies or logic defined in other files?
This is just bizarre, I can’t see a sane codebase where this would be preferable to going on GitHub and pressing “.”
> Doesn't this just highlight that the PHP development ecosystem doesn't value quality much?
As opposed to which web development ecosystem exactly? The only web development ecosystem with overall decent quality software that I could come up with is Java, and their understanding of quality is... enterprise-y.
Give me a mature PHP framework over an NPM dependency tree, a Python web framework, or Ruby on Rails any day. At least when looking for 'quality'. Relatively.
I agree with these points, although going back to the grandparent comment: the value of being able to show the source code of a controller in these frameworks by appending a query string is close to zero.
On the contrary, doesn't it highlight that the PHP development ecosystem values simplicity? That is, a simple application (which this is) can be contained within one file, rather than something requiring several folders, dependencies, and an 'init' command?
I don't understand your criticism and I suggest you might not either.
The irony of thinking files and folders are too much for a simple app, while also praising a feature that stems directly from PHP's MO of conflating the codebase's folder structure with the request path.
Edit: this reminds me, I was like this too at the beginning of my dev career. I too was completely in favor of this supposed "simplicity" of PHP; only much later, thanks to Rich Hickey's nice talk ("Simple Made Easy"), did I realize that I was confusing simplicity with ease.
Sorry, I don't understand your first point, even after reading it several times. I think I might have inferred what you meant by looking at the second (edited in) point, but I'm not sure.
Are you suggesting that it is bad that PHP applications often have a request path that relates to the folder structure?
In other words, are you suggesting that simplicity means an application should not have a request path that relates to the folder structure?
To give an example, are you saying it's a bad thing that example.com/profile/ loads /profile/index.php, rather than passing /profile through a single controller function to identify what code should be responsible for handling it?
The first approach actually seems like a pretty straightforward paradigm, and it's what most new programmers would expect. Adopting an MVC/routes method is more complex and arguably overkill for a simple application.
If that is what you are contending, it should be said that PHP does not require this approach, although it is often the preferred one because it doesn't depend on additional web server configuration.
I'm not suggesting, I'm saying that conflation is the mother of confusion. Conflating the request path with the file path is not a great idea, especially for new developers, who end up with a mental model of how web apps work that is completely irrelevant for the rest of their careers.
There's plenty of large PHP projects that adopt this paradigm. Is it really fair to say it will be completely irrelevant for the rest of their careers?
Also, let's not lose sight of the fact that this arose in the context of criticizing the model used to program a simple form. This is just a simple one-page form. More complex or abstract paradigms and design patterns are overkill.
You replace your “init” command with some VIMming of your nginx configuration.
Next in line is the claim that PHP doesn't even need anything like version control, because you can just copy files over SFTP.
PS: if your project is simple enough to fit in a single file I would argue that most of the time you may use absolutely anything (including a Google Sheet) and you would be equally happy with the results.
This program, "ip4dec", converts lists of IPv4 addresses to decimal and prints them as unsigned integers. Wrote this while experimenting with storing domain->ip mappings in a trie, such as https://github.com/tlwg/libdatrie
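Not ip4dec's actual source, but the core of the conversion is tiny in C: let inet_aton() pack the dotted quad, then print it in host byte order:

    #include <stdio.h>
    #include <arpa/inet.h>

    int main(void) {
        struct in_addr a;
        /* inet_aton() packs the quad in network byte order;
           ntohl() turns it into a plain 32-bit number. */
        if (inet_aton("8.8.8.8", &a))
            printf("%u\n", (unsigned)ntohl(a.s_addr)); /* 134744072 */
        return 0;
    }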
It can't only be that, or 127.1 wouldn't work. It's doing some parsing beyond just calling parseInt on each component, in order to recognize domain names and use name resolution rather than directly putting the bytes in the IP header. That must be why 0x9000000.-16250872 doesn't work (if negative components worked, that should also resolve to 8.8.8.8).
I looked into this a while back; IIRC BSD added the "omit zeroes" form as a nonstandard convenience feature and other OSes copied it. I'm away from my computer, or I'd find my notes on this.
It's not "omit zeroes". It's, "represent the last three octets with just one integer".
127.256 is valid, and would be equivalent to 127.0.1.0. The last integer can be up to 16777215 (2²⁴-1).
Similarly, if you instead include three groups of integers, the first two represent one octet each, and the last represents two. 127.2.256 is equivalent to 127.2.1.0.
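This is still exactly what BSD-derived inet_aton(3) implements, so the claims above are easy to check from C (a quick sketch; the example inputs are mine):

    #include <stdio.h>
    #include <arpa/inet.h>

    int main(void) {
        const char *forms[] = { "127.1", "127.256", "127.2.256", "010.010.010.010" };
        for (int i = 0; i < 4; i++) {
            struct in_addr a;
            if (inet_aton(forms[i], &a))
                printf("%-16s -> %s\n", forms[i], inet_ntoa(a));
        }
        return 0;
    }

This prints 127.0.0.1, 127.0.1.0, 127.2.1.0, and 8.8.8.8 respectively.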
I found this quite amusing as it seems as if Google is trying to impersonate Cloudflare's 1.1.1.1, whereas 010.010.010.010 is indeed the octal representation of 8.8.8.8.
"News" in Hacker News doesn't necessarily mean everything is new that comes up. Everyone might not know what you know, so sometimes it's interesting enough to end up on the front page.
Not sure what you mean about other websites; it works fine on Apache and Nginx, e.g. on my server:
curl -kiH Host:1348764566 https://1348764566
(-k flag needed because I didn't get a valid cert for this variant of the IP. One could also specify the fingerprint but let's keep the demo simple.)
It'll give you a 404 because of the unknown vhost, but it would also do that if you access it using the 'normal' dotted decimal notation: http://80.100.131.150
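(For the curious, that integer is just the four octets packed big-endian into 32 bits: 80×16777216 + 100×65536 + 131×256 + 150 = 1348764566.)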
I used to detect this number and serve a small easter egg, but nobody triggered it, and nowadays Firefox doesn't send the integer form as a Host header anymore when you specify the IP this way, so I haven't looked into porting it to my new web server stack.
The usual reasons are given - protecting children and preventing other illegal activity - which is all well and good and commendable in theory. However, there have been instances where the filter has been used to silence opposing political opinions, as well as to prevent access to materials on subjective moral grounds (i.e. "hardcore" pornography, online gambling, discussion of suicides, etc.) where the government has decided Aussies shouldn't do that sort of thing, which seems a bit puritanical and mildly thought-police-y.
It's not like we're in an "actual dictatorship", by and large the representative democracy trundles along as best these things do, and the life and freedoms we enjoy in Australia make us incredibly privileged compared to much of the world. But this whole online censorship and thought policing our government seems fond of is something I disagree with. In addition to banning certain forms of speech and text, they're now pushing through an act that sets the stage for de-anonymising all users online with a government-issued "Digital ID", the next step presumably being making it illegal to provide and use anonymous web services in Australia. That has broad implications for things like Reporters Without Borders, corporate and government whistleblowers, etc.
Coupled with a historical record of every blocked or "suspect" DNS attempt, these trends paint a dire picture for individuals who may have legitimate interests in, or even just curiosity about, something like "how are drugs made". Handing this information to the federal government seems risky to me because I don't know what they're going to decide to make illegal to read and write about in the future. Our government has talked seriously about banning encryption many times over the years, and is currently at war with social media, so who knows what they'll do.
That doesn't mean I agree that people should get away with heinous acts or organised crime, but it's why I personally avoid using my ISP's DNS resolution in Australia. I don't exactly trust Google either, but I'd rather they deal with my DNS lookup than our technophobe government.
Sorry for the long rant, probably could have just left it at my first sentence, but it all touches on the one subject in Australian politics that really rubs me the wrong way, and most people I talk to here are of the mind "if you're not doing something wrong, there's nothing to worry about." Just, gah!
Short and easy to remember thanks to the classic 2600 zine (named after the 2600 Hz tone that the toy whistle from Cap'n Crunch cereal boxes produced, which could seize the phone network's long-distance trunks). I wonder if someone at Sprint is a fan.
4.2.2.2 is only for Level 3 customers (now Lumen). You will get mixed results and I believe they have hijacked NXDOMAIN responses for non-customers in the past.
This is the first time I've seen a certificate issued to an IP address. Cloudflare does the same thing for 1.1.1.1.
X509v3 Subject Alternative Name:
DNS:dns.google, DNS:dns.google.com,
DNS:*.dns.google.com, DNS:8888.google,
DNS:dns64.dns.google,
IP Address:8.8.8.8, IP Address:8.8.4.4,
IP Address:2001:4860:4860:0:0:0:0:8888,
IP Address:2001:4860:4860:0:0:0:0:8844,
IP Address:2001:4860:4860:0:0:0:0:6464,
IP Address:2001:4860:4860:0:0:0:0:64
I'm guessing this is in part for network device auth? DNS over HTTPS?
You can use this for any purpose. These certificates conform to PKIX and are part of the Web PKI if they're issued (as this was) by a trusted CA.
In some ways the actual rules for IP addresses are less strict than for DNS names; perhaps this will get tightened up. Google Trust Services (the part of Google which issues certificates, as distinct from, say, Chrome, which decides on behalf of Relying Parties whether certificates are trustworthy) expressed interest in issuing IP address certificates via ACME, i.e. automatically to anyone who asks. The pushback (including from people in other parts of Google) was considerable, even though what GTS proposed was actually more robust than what's technically required for issuance today. But it's nice that they asked. Indeed, one argument for allowing the request was: there was no requirement for them to ask, and if somebody had just done this without asking, would we have been even more unhappy about that, or would we have let it slide?
In practical terms, you likely don't get and don't want certificates with ipAddress SANs in them. You probably don't get them because (unless GTS went ahead subsequently) this is a Special Request item not something your Certbot or acme.sh or whatever can get for you, and you probably don't want them because unless you're a DNS server people expect to type in a name, not a sequence of arcane numbers.
When I hover over the link, Chromium shows "https://8.8.8.8". And when I manually typed in "http://010.010.010.010", it converts it to 8.8.8.8. I also tried it with the integer representation of 8.8.8.8 and that converts it immediately as well. It looks like this is just the browser converting it. Am I missing something?
Only 8 organizations actually "own" IP addresses (AT&T, Apple, Ford, Cogent, Prudential Insurance, USPS, Comcast, US DoD).
Almost all of IPv4 is allocated by IANA to Regional Internet Registries that in turn allocate them to customers like Google and Verizon. You pay yearly maintenance fees to keep the addresses assigned to you.
DNS log data is not used by Google for any purpose. Their privacy policy is actually stronger than Cloudflare's for DNS services.
The reason Google provides DNS should be obvious: when people experience a better web, Google makes more money. ISP DNS fuckery is bad for users. Since Google already needs to cache the DNS for its internal purposes, presenting it to the public as a service is close to free for them.
The surveillance is additive (unless you use encrypted DNS). Since plain DNS traffic is unencrypted, the ISP still sees everything even if you switch resolvers, and now so does Google.
The practical benefit is that some ISPs run bad DNS servers that e.g. automatically redirect nxdomains to their spam pages. If you use Google or Cloudflare you can bypass this particular anti-feature.
>The practical benefit is that some ISPs run bad DNS servers that e.g. automatically redirect nxdomains to their spam pages. If you use Google or Cloudflare you can bypass this particular anti-feature.
As I discussed here[0], my go-to DNS server is 192.168.xxx.91.
Which is to say I run my own recursive resolver. This avoids ISP DNS server issues as well as other issues (like these[1][2]). Also, Google/Cloudflare/whoever don't get to log my DNS queries.
Since ISPs generally see DNS queries from the gateway and not individual hosts, wouldn't your ISP still be able to see those requests?
AFAIK, the only way to prevent your ISP from collecting the domains you visit is to use something like DNS over HTTPS. Even then, your TLS connection leaks the domain via SNI (hopefully this hole will get plugged via TLS 1.3's encrypted SNI proposals).
>Since ISPs generally see DNS queries from the gateway and not individual hosts, wouldn't your ISP still be able to see those requests?
Of course. Just as they can see every other packet that comes out of my network.
>AFAIK, the only way to prevent your ISP from collecting the domains you visit is to use something like DNS over HTTPS. Even then, your TLS connection leaks the domain via SNI (hopefully this hole will get plugged via TLS 1.3's encrypted SNI proposals).
Actually, they can capture or log all your network traffic if they want, not just DNS traffic.
As for DoH/DoT, that's a huge can of worms that I dislike immensely. Why? Because DoH rides on tcp/443 (DoT at least has its own port, tcp/853, so it can be blocked selectively). As such, any device that I don't roll myself (Roku, Fire Stick, etc.) could (and with wider adoption, will) perform its own DoH requests that I can't intercept with my network-based ad/tracking/spying blocker (e.g., Pi-Hole).
That means that blocking ads/tracking is going to become enormously more difficult, unless I block tcp/443, limiting my ability to connect to pretty much any website these days.
And I am much more concerned about that than I am about my ISP logging netflow[0] data, or even capturing all my packets.
What's more, they are extremely unlikely to do the latter. Even with cheap storage, capturing all my packets (and even just the hundreds of other customers that connect to my head-end, let alone the millions of customers they have) isn't economically (or likely even physically) viable.
That said, if you're afraid that your ISP might be doing so, I suggest using a VPN. Then they only see the envelope of the encrypted VPN traffic and that's it.
Given that most data is going to be encrypted anyway (https, ssh, etc.), the fact that they can see where I'm going (which they need to know anyway to route the packets) doesn't really concern me.
As such, if my ISP really wants to capture all my DNS queries and other network connections (assuming they do so for all their customers, as I'm not anyone state-level actors are interested in), they're going to need some ginormous data centers for all that data storage.
Yes, the NSA has their ginormous data center in Utah, but they're pulling data from Tier 1 peering points, and nothing I do will impact that -- not even using a VPN.
As I said, I'm much more concerned with ads/tracking/spyware, as that's much more likely to be tied to me personally, as those folks want to maintain the fiction that they can effectively "target" advertising at me so they can keep charging the advertisers more and more.
So unless you're someone who some state actor wants to mess with (in which case, you're hosed anyway), blocking the corporate ad spies is more useful than worrying about your ISP. I'd note that Google is one of the biggest of those spies too.
As such, I'm going to focus on a real threat to my privacy that I can actually do something about (which includes doing my own recursive DNS queries), rather than worrying about stuff over which I have no control.
For DNS, I am not sure. If you are talking about strange IP formats, not really. The best I've been able to do is some playing with the IPv6.IPv4 formatting (e.g. the IPv4-mapped form ::ffff:8.8.8.8).
Our software follows redirects now. Obviously that's not correct in cases like this; but it's so much of an improvement in other cases that I don't want to roll it back. Not sure what to do yet really.
Interestingly, Firefox canonicalizes such links to the decimal IP address: if you hover over it, you see https://8.8.8.8/ , and if you click on it that's where you end up.
I was just about to edit my comment: either that is the case or HN automatically runs a reverse DNS query to get the domain name associated with the IP address in the submission URL?
No, it's an IPv4 address. No TLD is allowed to be a series of digits in order to avoid any confusion about this.
Whether your URL parser considers that octal IPv4 addresses are a reasonable thing is up to each individual parser. On the whole I'd suggest user-facing software should not permit this because it's pointlessly confusing.
Rust took a patch that says if you try to convert (for example) 010.010.010.010 to an IPv4 address that's an error, which again I think is reasonable for the same reason.
In the patch feedback, several people wanted it to mean 10.10.10.10 and others thought it should mean 8.8.8.8. Eventually it became clear to both groups that the disagreement itself was a terrible sign for their positions: if you expected one but got the other, your software now has unexpected behaviour, whereas if you got an error you could fix your program to do whatever you intended. Hence the error behaviour won.
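To make the two camps concrete, here's a small C sketch (not the Rust patch itself) of how the same literal gets read both ways:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <arpa/inet.h>

    int main(void) {
        const char *s = "010.010.010.010";

        /* Camp 1: BSD inet_aton() honours the leading zeroes as octal. */
        struct in_addr a;
        if (inet_aton(s, &a))
            printf("inet_aton: %s\n", inet_ntoa(a)); /* 8.8.8.8 */

        /* Camp 2: a naive split-on-dots parser; base-10 atoi()
           silently ignores the leading zeroes. */
        char buf[16];
        strcpy(buf, s);
        int o[4] = {0}, i = 0;
        for (char *p = strtok(buf, "."); p && i < 4; p = strtok(NULL, "."))
            o[i++] = atoi(p);
        printf("naive:     %d.%d.%d.%d\n", o[0], o[1], o[2], o[3]); /* 10.10.10.10 */
        return 0;
    }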
[Edited to add: It has been pointed out to me that maybe the poster meant .google. Yes, that's a TLD owned by Google. They applied for, and received a number of "new gTLDs" from ICANN, some like .dev are open for you to register 2LDs in, others like .google are only for their own use. Running TLDs likely costs Google somewhere in the region of a million dollars per year to maintain, but that's a drop in the ocean for a large tech company.]
Not explicitly mentioned in that CAB/F document, the PKIX standard that makes ipAddress SANs work actually defines them as numeric types with a set number of bits, so an ipAddress is literally a 32-bit or 128-bit value.
This leaves no room for the ambiguity of the text rendering something like 010.010.010.010 in the certificate itself.
Likewise, the dNSName SAN type is defined with an X.509 string type (IA5String, essentially ASCII) that literally can't represent fancy Unicode, so you can't mistakenly write certificates whose dNSName SANs give the Unicode name instead of the unambiguous punycode name stored in DNS.
These two choices mean your browser can mechanically with 100% reliability check certificates in the Web PKI match the IP address or DNS name from the URL you believed you were visiting, whereas historically the abuse of "Common Name" features to write a human representation had nasty edge cases for both IP addresses and some DNS names.
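Conceptually, that matching step needs nothing more than a byte comparison. A sketch (assuming the SAN bytes have already been extracted from the certificate):

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* An ipAddress SAN is just 4 (or 16) raw bytes, so matching it against
       the address you connected to is a plain byte compare: no textual
       parsing, hence no octal or Unicode ambiguity. */
    static int san_matches(const unsigned char *san, size_t san_len,
                           const struct in_addr *peer) {
        return san_len == sizeof *peer && memcmp(san, peer, sizeof *peer) == 0;
    }

    int main(void) {
        unsigned char san[4] = { 8, 8, 8, 8 };  /* as stored in the cert */
        struct in_addr peer;
        inet_aton("010.010.010.010", &peer);    /* however the user typed it */
        printf("match: %s\n", san_matches(san, sizeof san, &peer) ? "yes" : "no");
        return 0;
    }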
Without getting into the existential question of what does it mean to be real, yes [0]. It's one of the sponsored modern TLDs[1], along with the likes of .horse, .cat (not what you think), .wiki, .club, etc.
> Without getting into the existential question of what does it mean to be real
I think what I meant was mostly
> will all or most DNS servers other than Google's resolve .google addresses
I didn't realize Google had bought their own TLD.
I'm not sure how I feel about the sponsored TLDs. I think I like them, mostly. I think I don't love how .google is centered on a single corporation in the same way that I don't like how .gov and .mil have always been so US-centric.
In a way it feels like an intrusion, or somehow misplaced
> using anything but your ISP's DNS decrease privacy
Using your ISP's DNS decreases privacy. I assume you mean that, because UDP/53 DNS is unencrypted, if you switch to another DNS provider then both the ISP and the new provider can see your requests? In which case, I present to you: DNS over HTTPS.
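For example, libcurl (7.62+) can do the lookup itself over DoH, so the plaintext UDP/53 query never leaves your machine. A minimal sketch, pointing at Google's public DoH endpoint:

    #include <curl/curl.h>

    int main(void) {
        CURL *curl = curl_easy_init();
        if (!curl) return 1;
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* Resolve the hostname via DNS over HTTPS instead of UDP/53. */
        curl_easy_setopt(curl, CURLOPT_DOH_URL, "https://dns.google/dns-query");
        CURLcode res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return res != CURLE_OK;
    }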
How can your ISP route the traffic if according to you it doesn't know where it goes? You need a VPN if you want to hide your traffic destinations from your ISP... not encrypted DNS
The DNS lookup would be encrypted. The IP would of course not be. This means the ISP would be able to see the IP but not domain [1] of your destination.
Basically every single site hosted on any CDN or other cloud host. What's the percentage of people that host their site on dedicated links? Probably a decent percentage, and probably the big 10 sites, but maybe not as many of the others..?
Though as mentioned this is moot due to SNI, in most cases :(
> Also, I remember reading something like HTTPS was leaking URLs...
My ISP's (Spectrum) DNS is trash. Not only is it slow, it hijacks misses and redirects to their garbage landing page. And I'm sure they snoop on me and sell data all the same.
Not all users. ISPs around my part of the world do not behave as badly as US ISPs seem to. Most (although I actually believe it is all) ISPs in my home country do not do anything special to DNS lookups. This means there is literally no benefit to using an off-shore DNS resolver such as those provided by Google, Cloudflare, et al. It just makes DNS lookups slower for us.
It would not increase unless you are switching away from a third party DNS. Whether you use Comcast's DNS or not, they know the sites that you visit... If you use Google as your DNS, then Google also knows.