Something that I'm surprised a lot of devs don't know: there are official domains you're supposed to use for documentation, testing, etc. They are specifically reserved by IANA for these purposes. Originally I think it was just example.com, but they now have a list of all of them: https://www.iana.org/domains/reserved
Indeed. I've owned `invaliddomain.com` for almost 20 years. You'd be surprised how many use it for testing. One morning I woke up to 30,000 e-mails from Sony Japan with PDFs attached of scanned hand-written part orders. Something similar happened with Boeing sending me backup notifications. I notified each of these companies about their configuration through their official channels, only to be told "no, it's your server doing this", usually followed up with an e-mail a few weeks later along the lines of "sorry, our bad". So, if you're testing something and using a test domain, use the IANA reserved domains, please. Those were the days when I was running my own servers. I don't see it as often now that my e-mail is hosted.
.app, .dev, .prod, and .zip all had a substantial volume of problematic traffic that was discovered during the Controlled Interruption period (which occurs prior to launch and consists of a wildcard DNS entry placed on the entire TLD). You would not believe some of the brokenness that was happening there. .zip may need some explanation -- apparently there are lots of library API calls out there that take a path string as input and try to load it as either a local or remote file. You can see where this is going.
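For the curious, here's a minimal sketch of that failure mode (the function and its fallback behavior are hypothetical, not any specific library's API):

```python
from urllib.request import urlopen
import os

def load_resource(path: str) -> bytes:
    """Naive loader that accepts either a local path or a remote name.

    If "archive.zip" doesn't exist locally, this silently falls through
    to fetching http://archive.zip/ -- and now that .zip is a live TLD,
    that name can resolve to a host someone else controls.
    """
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    # Fallback: treat the string as a remote host and make a network call.
    return urlopen(f"http://{path}/").read()
```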
There's a lot of overlap between file extensions and TLDs though. .py, .sh, and .app are some more examples (but a fully exhaustive list would be in the dozens if not hundreds). At some point you have to just treat them as the separate namespaces that they actually are (and not somehow try to block a TLD from being used as a file extension, or vice-versa).
Besides, accidentally resolving a file extension as a TLD is only one of many serious errors that can result from exposing an API that can load files locally or remotely, and thus make network calls that you might not be expecting. Fundamentally you need to fix that API either way.
I think it's okay. They are different namespaces and people should fix their bugs. It would be like skipping the street number "911 Foo St." because "911" is the emergency phone number and 911 Foo St. is just a regular house. I'm sorry if someone is confused but they shouldn't be confused. They're totally different things.
The biggest mistake we made with DNS was the "shortcut" of implicitly adding the root domain to random strings treated as domain names (letting "example.com" stand in for "example.com."). The file "foo.zip" and the website "foo.zip" wouldn't even be ambiguous if we called the website "foo.zip.". "ndots" also causes operators of DNS servers a lot of pain -- some malfunctioning program tries to resolve "example.invalid" in a tight loop and it balloons into asking for "example.invalid.", "example.invalid.local.", "example.invalid.cluster.local.", "example.invalid.svc.cluster.local.", and then DNS blows up, breaking everything.
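Here's a rough sketch of that fan-out, assuming a Kubernetes-style search list (the search domains and ndots value are illustrative, and real resolvers differ on ordering details):

```python
def candidate_fqdns(name: str, search: list[str], ndots: int = 5) -> list[str]:
    """Mimic glibc-style search-list expansion (simplified sketch).

    An unqualified name with fewer than `ndots` dots is tried against
    each search domain before being tried as-is, so one lookup of
    "example.invalid" fans out into several queries.
    """
    if name.endswith("."):  # already fully qualified: no expansion at all
        return [name]
    expanded = [f"{name}.{dom}." for dom in search]
    if name.count(".") >= ndots:
        return [name + "."] + expanded  # absolute name tried first
    return expanded + [name + "."]

print(candidate_fqdns("example.invalid",
                      ["svc.cluster.local", "cluster.local", "local"]))
# ['example.invalid.svc.cluster.local.', 'example.invalid.cluster.local.',
#  'example.invalid.local.', 'example.invalid.']
```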
> It would be like skipping the street number "911 Foo St." because "911" is the emergency phone number and 911 Foo St. is just a regular house. I'm sorry if someone is confused but they shouldn't be confused. They're totally different things.
We just never should have allowed filename extensions to have semantic power. Resource forks are far more elegant and you could do simple look ahead checks to verify types etc.
Back in '81 when MS-DOS came out and ascribed "magical power" to files ending in ".com", DNS didn't exist (except possibly as experimental stuff between a handful of university or military mainframes). It was also an era where the vast majority of users of those .com files were doing so off 5 1/4" floppies (or possibly even 8" floppies), and the very idea of adding "resource forks" to your on-disk data would have been ridiculed as an outrageously profligate waste of extremely valuable disk storage resources.
It's easy to say "we should never have" in retrospect. But you're basically accusing programmers back in the late 70s of having insufficient foresight to see problems that would make their decisions seem bad almost half a century later.
We also should never have let companies sell cigarettes. Or burn fossil fuels. Or start social networks.
The magical power of extensions actually predates MS-DOS -- both CP/M and TRSDOS did the same thing in the late 70s, and I'm pretty sure both of those were inspired by mainframe operating systems. (I was a TRS-80 kid, and didn't know until much later why our equivalent of batch files had the extension "JCL".)
Having said that, though, we could at least wish that some variant of the Mac's old idea of separate creator and document codes stored as metadata had caught on. Sure, it'd have been a few more bytes per directory entry (to be specific, five more bytes!), but it was a lot more flexible -- and if more operating systems had been built with that "document types are metadata" idea, that metadata could have been replaced with MIME types later on, like it was on BeOS.
> But you're basically accusing programmers back in the late 70s of having insufficient foresight to see problems that would make their decisions seem bad almost half a century later.
The URL standard was published in 1994.
Sure, in a vacuum you could read my comment as meaning for all time, but given that the parent comment is discussing ICANN domains and my comment was relevant to that, I think it's a little uncharitable to do so.
I've personally run into the .zip problem thanks to browsers' omni address/search/chocolate bars. I intend to search for a zip file whose name I know, but the browser "helpfully" notices the term includes no spaces and ends in dot-something and attempts to treat it as a URL.
A simple workaround is to add a preceding space or something like inurl:, but that isn't automatic behavior, so whoever owns mlpdwarfporn.zip is getting a lot of unintentional hits.
Using a question mark in front of the term is traditionally how some browsers trigger search. They usually have a key combo to add it automatically.
For example, in Firefox and Chrome, ctrl-l will clear the URL bar and put the cursor and focus there to take you to the location you enter, and ctrl-k will do similar but pre-fill the location bar with a preceding '?' so a search is triggered on the input.
These shortcuts have existed for quite a while. They used to just focus the respective separate input boxes, when it wasn't all done through one.
Yes, thanks. Switched to a new keyboard on my phone, and I'm getting some different typos now. Not necessarily more, I always seem to have a lot that slip through, just different...
I think that ends up as user unfriendly as requiring the dot after the TLD. I don't read up on all the gTLDs so I didn't realize zip was one for the longest time. I think ICANN just went nuts with TLDs, especially ones like .app and .zip that have long-standing associations with ubiquitous file extensions. That combined with the "smart bar" just leads to trouble.
This is the real answer here. It's widely believed that domains working in the opposite order from filesystem hierarchies and even URL paths was a huge design mistake. Think of how many billions of dollars have been lost over the years from account compromises resulting from people not correctly distinguishing yourbank.com/account/foobar from yourbank.com-account.info/foobar.
1. The reason for the change is generally thought to be email: you read most specific to least specific from left to right, like non-American dates, e.g. firstname.lastname@group.department.university.edu
2. also I think ! was used rather than . for some networks at some times. Certainly I’ve heard this anecdote from some early users of the internet (or maybe other computer networks)
3. Traditionally the dot is the zero-level or extra-top-level domain, a bit like / for root in the Unix file system. Indeed, if you use dig you may have noticed that it writes domains like "www.google.com." My web browser only partly seems to accept this format.
> 2. also I think ! was used rather than . for some networks at some times. Certainly I’ve heard this anecdote from some early users of the internet (or maybe other computer networks)
! was used for manual routing from the source to the destination, when messages were copied from server to server via UUCP. People would write paths like ...!ucbvax!deptserver!myname, which means "you probably know how to get messages to ucbvax; from there, here's how to reach me".
Interesting, and makes sense, seeing as how the Web didn't yet exist when domain names were first invented, but email did. It does make sense that they were thus represented in a manner catering to that use case. In hindsight I guess we'd prefer if emails were backwards too, e.g. com.gmail@cydeweys
This is ironically what’s become popular for service-specific usernames (“on twitter @dan” and “on instagram @danny”). It’s arguably more useful than the email notation because prefixing with the @ is a clear sign that a username is coming next.
You could easily just use a different symbol, or pronounce it differently. We think it makes sense the way it is now, but that's only because we've all collectively gotten used to the meaning it has in email addresses. If it had had a different meaning all along then that would make sense to us.
com.gmail.cydeweys doesn't really work because you need some way to distinguish that as an email address and not just a subdomain on com.gmail. So it really does need a different delimiter, like how you need a different delimiter in URLs when it switches from the domain to the path (and then also to the querystring and fragment).
'@' was used to mean 'at' or 'each' for a long time before email addresses. Often in pricing. '12 @ $3' means twelve units priced at $3 each. It's referred to sometimes as 'commercial at', including its Unicode name.
It wasn't remotely as familiar to a broad audience back then as it is now, though. Considering that people accept its non-idiomatic use for website user names even now (which is the same as what I'm proposing no less), I don't think this would be a problem.
Using your example with left-to-right domains and needing a delimiter, one could build a URL structure for mail that changes the protocol but not the path.
mail://com.gmail/localpart to send mail to what is currently localpart@gmail.com
Alternately, we could continue to use '@' and put the localpart first. The URL method of specifying a username or username and password does this for HTTP/HTTPS.
The domain name should definitely still be first. And there are benefits to using a different syntax for domain names than for email addresses, so you can differentiate the two when they're presented out of context (e.g. on a business card or ad).
I was mostly just making a joke about the suggestion that email addresses should look like "com.gmail/cydeweys" from upthread...
But I could _kinda_ make an argument for it. When I open my browser, I want to go to "the homepage of the company running the New York Times", so homepage@com.newyorktimes - or perhaps the menu from the French Laundry - menu@com.frenchlaundry
> 3. Traditionally the dot is the zero-level or extra-top-level domain, a bit like / for root in the Unix file system. Indeed, if you use dig you may have noticed that it writes domains like "www.google.com." My web browser only partly seems to accept this format.
The browser actually handles it just fine, but some webservers refuse to handle it, despite it being in the spec. Most famously, Traefik and Caddy refuse to support these, just because their devs don't understand the spec and think they know better (what a surprise, both are written in Go after all)
This actually causes quite a bit of trouble when you have internal systems resolving pretty much everything as local domain first, and some external webservers don’t support the spec. Most famously, caddyserver.com. itself is broken
My college's online system for everything is called edukacja.cl. A lot of people type that into the address bar, expecting to be redirected to the login page. Fortunately, the person who bought that domain didn't misuse it. I can imagine someone putting a fake login page there, and getting access to a lot of sensitive student details.
I owned forexample.com for 15 years or so and saw all kinds of mail but the most persistent was a record company owner who, from time-to-time, wrote semi-deranged angry emails demanding that I turn the domain over to him. I always had grand plans for the domain but never acted and, a couple years ago, I forgot to renew and the domain is now in someone else's hands. I don't miss it, especially the record company guy.
I remember reading - a long time ago - a story of a developer who used http://xxx as a placeholder for unknown domains, until at some point the browsers started resolving single-word links into www.<word>.com... :-o
Back in the days before I'd ever heard of multicast DNS or zeroconf networking, I had a local DNS server set up with all our subdomain.ourdomain.com entries duplicated as subdomain.ourdomain.local and pointing to our local dev/staging versions of our websites. It worked wonderfully, until I think Mac OS X 10.2 arrived (so, like 20 years ago almost) which had mDNS support for the first time and "broke" it all on me...
I switched to using subdomain.ourdomain.staging instead and got on with life. I wonder if anyone's gonna have to deal with the fallout of that decision when someone one day pays ICANN enough money to own the .staging TLD?
(I wonder how much "interesting" stuff would land in your mail/web/ssh/whatever log files if you registered .staging and .dev and just logged everything that came past, or intentionally/actively honeypotted everything there?)
It doesn't seem that the browsers still do the expansion thing; I've tried it, and am just getting the "We're having trouble finding that site." error message, or similar. But for some time that expansion was real.
Haha. That’s actually surprising, I mean that one takes some work to even type. I’ve mentioned previously on HN that I own doesnthaveone.com, which is constantly bombarded with random crap. I wish I had some big public customer data to see what other fake ones show up.
On the other hand, you knew what you were getting into when you decided to be the owner of a meme domain! People should use properly reserved domains, but I can't really blame them for accidentally using meme domains.
I had my.homepage.com for a while back in the early 2000s, unfortunately I wasn't allowed to monetize it, but looking at the referral logs was always interesting.
Indeed it's probably broad enough that you could likely find an ambulance chaser who'd go after people who _send_ you those emails "in excess of their authorized access" to your mail server.
You could weaponise this the same way companies use defensive patents... "Sure, I opened one of your emails, but you've connected to my mail server without authorisation <checks logs> 27,943 times so far this month. Go on, lawyer up. Bring it on!"
Thank you, I was unaware of this. I found the relevant section in the doc that was linked from your original link:
2. TLDs for Testing, & Documentation Examples
To safely satisfy these needs, four domain names are reserved as listed and described below.
.test
.example
.invalid
.localhost
* ".test" is recommended for use in testing of current or new DNS related code.
* ".example" is recommended for use in documentation or as examples.
* ".invalid" is intended for use in online construction of domain names that are sure to be invalid and which it is obvious at a glance are invalid.
* The ".localhost" TLD has traditionally been statically defined in host DNS implementations as having an A record pointing to the loop back IP address and is reserved for such use. Any other use would conflict with widely deployed code which assumes this use.
I learned about .invalid last year and had an immediate use for it where we needed a syntactically valid email to match a schema but didn't want it to be deliverable.
I discovered quickly that some other systems wouldn't accept the placeholder emails such as notused@email.invalid. Too many systems try to be too smart about the syntax of emails (+ subaddressing is another minefield).
Had to go back to using something like notused@invalid.toplevel.com
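To illustrate the kind of over-eager check that bites you, here's a hypothetical pair of patterns (both made up, but representative): a validator with a TLD whitelist rejects the placeholder, while a purely syntactic one correctly accepts it:

```python
import re

# Hypothetical "too smart" check: a hand-rolled whitelist of TLDs that
# look real, which wrongly rejects reserved placeholder domains.
TOO_SMART = re.compile(r".+@.+\.(com|net|org|edu|gov)$")

# Looser, purely syntactic check: local part, "@", and a dotted domain.
# It happily (and correctly) accepts .invalid addresses.
SYNTACTIC = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+$")

addr = "notused@email.invalid"
print(bool(TOO_SMART.match(addr)))   # False: the placeholder is rejected
print(bool(SYNTACTIC.match(addr)))   # True: syntactically fine
```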
I'm still not sure if this is because the developers are incompetent and don't understand that they can just use an established standard instead of rolling their own janky parser, or if it's because they just don't want to let the user tell who had their databases leaked or sold their email to a spam list.
I used to think the former because there's so many different solutions to the latter that don't involve actively annoying the people you're trying to extract money from, but I'm starting to think that it's a little from column A and a little from column B.
A lot of people learned this the hard way when Google bought and later enabled permanent HSTS for the .dev domain (prior to actually publicly releasing it) in Chrome, breaking everybody's non https local .dev environments.
The HSTS preloading fortunately ended up being an unintentional additional type of Controlled Interruption period, which was a good thing in the end. It would've been a lot worse if, one day, your fake domain names are resolving locally, and then literally the next day it's now a real domain name that's resolving remotely, with who knows what result. This at least forced people to address it well in advance of domain names potentially resolving globally.
Source: The horse's mouth. I'm the guy who came up with the idea of launching .dev and .app as HTTPS-only TLDs, and I'm the one who had them added to the HSTS preload list.
It's not a huge change though. .dev was a new, never-launched TLD, so there were no existing real domain names to break with the addition of HSTS preloading. Established best practice for decades at that point was already to always use real domain names (or subdomains thereof) or specifically reserved test domains/TLDs (see RFC 2606, published in 1999) for testing/development/local networking purposes.
So yes, we didn't anticipate how many people weren't following the best practices, but that would have been hard to determine prior to doing the thing anyway. There were also lots of people who had the mindset of "We won't change anything until it stops working", so in some sense a lot of it was unavoidable. See e.g. https://github.com/laravel/valet/issues/204 https://github.com/laravel/valet/issues/294 https://github.com/laravel/valet/issues/431 (note that Laravel users were responsible for a non-trivial fraction of the total problems experienced, and that we only discovered all this post-HSTS-preloading). The problem was repeatedly pointed out and the maintainers refused to fix it until it actually broke. So, inevitably, it broke, and then they fixed it.
Were you somehow unaware of the many, many, many people who used .dev in their local environments? You must've had some idea, since the initial plan was to use the .dev TLD for exactly that within Google.
I've always hated Google for egoistically claiming this tld, and ICANN for letting them.
It enabled zero-config local development: at the time it would make your locally running dev server available on .dev.
So instead of having to spin up a local server and then visiting say 127.0.0.1:3000,
I could instead just visit myappname.dev and it would show me what would previously show on localhost or it would spin up a server first for that app then show it to me.
They switched to .test in response to google’s change.
Yes. Not nearly as many people were using .dev in local environments as you seem to think. We didn't know of anyone, and by the very nature of them being fake and locally-configured-only it's not something you can easily find out about. And no, our intention was not to use .dev for fake domain names.
Also, just because someone is using a fake domain doesn't preclude that from being created as a real domain name farther down the line. That's why you shouldn't use fake domain names. This problem has been known since at least the 90s and is not a good habit to get into. Them now being real makes them actually more useful (and not reliant on potentially unsynced local-only config).
My problem was that it also broke properly configured domains. We have machines in xxx.dev.example.com (for example) that I used to rely on the resolver searching for, so I could just type "xxx.dev" and it would then know to try it with example.com banged onto the end. Then everything in .dev started resolving so my abbreviations started resolving to other hosts.
I mean, I THINK this is "properly configured", but it also isn't a huge deal to avoid. Just was annoying when it started happening. Didn't FEEL like I was misconfigured. :-)
If your machines started exhibiting behavior you didn't expect or intend from a configuration you believe to be proper, then something has gone wrong. It may be the system, it may be the configuration, and it may be your understanding of either. Or any combination.
In this particular case it sounds like your resolver was set up to try an external resolution first and then append an internal domain if the external resolution failed. And as you say, this clearly worked just fine for a long, long time. Then it suddenly started failing one day, for reasons unrelated to anything you changed.
At this point, most people would find it reasonable to blame the external change for breaking their fully functional, correctly working, "properly configured" setup. Some, perhaps contrarian or perhaps more cautious, would note that the "properly configured" approach only worked so long as external systems played ball. I think it might be the case that you were bitten by this assumption that seemed safe at the time, leading to the awkward and uncomfortable conclusion that your systems were indeed misconfigured.
And also, from a larger standpoint, we have a choice of two mutually exclusive options here. It comes down to either (a) freezing the TLD hierarchy in place and never creating any new ones (e.g. not even for a new country, or .mars or whatever), or (b) not retaining indefinite backwards support for multi-label DNS search lists when they happen to collide with new TLDs.
I would argue that, of the two options, option B is the less onerous one, and less restrictive on the future growth of the Internet. It's not that hard to set up some aliases to be able to SSH quickly into the right hosts without having to manually type out longer paths.
I don't think fresh launches, even with bold new parameters, are always considered 'huge changes' in that sense. The fact that it could break local environments was probably not even documented in those specific environments.
There was a time when the creation of a new TLD was unthinkable, so it seemed safe to use a domain syntax internally that was never going to become a public TLD. Then the TLD money grab happened and what was assumed to be safely isolated wilderness was sold out from under everyone.
There never was such a time, and IANA never said "We won't make any more". TLDs have been continually created for at least the past two decades. Look up the history of e.g. .biz, .info, .museum, .aero, .mobi, .cat, .asia, all the various ccTLDs that are created as new countries form (.ss), etc. And none of that is even counting the new gTLD expansion round that kicked off in 2012.
Or I could just dig out my GPG key and sign a message.
So I respect your healthy skepticism, but I promise I'm really me, and not just someone pretending to be me! And I'll also note that, in all my time on HN, I've never once seen someone pretending to be anyone they weren't, at least not on seasoned accounts like mine. You can go back through my 5 years of comments on here, and every time I say I'm a specific person out there in the real world, it's always the same consistent person.
I actually confirmed your GitHub before I made the previous comment. It was just a frivolous reply that I found funny but HN didn't. Tough crowd. But I really, really appreciate your reply, thanks for taking your time to prove your identity. Hopefully I didn't waste too much of your Sunday morning.
My experience here is that humor (especially deadpan) tends to land very poorly on HN, both because people are never contextually expecting it and because they tend to downvote it when they do recognize it, as they dislike perceived low effort comments that they don't feel are meaningfully contributing to the discussion (i.e. adding noise, not signal). HN may as well put "Don't try to be funny" in the FAQ, because it almost never works out.
Mistake made and lesson learnt. After being introduced to the larger internet community via reddit, sometimes you slip up in more serious places like HN.
> If you give me your email I can send you a message from my _lastname_@ work address
Not disputing your account in any way, but please don't propagate the widespread myth that only people who own an email address can send from that email address. If you wanted to use your well-known email address as authentication, you'd need to offer to reply to a mail sent to that address.
You could check whether the email passed SPF, DKIM, and DMARC checks. Google has certainly implemented these for their SMTP servers. If all of the tests pass, you can be pretty certain that the sender is the owner of the address.
This is what I was getting at. It's definitely not possible to fully impersonate an email from an @google.com address. We've got all the security bells and whistles turned on.
Though, admittedly, it would be easier for a layman to verify ability to reply to an email than to verify all those features, so point taken.
> .dev domain (prior to actually publicly releasing it) in Chrome, breaking everybody's non https local .dev environments.
Yeah, except it broke more than that.
A lot of folks use/d ".dev" and ".prod" as internal sub-domains in their actually-owned domain (dev.example.com, prod.example.net).
For convenience you could however use resolv.conf's "search" option to simplify things, so at the CLI one could type "ssh webserv01.dev" and the resolver would then append the company's domain to get the FQDN for the query.
Except once Google made their changes "webserv01.dev" now could go out to the Internet—especially if you had it in a browser and it tried to be "clever".
This applies to every domain name though (at the second level too, not just the first). You'd have exactly the same problem if you were using a fake .com domain that someone then actually registered. The best practice has always been to use a real domain name that you own, and subdomains thereof. If you don't, your setup isn't working so much as it is not broken ... yet.
Say I own "throw0101a.com", and then use "dev.throw0101a.com" and "prod.throw0101a.com". (Or you own CydeWeys.com.)
Previously, when .dev and .prod were not TLDs, it was fairly safe to type "ssh websrv01.dev" and "ssh dbsrv02.prod" because if the queries leaked onto the Internet they'd fail.
Now, with the post-Google TLD changes, if you type one of those things, and the local DNS happens to not be configured properly (i.e., the resolv.conf 'search' option is not present), then strange things can happen.
Further, if you put "websrv01.dev" in a browser now, it may go off into the Internet and try to be clever about auto-complete instead of just doing a local query.
This applies to any TLD though, not just .dev and .prod, not just gTLDs generally, and not even just real TLDs (that's right, you could be using a fake TLD on an internal network somewhere and the "creation" of a domain on that fake TLD on that network could cause your previously working hostname shortcuts to start failing).
Relying on DNS search lists to find the right host is a bad security practice that has caused security incidents even outside the context of the creation of new TLDs. It's best to always use fully-qualified domain names. A lot of people responsible for implementing DNS search lists in the first place now regret having ever created them.
Which is one of the reasons I always insist our dev and prod teams use FQDNs (under domains the company owns) in all of their configurations.
In more dynamic environments, the config may have the domain as its own setting and each service as just the hostname, but the software must combine them before use, or better yet combine them in the config if variable expansion is possible in the config language they are using (ex: db_server="db-01.${domain}" with domain being defined near the top).
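As a sketch of that pattern (every name here is hypothetical), the join can happen in code so that only absolute FQDNs ever reach the resolver:

```python
# Hypothetical config values: a domain the company actually owns, plus
# short service hostnames kept separate for convenience.
DOMAIN = "corp.example.com"
SERVICES = {"db": "db-01", "web": "web-01"}

def fqdn(service: str) -> str:
    """Combine hostname and domain, with a trailing root dot so the
    name is absolute and can never be search-list expanded."""
    return f"{SERVICES[service]}.{DOMAIN}."

print(fqdn("db"))  # db-01.corp.example.com.
```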
The only proper way to do internal DNS is to register a domain. Even a free one will do.
The more people follow decent practices at home, the fewer businesses will accidentally break because one of the admins thought it'd be alright because it works for them at home. If you set your DNS domain correctly you can also save yourself some typing effort, because DNS will automatically append the network name (so you can visit http://test instead of http://test.internal or http://test.hamu.co). As an added bonus, you can get valid TLS certificates for your internal network devices without messing with a certificate authority of your own!
It does seem much, much more likely that .internal will be formally reserved for this purpose than that it will ever be delegated as an actual real TLD. If you have to pick a fake TLD that isn't one of the 4 mentioned in RFC 2606, the one with the best chance of actually being reserved for this purpose in the future is .internal.
I do have a domain but I don't want to enter my private IP addresses on a public DNS. Hence I just have server.internal to point to the correct IP address on my local DNS server and have service1.server.com, service2.server.com, etc. all just CNAME to server.internal.
You don't need to disclose anything on the public DNS. Just register the domain, then use it internally.
For example, if you buy "example.com", just set your public DNS (assuming your registrar provides one) to resolve it to 127.0.0.1, then add your internal hostnames and IP addresses to your internal DNS. If you do it that way, "my-server.example.com" will simply fail to resolve unless you're on your internal network, and you don't have to worry about any issues with using the reserved *.internal domain.
I need to have the public DNS anyways so I find this way to be easier. I'd have to use CNAMEs anyways since the .internal addresses have to resolve to different IP addresses depending on the context.
Why would your A records change depending on context? Do you have separate networks where "server.localdomain" has different IPs? The typical way to solve that is to use different sub-domains for each one.
For example if you have a home network and a testing network, you could have one on home.example.com and the other at lab.example.com, in which case your servers would be server.home.example.com and server.lab.example.com. If you use DHCP on those networks, you simply set the domain and search-domain options and you can just enter "server/" on the devices that move between them.
You only need to register example.com with a registrar, then you can use whatever subdomains you want wherever you want.
> Why would your A records change depending on context?
Good grief, usually it's because of a hairpin NAT. People do that to themselves. They damage their own L3 networking and then decide that they need to damage their entire DNS as a workaround, too.
It's a regular mind virus, because it's easy to implement split-horizon DNS but enormously expensive to remove it. People get used to it at one company and then go and spread it to another.
Just do a snat+dnat. These networking boxes are so expensive because they are meant to handle it, so let them do their job already. Or go IPv6 and get rid of DNAT altogether.
You can continue using /etc/resolv.conf to specify the private IP addresses even for a website that does actually exist. The point of registering the domain is to ensure that your computer/software can't ever accidentally hit anything you didn't intentionally configure that is actually under the control of someone else. You don't actually need to use publicly visible DNS to configure your internal network just because you used a real domain name.
Once upon a time plenty of people thought it'd be cool to give their "internal" stuff names like wwwtest.int and database.int and so on.
Now, there has actually been an int TLD for a long time, but you probably don't visit sites in that TLD very often because it is for international organisations like the UN.
So if your configuration blocks all those actual sites, well, too bad, right?
However, back that long ago the trusted Public CAs were not actually forbidden from issuing certificates for names that don't belong to anybody on the public Internet. This was a bad idea, but it was not yet (at that time) forbidden. So you could pay Thawte a pile of money and get a certificate for "exchange2" your backup MS Exchange server or maybe "linux.build" your Linux build server.
And since people were using this for internal names, you'd get people asking their CA for certificates like "exchange2.int" - for the internal backup MS Exchange server, right?
Obviously there can't be any effective way to demonstrate control over internal names, since you do not in fact have control over them, you've just hijacked them.
And so the end result is that there were actual publicly trusted CAs issuing certificates for names in a real public TLD without checks, because they assumed it was internal when it was actually not.
These days the CAs are required to issue only for names in the actual Internet DNS hierarchy (plus TOR) and only after seeing one of the accepted proofs of control nicknamed the Ten Blessed Methods.
Meanwhile: There is only one namespace, do not try to hijack little pieces for yourself that don't belong to you. If you want to reserve some names so that your DNS doesn't "break" then you can buy names like anybody else.
Note that `foo@bar.com` is frequently used to get around the email address requirement, so it is not really for testing. I wouldn't be surprised if prominent websites block any address from example.com or so.
But people who know and use the words foo and bar are almost exclusively developers, so they should know better even when typing a fake address into a random form. But it's easy to type reflexively, without thinking, so I'm not surprised people do it.
I'll be honest, even I do this when it's a form that has an email requirement but that I don't otherwise care about at all. I'm not gonna sign myself up for spam. Sometimes example.com and whatever other testing TLD that comes to mind is blocked because they want something that could plausibly be a real email address, so I give them something that could plausibly be a real email address, like asfgjklahsfgjklh@asdfoghuasfga.com
That's why I own fakefakefake.email; it gives me some minor pleasure to provide this email address to people who insist on having an email address for me.
Then you use one of their alternate domains. Or you point a domain to one of their alternates.
Check out the MX record for meowcats.fun, for example. It points to suremail.info, which points to mailinator. I have never had that domain rejected by a form.
I've been using a@b.com just because it's quick & easy to type. I just looked it up, and it seems all the single-letter .com domains are reserved, so it should be fine.
The fact that it is a valid address is a feature, as some sites that demand email addresses (so they can spam you) verify the address for correctness. People are intentionally choosing an address that appears valid, because it is valid. So, sorry to the guy who bought bar.com, but that's what he signed up for.
I used to work for a company named Foo Bar Solutions with the public website foobarsol.com, but that used foobar.com for the internal AD name for some reason (which they didn't actually own). That was ... interesting. Microsoft clearly did not do their due diligence in explaining that the domain configured in AD must absolutely, positively, 100% no bullshit, be a real domain name that you actually own and will never relinquish. I'd argue they should have gone so far as WHOISing the domain name in question and failing outright if it didn't exist, and displaying the Registrant/Organization information if it did and prompting "Is this you?" before continuing. Would have saved so many sysadmins so much grief over the years.
And to be clear, Foo Bar is a placeholder here, not the actual name.
I've had to clean a number of these up. The worst is when "foobar.com" answers with a wildcard record. Booting client computers using public DNS in that scenario is like being stuck in a tar pit. The poor helpless operating system tries and tries to reach servers to query for its AD site, find Domain Controllers, apply group policy, run scripts, etc.
Microsoft's official training curriculum for "MCP" and "MCSE" back in '99 was pretty clear about it (I was an instructor at a community college for a Microsoft certification program), but other Microsoft docs and especially third-party docs weren't as clear. The whole ".local" debacle with Windows Small Business Server lies at the feet of Microsoft, though.
That’s really cool. Contoso (also mentioned by someone else in this thread) and Fabrikam are the only ones I remembered off the top of my head, I had no idea they used that many.
In practice, is there any difference? As long as example.com is guaranteed to be reserved, I don't see any downside in using it.
Not using .test was a big problem for tools like Pow a while ago, but that's because they were using .dev, which had no official recognition as being reserved or special-purposed.
For e-mail addresses in particular, I could easily see a situation where your domain logic prevents you from using an invalid TLD (like .test), and it would be a shame to special-case something strictly for testing purposes.
Probably not, but it's always nice to follow intended designs, since others will do so as well and this allows systems to evolve in compatible ways. There may be no difference today, but there could be tomorrow.
These days invalid TLDs don't really exist. New ones are getting released all the time. The only problem you'd run into is if the system you're using is treating .test differently for some reason, but that's likely not the case, for obscurity reasons if nothing else.
RFC 6761 says that there is a difference when I actually resolve these names. The example.com, example.net, etc. will resolve normally to an existent IP. Moreover they resolve the same way on every DNS cache.
The xxx.test will resolve as non-existent by default, unless you configure your own DNS specifically for them.
Well, example.com resolves, .test doesn't. Depending on your use case, one or the other may be desirable, but if you're setting up a dev environment, .test is your answer.
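You can see the difference from any machine with a stock resolver; a quick sketch ("myapp.test" is just an arbitrary name):

```python
import socket

# example.com is reserved AND has a real A record, so this resolves
# the same way on any resolver:
print(socket.gethostbyname("example.com"))

# .test is reserved to never exist in the public DNS, so this raises
# socket.gaierror unless your own resolver is configured to answer it:
try:
    socket.gethostbyname("myapp.test")
except socket.gaierror as exc:
    print("does not resolve:", exc)
```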
I've used example.com in the test cases of my webscraper. When they changed the links on the page, the test cases started failing, and I complained to them, but they did not care.
Not just domains but IP address ranges as well. Doesn't stop lots of people from doing things like using real US DoD IPv4 /8-sized ranges for things they shouldn't.
This also seems to be unknown even to some university professors, who I've seen set up lab exercises using actual Cloudflare ASNs and IPs on a simulator connected to the open Internet. Not exactly dangerous as it would obviously get filtered, but still really bad form.
That's the link that qualifies special behavior for anything ending in ".test", ".localhost", ".invalid", and the set of "example.???" domains.
Copy-pasting the RFC into a comment would be a bit spammy (it's three pages of hyper-specificity), so just go read that. It's quite accessible and the mechanics are useful to be aware of.
A bit off topic, but I lament that these reserved domains are becoming less and less useful for testing web applications. I don't think you can acquire regular SSL certificates for reserved TLDs like "test.", yet an increasing number of browser features only work in "Secure Contexts" (i.e. HTTPS only).
Chrome treats "localhost." as a Secure Context by default, a nice convenience, but for the other reserved TLDs you have to either self-sign (a fairly complex and laborious process that doesn't necessarily work on locked down devices) or register a non-reserved domain with a regular SSL certificate that points to a test IP.
Yeah, because of SSL, you really do need to own at least one real domain name that you use solely for testing. You can hang a bunch of subdomains off it and run separate applications on each one, but you are gonna want a real domain name.
Fortunately domains are super cheap all things considered. A .dev domain (my preference, but admittedly I'm biased) is a buck a month. If you really want to penny-pinch there's much cheaper still.
For me the cost isn't the main concern, it's the name collisions. I can easily obtain a short, memorable, and descriptive domain in a reserved TLD, but I probably have to find some obscure *.dev domain name because I'm competing with the entire world. Not the end of the world, I know, but I think it kind of defeats the purpose of reserved TLDs.
You'd be surprised how many short, memorable domains are still available on new gTLDs. E.g. there are several times more possible 4 letter strings than total .dev registrations that even exist. And if you're willing to use numbers this gets higher, let alone 5 characters or more.
Yeah, but then you use that and some fool somewhere in the pipeline has decided it's an optimization to the spec to not deliver to those addresses and so you can't even test if your email sending is working.
This guy is really committed to the joke, seeing as how he could easily sell bar.com for millions of dollars.
Note also that there is a .bar gTLD, and there is a foo.bar domain as of 2014 (though it doesn't seem to be hosting any content). I run the .foo gTLD, and bar.foo is a real domain (though admittedly not as good as foo.bar). There is no .baz; next round maybe?
Does he ever get a good offer? It seems to be the case that all the biggest websites are not on "good names" but that they took meaningless words and made them good names.
hotels.com? uber.com? stamps.com? There's lots of really big businesses built around single word domains. Not every single one of them of course, but definitely enough for good domains to be immensely valuable.
They sold for quite a bit more than $1M, in fact. I would argue that the actual sales are realistic (in the sense that they reflect the actual price that people are willing to pay for them, i.e. their actual value), and it's merely your expectations that are unrealistic. You are significantly underestimating how valuable a good one-word keyword is in a single globally unified namespace used by billions of people daily. For companies worth many billions of dollars (most of the below), a few million on a killer domain name is nothing.
Here are some notable expensive domain name sales:
Uber.com 2% of the company's equity in 2010 (!!)
Sex.com $13M
Hotels.com $11M
Tesla.com $11M
Porn.com $9.5M
Fb.com $8.5M
An asking offer without a bid is just that - an offer. I can ask $1M for that red apple sitting on my desk. It's completely irrelevant.
PS. This behavior is quite prominent in real estate prices; they're so slow to fall. For example, the US real estate market bottomed only in 2011. You have to be very patient if you want to buy the dip in such "I will only sell for the right price!!" markets.
Real estate isn't a great example because the carrying costs of unsold property in the form of upkeep, property taxes, mortgage, insurance, etc., are non-trivial. In general it's not remotely reasonable to hold onto property for 20 years without selling it. By contrast, the holding cost on a domain name is effectively zero, so there's much less pressure to get rid of one quickly. Plus, seeing as how the prices have only continued climbing over time, people who held out earlier and then got a good deal more recently don't regret it.
If I had a good portfolio of potential $1M+ domain names I too would hold onto them and wait for the right buyer to come along. I'm in no desperate need of money right now.
You'd be surprised. Not going to name anyone in particular but I know of multiple major one word domains that have been bought for that order of magnitude (some more, some less). For certain companies, that's chump change compared to the additional traffic a good domain name will bring their business.
He says that one of the domains he already sold was "corp.com". A few comments above we can read that it was up for auction starting at $1.7 million, and that Microsoft bought it in the last few weeks.
Yeah, it used to be a programming problem challenge website feeding into recruiting efforts. It was turned down though after living out its useful life and now it's just a redirect.
I was once reading some documentation (I think for GCP) and there was a weird artifact in the corner of the page, it moved a bit when I moused over it (??) and when I clicked it I got redirected to this site (on a withgoogle.com domain).
The element never reappeared again.
I always wondered why the element appeared on the page. Obviously because "some combination of factors" ultimately returned true - I mean specifically why :)
The new gTLDs have been great for random people like you and me just wanting a domain to actually use it, but terrible for domainers. I got a nice 4-letter domain (cyde.dev, which I haven't done anything with yet) that I never in a million years would've gotten on .com.
I looked at my tabs. There's a lot of .orgs and some random country domain, one git site hosted on .ht and lots of national news and other sites on my local ccTLD.
And mastodon instances are rather new, and exist on a massive amount of different tlds, of course on .social but also others. Probably the age of the service plays a big role! We'll see more TLD diversity as things get much older.
If Facebook was still The Facebook and for some reason hadn't thought to pick up facebook.com before now, the price on that would be only just bounded.
This reminds me of all the emails I received from root accounts due to putting an actual address of mine in the receiver field of a Postfix guide I wrote years ago. People will blindly copy-paste whatever text you put in tutorials.
It's actually interesting how we didn't go there for the web, although it is likely that we will in the next decade or so.
By the time Tim built his toy hypermedia system ("the World Wide Web"), DNS had a sensible mechanism which could have been used to deploy it cleanly. But that wasn't done. So, first HTTP means "just look up the A record for the name and connect on TCP port 80" and then HTTPS meant "look up the A record, connect TCP 443". Today you also need to check AAAA (for IPv6) and you need to use ALPN to say what kind of HTTPS you speak, but otherwise things remain much the same.
However we have a pile of things we want to do in the near future that don't fit this model very well. HTTP/3 support (i.e. HTTP over QUIC) would ideally be discovered as early as possible for best performance, and Encrypted Client Hello (technology to avoid revealing the names of sites you're visiting to an eavesdropper) really wants to fetch quite a lot of information from DNS too.
So there will likely be a new DNS record for basically "I want to connect over HTTPS to this DNS name" the same way the MX record means "I want to send email to this DNS name".
Ordinarily this would be a huge deployment nightmare, because we know crap DNS implementations can't do new DNS features. The specification for DNS is clear that if somebody asks "What's the WONKABAR for some.name you control?" and you've got no idea what a WONKABAR is, you say there isn't one. But lots of bad software has been written that will instead crash, silently drop the question, or reply that there was a server error, none of which is correct.
However, fortunately the DNS privacy protocols give us a chance to reset expectations. Comcast's deal to get their DoH servers used in Chrome (for Comcast customers obviously) requires that they implement DNS properly, not half-arse it. So that's millions of customers brought on board by contractual obligation just for one example.
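If you want to poke at this today, here's a sketch using dnspython (assuming version 2.1+, which understands the HTTPS rdatatype; whether example.com publishes such a record may vary):

```python
import dns.resolver  # pip install dnspython

# Ask for the HTTPS record (type 65) the same way you'd ask for an MX.
# A well-behaved server that doesn't know the type simply answers that
# no such record exists, rather than erroring out.
try:
    for rr in dns.resolver.resolve("example.com", "HTTPS"):
        print(rr)
except dns.resolver.NoAnswer:
    print("no HTTPS record published (the pre-HTTP/3 status quo)")
```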
Note that the odds are probably low today of anything listening on port 25 if no MX record is set up (I'm guessing most people who set up an SMTP server also set up an MX and a handful of other DNS records).
I really like the idea of setting your MX record to 127.0.0.1. I am not sure what problems this causes (the author mentions people are angry about it) but I like it.
This makes me realize that DNS is slightly flawed in that you can prove "the owner of example.com wants to accept email at X.Y.Z.A" but not "the owner of X.Y.Z.A wants to accept email for example.com". (My experience using a managed load balancer on Amazon was that I got a ton of traffic for websites that weren't mine. Some DNS record must have been out there pointing towards our IP, which I guess is bound to happen when you only have 2^32 of them to share among all of humanity. Someone should do something about that...)
> I really like the idea of setting your MX record to 127.0.0.1. I am not sure what problems this causes (the author mentions people are angry about it) but I like it.
I imagine it could cause annoying loops with some mail server configurations.
RFC 7505[1] defines null MX records. This might be a good alternative, depending on what your intention is.
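For reference, a null MX is a single MX record with preference 0 whose exchange is the root, i.e. something like `example.com. IN MX 0 .` in zone-file syntax; it's an explicit declaration that the domain accepts no mail, rather than an MX pointing somewhere that will time out or loop.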
As an aside, I believe "foobar" is a sanitization of "fubar", which is an unofficial military acronym for "f*cked up beyond all recognition". If you're interested, a search for "history fubar" turns up more about "fubar", "snafu", and other colorful unofficial military acronyms.
I worked for Bill Blue at CTS net for about 6 weeks in '94. One of the machines I worked on was named "crash". I asked if that name was not tempting fate.
Bill said it was so that some uucp address would be
I just use no@thank.you for these silly sites which try to force sign-up, or variations therein, e.g. no.th@n.ks
length of access using this method varies but it's usually enough time to cache and consume their precious content
it's impossible to verify a valid email address without sending a verification link. one half measure is to ping for an MX entry on the user-provided domain, but even that isn't bulletproof
if they go to such unnecessary lengths to hide their content then it simply isn't worth viewing. scoff, close tab, forget
I own a curse word (Swedish) domain and I get quite a lot of similar emails. Nowhere close to thousands per day though. Still the level where I can answer the emails with a bad joke.
Lots of lucifer at helvetet.com etc :) I sometimes reply with an mp3 with a few seconds from Diamanda Galas' "The Litanies of Satan". https://www.youtube.com/watch?v=OBeTXiTZbCc
https://westinghouse.com/ seemed to have a test environment that sent me emails with invoices as well. I asked about delivery options :S
If CF is still using that abomination "hCaptcha", you can get an accessibility cookie for it which doesn't stop the captcha from appearing but does cause it to actually work when you click on the "I am a human" widget, unlike the normal broken flow
I had innumerable problems with images not loading (there weren't any errors on the console, the div was just blank) and it was always just pitifully slow to load the ones it did. I don't even bother interacting with the normal flow anymore, and just go get an accessibility cookie the first time anything challenges me
reCAPTCHA was a brief interruption, occasionally; contact with hcaptcha is "well, there goes 10 minutes of my life I'll never get back ... I should just close the tab cause I didn't want to know that bad anyway"
I recognize that's not an answer to what you can do about hcaptcha, but since hcaptcha's goal is to make the web harder to use, I guess from a certain perspective there's nothing wrong with it
the Foo has asked me to say "thanks" for this thread. you can read his sad story on www.bar.com. he used to answer email addressed to foo@bar.com but was completely overwhelmed by the mid-90's and has left his correspondence for me to tend to.
the New York Times included him in a story back in 2001...
I've read an interview with the guy who owns dupa@wp.pl. Dupa is a Polish swear word, often used instead of foobar by Polish programmers, causing lots of embarrassing mistakes when it shows up in production. He apparently got a lot of stuff too.
I’ve always wondered if “foo” and “bar” was specific to a certain (now older) generation of programmers and would eventually fade into obscurity. Or has that bit of programmer culture been passed on to newer generations too?
Afaik 'foo' and 'bar' still tend to be commonly used example variable/function/data structure names by older practitioners of C and C++ in academia, so I do wonder if it will pass on...
A friend of mine in Greece owns the null.gr domain. He's gotten a ton of automated email from misconfigured systems over the years, some of them with serious security implications :)
This is actually a dark pattern on their part, surely designed to boost email signups. The sign up page has an email box and a continue button, but you can press continue without entering an email.
> If an empty list of MXs is returned, the address is treated as if it was associated with an implicit MX RR, with a preference of 0, pointing to that host.
That said, it doesn't look like there's an SMTP server running for bar.com.
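Here's a sketch of the quoted fallback rule using dnspython (assuming version 2.x; error handling is trimmed to just the case the rule covers):

```python
import dns.resolver  # pip install dnspython

def mail_hosts(domain: str) -> list[str]:
    """Return SMTP target hosts per the quoted RFC 5321 rule: use MX
    records if any exist, otherwise fall back to an implicit MX with
    preference 0 pointing at the domain itself."""
    try:
        answers = dns.resolver.resolve(domain, "MX")
        return [str(r.exchange)
                for r in sorted(answers, key=lambda r: r.preference)]
    except dns.resolver.NoAnswer:
        return [domain + "."]  # the implicit MX

print(mail_hosts("bar.com"))
```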
It's a bit old and requires jquery, but it's fairly easy to gut the library and just use the email logic. It even uses your example of "gnail.com" in the README.
Though it is substantially easier to register a domain for fun than register a TLD for fun, even though the principle remains the same: pay some money and do some DNS configuring. It’s just that for TLDs the value of “some” changes.