foo@bar.com (bar.com)
852 points by diggan on Sept 27, 2020 | 281 comments



Something that I’m surprised a lot of devs don’t know; there are official domains you’re supposed to use for documentation, testing, etc. They are specifically reserved by IANA for these purposes. Originally I think it was just example.com, but they now have a list of all them: https://www.iana.org/domains/reserved


Indeed. I've owned `invaliddomain.com` for almost 20 years. You'd be surprised how many people use it for testing. One morning I woke up to 30,000 e-mails from Sony Japan with PDFs attached of scanned hand-written part orders. Something similar happened with Boeing sending me backup notifications. I notified each of these companies about their misconfiguration through their official channels, only to be told "no, it's your server doing this", usually followed up with an e-mail a few weeks later along the lines of "sorry, our bad". So, if you're testing something and using a test domain, use the IANA reserved domains, please. Those were the days when I was running my own servers. I don't see it as often now that my e-mail is hosted.


.app, .dev, .prod, and .zip all had a substantial volume of problematic traffic discovered during the Controlled Interruption period (which occurs prior to launch and consists of a wildcard DNS entry placed on the entire TLD). You would not believe some of the brokenness that was happening there. .zip may need some explanation -- apparently there are lots of library API calls out there that take a path string as input and try to load it as either a local or remote file. You can see where this is going.

https://www.icann.org/en/system/files/files/name-collision-f...
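The failure mode described above -- an API that takes a path string and decides on its own whether to read a local file or fetch a remote one -- can be sketched in a few lines. The loader here is hypothetical, not any specific library's API:

```python
from urllib.parse import urlparse

def naive_load(path: str) -> str:
    """Hypothetical 'load a file' API: decides local vs. remote
    purely from the shape of the input string."""
    if urlparse(path).scheme in ("http", "https", "ftp"):
        return f"fetched remote resource {path}"  # a network call the caller may not expect
    return f"opened local file {path}"

# A bare path stays local...
print(naive_load("reports/archive.zip"))
# ...but the same filename with a scheme in front becomes a network request,
# and once .zip is a live TLD, "archive.zip" is also a resolvable hostname.
print(naive_load("https://archive.zip/payload"))
```

The bug isn't the .zip TLD itself; it's that one string parameter silently selects between two wildly different trust domains.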


ICANN should never have assigned .zip; there's just too much potential for abuse and confusion in giving away a common file extension as a TLD.


There's a lot of overlap between file extensions and TLDs though. .py, .sh, and .app are some more examples (but a fully exhaustive list would be in the dozens if not hundreds). At some point you have to just treat them as the separate namespaces that they actually are (and not somehow try to block a TLD from being used as a file extension, or vice-versa).

Besides, accidentally resolving a file extension to a TLD is only one of many possible different serious errors that can result from exposing an API that can load files locally or remotely, and thus make network calls that you might not be expecting. Fundamentally you need to fix that API either way.


not to mention .com


Geez, that's by far the best example and it didn't even occur to me. And it's an executable file type. Good call.


I think it's okay. They are different namespaces and people should fix their bugs. It would be like skipping the street number "911 Foo St." because "911" is the emergency phone number and 911 Foo St. is just a regular house. I'm sorry if someone is confused but they shouldn't be confused. They're totally different things.

The biggest mistake we made with DNS was the "shortcut" of implicitly appending the root to strings treated as domain names (letting "example.com" stand in for "example.com."). The file "foo.zip" and the website "foo.zip" wouldn't even be ambiguous if we called the website "foo.zip.". "ndots" also causes operators of DNS servers a lot of pain -- some malfunctioning program tries to resolve "example.invalid" in a tight loop, it balloons into queries for "example.invalid.", "example.invalid.local.", "example.invalid.cluster.local.", and "example.invalid.svc.cluster.local.", and then DNS blows up, breaking everything.
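The ndots ballooning described above can be sketched as a simplified model of glibc-style search-list expansion (the function and defaults are illustrative, not a faithful reimplementation of any real resolver):

```python
def candidate_queries(name, search_domains, ndots=1):
    """Simplified sketch of resolver search-list expansion."""
    if name.endswith("."):
        return [name]  # fully qualified: exactly one query, no expansion
    expansions = [f"{name}.{d}." for d in search_domains]
    if name.count(".") >= ndots:
        return [name + "."] + expansions  # try as given, then the search list
    return expansions + [name + "."]

search = ["local", "cluster.local", "svc.cluster.local"]
# "example.invalid" balloons into four queries...
print(candidate_queries("example.invalid", search))
# ...while a trailing dot short-circuits the expansion entirely:
print(candidate_queries("example.invalid.", search))
```

One retry loop over a name like this multiplies every failed lookup by the length of the search list, which is how a single misbehaving program can hammer a DNS server.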


> It would be like skipping the street number "911 Foo St." because "911" is the emergency phone number and 911 Foo St. is just a regular house. I'm sorry if someone is confused but they shouldn't be confused. They're totally different things.

Floor 13.


We just never should have allowed filename extensions to have semantic power. Resource forks are far more elegant and you could do simple look ahead checks to verify types etc.


Back in '81 when MS-DOS came out and ascribed "magical power" to files ending in ".com", DNS didn't exist (except possibly as experimental stuff between a handful of university or military mainframes). It was also an era where the vast majority of users of those .com files were running them off 5 1/4" floppies (or possibly even 8" floppies), and the very idea of adding "resource forks" to your on-disk data would have been ridiculed as an outrageously profligate waste of extremely valuable disk storage resources.

It's easy to say "we should never have" in retrospect. But you're basically accusing programmers back in the late 70s of having insufficient foresight to see problems that would make their decisions seem bad decades later.

We also should never have let companies sell cigarettes. Or burn fossil fuels. Or start social networks.


The magical power of extensions actually predates MS-DOS -- both CP/M and TRSDOS did the same thing in the late 70s, and I'm pretty sure both of those were inspired by mainframe operating systems. (I was a TRS-80 kid, and didn't know until much later why our equivalent of batch files had the extension "JCL".)

Having said that, though, we could at least wish that some variant of the Mac's old idea of separate creator and document codes stored as metadata had caught on. Sure, it'd have been a few more bytes per directory entry (to be specific, five more bytes!), but it was a lot more flexible -- and if more operating systems had been built with that "document types are metadata" idea, that metadata could have been replaced with MIME types later on, like it was on BeOS.


> But you're basically accusing programmers back in the late 70s of having insufficient foresight to see problems that would make their decisions seem bad almost half a decade later.

The URL standard was published in 1994.

Sure, in a vacuum you could read my comment as meaning for all time, but given that the parent comment is discussing ICANN domains and my comment was relevant to that, I think it’s a little uncharitable to do so.


But it made them so much money!!


ICANN should never have done gTLDs at all.


I've personally run into the .zip problem thanks to browser's omni address/search/chocolate bars. I intend to search for a zip file whose name I know but the browser "helpfully" realizes the term includes no spaces and ends in dotsomething and attempts to treat it as a URL.

A simple workaround is to add a preceding space or use something like inurl:, but that isn't automatic behavior, so whoever owns mlpdwarfporn.zip is getting a lot of unintentional hits.


if we could go back I wonder if it would be better if we had required a leading dot in domain names ".google.com"


Or if browsers weren't trying to be too smart and using the same box for both searching and addresses. Trips me up with .py files all the time.


Press Ctrl+K and browser will be forced to search.


After ".py", use " ?" (the space is important) to force the search in firefox.


Using a question mark in front of the term is traditionally how some browsers trigger search. They usually have a jeybconbe to add it automatically.

For example, in firefox and chrome ctrl-l will clear the URL bar and put the cursor and focus there to take you to the location you enter, and ctrl-k will do similar but pre-fill the location bar with a preceding '?' so a search is triggered on the input.

These shortcuts have existed for quite a while. They used to just focus the respective separate input boxes, when it wasn't all done through one.


What's a "jeybconbe"? It's a googlewhack for this thread.


And six hours later, as I read this thread, nobody has registered the domain still! ;-)


My best guess is that it looks like a typo for "keycombo".


Yes, thanks. Switched to a new keyboard on my phone, and I'm getting some different typos now. Not necessarily more, I always seem to have a lot that slip through, just different...


Or just start the query with `?`

Typing `?bla.py` in the omnibar will perform a search for `bla.py` on both Firefox and Chrome


I think that ends up as user unfriendly as requiring the dot after the TLD. I don't read up on all the gTLDs so I didn't realize zip was one for the longest time. I think ICANN just went nuts with TLDs, especially ones like .app and .zip that have long-standing associations with ubiquitous file extensions. That combined with the "smart bar" just leads to trouble.


or go back to left to right reading.

http://com.google.www.

vs

http://com.google.www./file.zip


This is the real answer here. It's widely believed that domains working the opposite way of filesystem hierarchies and even URL paths was a huge design mistake. Think of how many billions of dollars have been lost over the years from account compromises resulting from people not correctly distinguishing yourbank.com/account/foobar from yourbank.com-account.info/foobar.


1. The reason for the change is generally thought to be email: you read least specific to most specific from left to right, like non-American dates, e.g. firstname.lastname@group.department.university.edu

2. also I think ! was used rather than . for some networks at some times. Certainly I’ve heard this anecdote from some early users of the internet (or maybe other computer networks)

3. Traditionally the dot is the zero-level or extra-top level domain, a bit like / for root in the Unix file system. Indeed if you use dig you may be familiar that it writes domains like “www.google.com.” My web browser only partly seems to accept this format.


> 2. also I think ! was used rather than . for some networks at some times. Certainly I’ve heard this anecdote from some early users of the internet (or maybe other computer networks)

https://en.wikipedia.org/wiki/UUCP#Bang_path

! was used for manual routing from the source to the destination, when messages were copied from server to server via UUCP. People would write paths like ...!ucbvax!deptserver!myname , which means "you probably know how to get messages to ucbvax; from there, here's how to reach me".


And here's an example of an old business card using that format for an email address:

https://en.wikipedia.org/wiki/UUCP#/media/File:UUCP_Email_Ad...


...!mcsun!nuug!fics!el here :-). Of course, those were actual routes, rather than just reverse hierarchies.


Interesting, and makes sense, seeing as how the Web didn't yet exist when domain names were first invented, but email did. It does make sense that they were thus represented in a manner catering to that use case. In hindsight I guess we'd prefer if emails were backwards too, e.g. com.gmail@cydeweys


This is ironically what’s become popular for service-specific usernames (“on twitter @dan” and “on instagram @danny”). It’s arguably more useful than the email notation because prefixing with the @ is a clear sign that a username is coming next.


Only that the @ ("at") reads terribly then.

So com.gmail/cydeweys maybe?

Or it could be just an object in a global namespace (accesses through a protocol specific facade). So com.gmail.cydeweys


You could easily just use a different symbol, or pronounce it differently. We think it makes sense the way it is now, but that's only because we've all collectively gotten used to the meaning it has in email addresses. If it had had a different meaning all along then that would make sense to us.

com.gmail.cydeweys doesn't really work because you need some way to distinguish that as an email address and not just a subdomain on com.gmail. So it really does need a different delimiter, like how you need a different delimiter in URLs when it switches from the domain to the path (and then also to the querystring and fragment).


'@' was used to mean 'at' or 'each' for a long time before email addresses. Often in pricing. '12 @ $3' means twelve units priced at $3 each. It's referred to sometimes as 'commercial at', including its Unicode name.

https://en.wikipedia.org/wiki/At_sign

It was used as 'at' in computing in Algol 68 where it was a shorthand for a keyword 'at'. It has a different application as 'at' in Dyalog APL.


It wasn't remotely as familiar to a broad audience back then as it is now, though. Considering that people accept its non-idiomatic use for website user names even now (which is the same as what I'm proposing no less), I don't think this would be a problem.


Using your example with left-to-right domains and needing a delimiter, one could build a URL structure for mail that changes the protocol but not the path.

https://com.google. for the main page

https://com.google.mail for gmail

https://com.google.mail/localpart for one's own gmail account

mail://com.gmail/localpart to send mail to what is currently localpart@gmail.com

Alternately, we could continue to use '@' and put the localpart first. The URL method of specifying a username or username and password does this for HTTP/HTTPS.

http://username:password@com.site/login

And currently we use mailto:localpart@example.com but we could use mailsend:localpart@com.example instead.


I would prefer it to read more like UUCP routing. com.twitter@dan

The other nice thing about this system, is that you can drop the domain when the communication is internal.

And if you really wanted to, you could force the communication protocol: xmpp://com.twitter@dan vs smtp://com.twitter@dan


Then of course the web would similarly use

http://homepage.html@org.example


The domain name should definitely still be first. And there are benefits to using a different syntax for domain names than for email addresses, so you can differentiate the two when they're presented out of context (e.g. on a business card or ad).


I was mostly just making a joke about the suggestion that email addresses should look like "com.gmail/cydeweys" from upthread...

But I could _kinda_ make an argument for it. When I open my browser, I want to go to "the homepage of the company running the New York Times", so homepage@com.newyorktimes - or perhaps the menu from French Laundry - menu@com.frenchlaundry


> 3. Traditionally the dot is the zero-level or extra-top level domain, a bit like / for root in the Unix file system. Indeed if you use dig you may be familiar that it writes domains like “www.google.com.” My web browser only partly seems to accept this format.

The browser actually handles it just fine, but some webservers refuse to handle it, despite it being in the spec. Most famously, Traefik and Caddy refuse to support it, just because their devs don’t understand the spec and think they know better (what a surprise, both are written in Go after all)

This actually causes quite a bit of trouble when you have internal systems resolving pretty much everything as local domain first, and some external webservers don’t support the spec. Most famously, caddyserver.com. itself is broken


Or just required the use of the true FQDN - "google.com."


that's what I meant actually—I just forgot it was trailing


My college's online system for everything is called edukacja.cl. A lot of people type that into the address bar, expecting to be redirected to the login page. Fortunately, the person who bought that domain didn't misuse it. I can imagine someone putting a fake login page there, and getting access to a lot of sensitive student details.


Ruby's `open` accepts urls. I can't say I've ever used this functionality.


Ask your red team about that ;)


PHP's file_get_contents supports it as well.


I don't know Ruby enough to comment on that but I would guess it requires a scheme as well.


I've only ever used it for that! For files there's `File.open`! ;)


Clojure has slurp too.. never been bitten, I thought it was useful.


> "no, it's your server doing this"

at that point i expected the story to go "and then they sued me for stealing their documents"


He orchestrated a man in the middle attack by being in the middle of the engineers and their incompetence.


I owned forexample.com for 15 years or so and saw all kinds of mail but the most persistent was a record company owner who, from time-to-time, wrote semi-deranged angry emails demanding that I turn the domain over to him. I always had grand plans for the domain but never acted and, a couple years ago, I forgot to renew and the domain is now in someone else's hands. I don't miss it, especially the record company guy.


I wish there were a site that all this stuff could be posted so we can all share in the fun. Always entertaining hearing about this stuff.


https://thedailywtf.com/ is pretty close

When I read reddit (before it jumped the shark imo) there was something similar in a non-tech way called /r/idontworkherelady


I've run out of funnies to find in the codebase I work on. This looks like a fresh supply of much much more. Much appreciated.


I remember reading - a long time ago - a story of a developer who used http://xxx as a placeholder for unknown domains, until at some point browsers started resolving single-word links into www.<word>.com... :-o


Back in the days before I'd ever heard of multicast DNS or zeroconf networking, I had a local dns server set up with all our subdomain.ourdomain.com duplicated as subdomain.ourdomain.local and pointing to our local dev/staging versions of our websites. It worked wonderfully, until I think MacOS 10.2 arrived (so, like 20 years ago almost) which had mDNS support for the first time and "broke" it all on me...

I switched to using subdomain.ourdomain.staging instead and got on with life. I wonder if anyone's gonna have to deal with the fallout of that decision when someone one day pays ICANN enough money to own the .staging TLD?

(I wonder how much "interesting" stuff would land in your mail/web/ssh/whatever log files if you registered .staging and .dev and just logged everything that came past, or intentionally/actively honeypotted everything there?)


Use .lan. Reserved, so no issues.


I was about 5 seconds away from clicking this link on my work laptop until I realized what it would redirect to. For others like me, it's very NSFW


It doesn't seem that browsers still do the expansion thing; I've tried it, and just get the "We’re having trouble finding that site." error message, or similar. But for some time that expansion was real.


Haha. That’s actually surprising, I mean that one takes some work to even type. I’ve mentioned previously on HN that I own doesnthaveone.com, which is constantly bombarded with random crap. I wish I had some big public customer data to see what other fake ones show up.


On the other hand, you knew what you were getting into when you decided to be the owner of a meme domain! People should use properly reserved domains, but I can't really blame them for accidentally using meme domains.


I registered non-existent-domain.com many years ago when I saw it referenced in some article as a place-holder domain name.


I had my.homepage.com for a while back in the early 2000s, unfortunately I wasn't allowed to monetize it, but looking at the referral logs was always interesting.


Why would you tell them and bring an end to the lolz?


Because, technically, opening those emails when they contain confidential information could be construed as a violation of the CFAA (it’s very broad).


Indeed it's probably broad enough that you could likely find an ambulance chaser who'd go after people who _send_ you those emails "in excess of their authorized access" to your mail server.

You could weaponise this the same way companies use defensive patents... "Sure, I opened one of your emails, but you've connected to my mail server without authorisation <checks logs> 27,943 times so far this month. Go on, lawyer up. Bring it on!"


This.


Thank you, I was unaware of this. I found the relevant section in the doc that was linked from your original link:

2. TLDs for Testing, & Documentation Examples

To safely satisfy these needs, four domain names are reserved as listed and described below.

    .test
    .example
    .invalid
    .localhost
* ".test" is recommended for use in testing of current or new DNS related code.

* ".example" is recommended for use in documentation or as examples.

* ".invalid" is intended for use in online construction of domain names that are sure to be invalid and which it is obvious at a glance are invalid.

* The ".localhost" TLD has traditionally been statically defined in host DNS implementations as having an A record pointing to the loop back IP address and is reserved for such use. Any other use would conflict with widely deployed code which assumes this use.

https://tools.ietf.org/html/rfc2606#section-2

3. Reserved Example Second Level Domain Names

* example.com

* example.net

* example.org

https://tools.ietf.org/html/rfc2606#section-3


I learned about .invalid last year and had an immediate use for it where we needed a syntactically valid email to match a schema but didn't want it to be deliverable.

I discovered quickly that some other systems wouldn't accept the placeholder emails such as notused@email.invalid. Too many systems try to be too smart about the syntax of emails (+ subaddressing is another minefield).

Had to go back to using something like notused@invalid.toplevel.com
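For anyone wanting the same trick, here is a minimal sketch (the helper name and local part are made up) of constructing a syntactically valid but guaranteed-undeliverable address on the RFC 2606 .invalid TLD:

```python
def placeholder_email(local_part="notused"):
    """Build an address that parses as valid e-mail syntax but can never
    be delivered: RFC 2606 reserves .invalid, so no MX record for it
    will ever exist in the public DNS."""
    return f"{local_part}@placeholder.invalid"

print(placeholder_email())        # notused@placeholder.invalid
print(placeholder_email("qa01"))  # qa01@placeholder.invalid
```

As the comment above notes, some downstream validators reject .invalid anyway, so test against every system the address has to pass through.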


> + subaddressing is another minefield

I'm still not sure if this is because the developers are incompetent and don't understand that they can just use an established standard instead of rolling their own janky parser, or if it's because they just don't want to let the user tell who had their databases leaked or sold their email to a spam list.

I used to think the former because there's so many different solutions to the latter that don't involve actively annoying the people you're trying to extract money from, but I'm starting to think that it's a little from column A and a little from column B.


A lot of people learned this the hard way when Google bought and later enabled permanent HSTS for the .dev domain (prior to actually publicly releasing it) in Chrome, breaking everybody's non https local .dev environments.

As mentioned above, it should have been .test


The HSTS preloading fortunately ended up being an unintentional additional type of Controlled Interruption period, which was a good thing in the end. It would've been a lot worse if, one day, your fake domain names were resolving locally, and then literally the next day they were real domain names resolving remotely, with who knows what result. This at least forced people to address it well in advance of the domain names potentially resolving globally.

https://www.icann.org/resources/pages/name-collision-2013-12...

https://jdebp.eu/FGA/dns-use-domain-names-that-you-own.html


Sounds like it could be intentional.

— We bought .dev, now what we do with all those people in the wild misusing it?

— Well, most of them misuse it with our browser, let's break at least their hacks early and loudly.


It was unintentional.

Source: The horse's mouth. I'm the guy who came up with the idea of launching .dev and .app as HTTPS-only TLDs, and I'm the one who had them added to the HSTS preload list.


@Cydeweys, I respect this level of candidness


Out of interest, how did this happen without any investigation into possible consequences etc?

It’s a huge change that surely warranted special attention before making it happen?


It's not a huge change though. .dev was a new, never-launched TLD, so there were no existing real domain names to break with the addition of HSTS preloading. Established best practice for decades at that point was already to always use real domain names (or subdomains thereof) or specifically reserved test domains/TLDs (see RFC 2606, published in 1999) for testing/development/local networking purposes.

So yes, we didn't anticipate how many people weren't following the best practices, but that would have been hard to determine prior to doing the thing anyway. There were also lots of people who had the mindset of "We won't change anything until it stops working", so in some sense a lot of it was unavoidable. See e.g.: https://github.com/laravel/valet/issues/204 https://github.com/laravel/valet/issues/294 https://github.com/laravel/valet/issues/431 (note that Laravel users were responsible for a non-trivial fraction of the total problems experienced, and that we only discovered all this post-HSTS-preloading). The problem was repeatedly pointed out and the maintainers refused to fix it until it actually broke. So, inevitably, it broke, and then they fixed it.


Were you somehow unaware of the many, many, many people who used .dev in their local environments? You must've had some idea, since the initial plan was to use the .dev TLD for exactly that within Google.

I've always hated Google for egoistically claiming this tld, and ICANN for letting them.


Basecamp’s Pow project comes to mind.

It enabled zero-config local development, whereby at the time it would make your locally running dev server available on .dev

So instead of having to spin up a local server and then visiting say 127.0.0.1:3000,

I could instead just visit myappname.dev and it would show me what would previously show on localhost or it would spin up a server first for that app then show it to me.

They switched to .test in response to google’s change.

Official site: https://pow.cx

Thread on change from .dev to .test https://github.com/basecamp/pow/issues/386


I just got an “untrusted connection” warning visiting pow.cx


Oh whoops, just habit to type https now. The page is from an earlier time when we didn’t mind static sites not being https.

Visit http://pow.cx


Yes. Not nearly as many people were using .dev in local environments as you seem to think. We didn't know anyone, and by the very nature of them being fake and locally-configured-only it's not something you can easily find out about. And no, our intention was not to use .dev for fake domain names.

Also, just because someone is using a fake domain doesn't preclude that from being created as a real domain name farther down the line. That's why you shouldn't use fake domain names. This problem has been known since at least the 90s and is not a good habit to get into. Them now being real makes them actually more useful (and not reliant on potentially unsynced local-only config).


My problem was that it also broke properly configured domains. We have machines in xxx.dev.example.com (for example) that I used to rely on the resolver searching for, so I could just type "xxx.dev" and it would then know to try it with example.com banged onto the end. Then everything in .dev started resolving so my abbreviations started resolving to other hosts.

I mean, I THINK this is "properly configured", but it also isn't a huge deal to avoid. Just was annoying when it started happening. Didn't FEEL like I was misconfigured. :-)


If your machines started exhibiting behavior you didn't expect or intend from a configuration you believe to be proper, then something has gone wrong. It may be the system, it may be the configuration, and it may be your understanding of either. Or any combination.

In this particular case it sounds like your resolver was set up to try an external resolution first and then append an internal domain if the external resolution failed. And as you say, this clearly worked just fine for a long, long time. Then it suddenly started failing one day, for reasons unrelated to anything you changed.

At this point, most people would find it reasonable to blame the external change for breaking their fully functional, correctly working, "properly configured" setup. Some, perhaps contrarian or perhaps more cautious, would note that the "properly configured" approach only worked so long as external systems played ball. I think it might be the case that you were bitten by this assumption that seemed safe at the time, leading to the awkward and uncomfortable conclusion that your systems were indeed misconfigured.


And also, from a larger standpoint, we have a choice of two mutually exclusive options here. It comes down to either (a) freezing the TLD hierarchy in place and never creating any new ones (e.g. not even for a new country, or .mars or whatever), or (b) not retaining indefinite backwards support for multi-label DNS search lists when they happen to collide with new TLDs.

I would argue that, of the two options, option B is the less onerous one, and less restrictive on the future growth of the Internet. It's not that hard to set up some aliases to be able to SSH quickly into the right hosts without having to manually type out longer paths.


This attitude is the reason I refuse to use Chrome and only recommend the alternatives.


I don't think fresh launches, even with bold new parameters, are always considered 'huge changes' in that sense. The fact that it could break local environments was probably not even documented for those specific environments.


There was a time when the creation of a new TLD was unthinkable, so it seemed safe to use a domain syntax internally that was never going to become a public TLD. Then the TLD money grab happened and what was assumed to be safely isolated wilderness was sold out from under everyone.


There never was such a time, and IANA never said "We won't make any more". TLDs have been continually created for at least the past two decades. Look up the history of e.g. .biz, .info, .museum, .aero, .mobi, .cat, .asia, all the various ccTLDs that are created as new countries form (.ss), etc. And none of that is even counting the new gTLD expansion round that kicked off in 2012.


How do we know you are the right horse? Sorry, just watched The Social Dilemma and now I am skeptical of all information on the internet.


If you give me your email I can send you a message from my _lastname_@ work address, which you can then match up with published media I've done under my name, e.g. https://www.youtube.com/watch?v=kBkX30Cj7Bw and https://security.googleblog.com/2017/09/broadening-hsts-to-s...

Or you can look at the matching username on GitHub ( https://github.com/CydeWeys ) and see that I'm a member of the Google org and owner of the Nomulus repo ( https://github.com/google/nomulus/graphs/contributors ), the software running our TLD registry and which was announced here: https://opensource.googleblog.com/2016/10/introducing-nomulu...

Or I could just dig out my GPG key and sign a message.

So I respect your healthy skepticism, but I promise I'm really me, and not just someone pretending to be me! And I'll also note that, in all my time on HN, I've never once seen someone pretending to be anyone they weren't, at least not on seasoned accounts like mine. You can go back through my 5 years of comments on here, and every time I say I'm a specific person out there in the real world, it's always the same consistent person.


I actually confirmed your GitHub before I made the previous comment. It was just a frivolous reply that I found funny but HN didn't. Tough crowd. But I really, really appreciate your reply, thanks for taking your time to prove your identity. Hopefully I didn't waste too much of your Sunday morning.


My experience here is that humor (especially deadpan) tends to land very poorly on HN, both because people are never contextually expecting it and because they tend to downvote it when they do recognize it, as they dislike perceived low effort comments that they don't feel are meaningfully contributing to the discussion (i.e. adding noise, not signal). HN may as well put "Don't try to be funny" in the FAQ, because it almost never works out.


Mistake made and lesson learnt. After being introduced to the larger internet community via reddit, sometimes you slip up in more serious places like HN.


Yep, Hacker News is like Reddit, without the jokes and porn.


lol


> If you give me your email I can send you a message from my _lastname_@ work address

Not disputing your account in any way, but please don't propagate the widespread myth that only people who own an email address can send from that email address. If you wanted to use your well-known email address as authentication, you'd need to offer to reply to a mail sent to that address.


You could check whether the email passed SPF, DKIM, and DMARC checks. Google has certainly implemented these for their SMTP servers. If all of the tests pass, you can be pretty certain that the sender is the owner of the address.
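As a sketch of what the receiving side could look at (this only parses the Authentication-Results header that a verifying MTA writes; it does not perform the cryptographic checks itself, and the message below is fabricated):

```python
import email

# A fabricated inbound message; a real receiving MTA that performed the
# SPF/DKIM/DMARC verification would have written this header itself.
raw = b"""\
Authentication-Results: mx.example.net;
 spf=pass smtp.mailfrom=google.com;
 dkim=pass header.d=google.com;
 dmarc=pass header.from=google.com
From: someone@google.com
Subject: hi

hello
"""

msg = email.message_from_bytes(raw)
results = (msg["Authentication-Results"] or "").lower()
all_pass = all(f"{check}=pass" in results for check in ("spf", "dkim", "dmarc"))
print(all_pass)  # True
```

Only trust this header when your own mail server added it (and stripped any copy the sender supplied), since it's plain text like any other header.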


This is what I was getting at. It's definitely not possible to fully impersonate an email from an @google.com address. We've got all the security bells and whistles turned on.

Though, admittedly, it would be easier for a layman to verify ability to reply to an email than to verify all those features, so point taken.


This is argument from authority, but an exquisite one.


> .dev domain (prior to actually publicly releasing it) in Chrome, breaking everybody's non https local .dev environments.

Yeah, except it broke more than that.

A lot of folks use (or used) ".dev" and ".prod" as internal sub-domains of their actually-owned domain (dev.example.com, prod.example.net).

For convenience, you could use resolv.conf's "search" option to simplify things, so at the CLI one could type "ssh webserv01.dev" and the resolver would then append the company's domain to get the FQDN for the query.

Except once Google made their changes "webserv01.dev" now could go out to the Internet—especially if you had it in a browser and it tried to be "clever".


This applies to every domain name though (at the second level too, not just the first). You'd have exactly the same problem if you were using a fake .com domain that someone then actually registered. The best practice has always been to use a real domain name that you own, and subdomains thereof. If you don't, your setup isn't working so much as it is not broken ... yet.

See https://jdebp.eu/FGA/dns-use-domain-names-that-you-own.html for more explanation.


> This applies to every domain name though

Say I own "throw0101a.com", and then use "dev.throw0101a.com" and "prod.throw0101a.com". (Or you own CydeWeys.com.)

Previously, when .dev and .prod were not TLDs, it was fairly safe to type "ssh websrv01.dev" and "ssh dbsrv02.prod" because if the queries leaked onto the Internet they'd fail.

Now, with the post-Google TLD changes, if you type one of those things, and the local DNS happens to not be configured properly (i.e., the resolv.conf 'search' option is not present), then strange things can happen.

Further, if you put "websrv01.dev" in a browser now, it may go off into the Internet and try to be clever about auto-complete instead of just doing a local query.
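A minimal sketch of the setup being described (all names and addresses here are made up). Note that with glibc's default ndots:1, a name that already contains a dot, like "websrv01.dev", is tried as an absolute name *first*, so the query goes out to the real .dev TLD before the search suffix is ever appended:

```
# /etc/resolv.conf on a client machine (illustrative values)
search corp.example.com      # appended to bare names like "websrv01"
nameserver 10.0.0.53
```

So "ssh websrv01" still expands to websrv01.corp.example.com, but "ssh websrv01.dev" now hits the public .dev zone before any search-list fallback.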


This applies to any TLD though, not just .dev and .prod, not just gTLDs generally, and not even just real TLDs (that's right, you could be using a fake TLD on an internal network somewhere and the "creation" of a domain on that fake TLD on that network could cause your previously working hostname shortcuts to start failing).

Relying on DNS search lists to find the right host is a bad security practice that has caused security incidents even outside the context of the creation of new TLDs. It's best to always use fully-qualified domain names. A lot of people responsible for implementing DNS search lists in the first place now regret having ever created them.

More info at https://www.icann.org/en/system/files/files/sac-064-en.pdf (particularly section 4.1.3 and the preceding logic leading up to it).


Which is one of the reasons I always insist our dev and prod teams use FQDNs (under domains the company owns) in all of their configurations.

In more dynamic environments, the config may have the domain as its own setting and each service as just the hostname, but the software must combine them before use, or better yet combine them in the config if variable expansion is possible in the config language they are using (ex: db_server="db-01.${domain}" with domain being defined near the top).
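As a sketch of that pattern (all names are made up), deriving FQDNs from a single domain setting rather than hand-writing full hostnames in every service entry:

```python
# Sketch: keep the domain as one config value and derive FQDNs from it,
# so configs can never accidentally contain a bare short hostname.
CONFIG = {
    "domain": "prod.example.com",
    "services": {"db": "db-01", "web": "web-01"},
}

def fqdn(service: str) -> str:
    host = CONFIG["services"][service]
    # The trailing dot makes the name absolute, so resolver search
    # lists can never rewrite it.
    return f"{host}.{CONFIG['domain']}."

print(fqdn("db"))  # db-01.prod.example.com.
```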


I own "dev.host" and we get a ton of interesting traffic.


Can't wait for .internal to be registered so that my internal DNS breaks.


The only proper way to do internal DNS is to register a domain. Even a free one will do.

The more people follow decent practices at home, the fewer businesses will accidentally break because one of the admins thought it'd be alright since it works for them at home. If you set your DNS domain correctly you can also save yourself some typing effort because DNS will automatically append the network name (so you can use http://test instead of http://test.internal or http://test.hamu.co). As an added bonus, you can get valid TLS certificates for your internal network devices without messing with a certificate authority of your own!


It's terrible that the spec doesn't reserve something like .localdomain that is unregisterable so that DNS servers can use it for internal use.

.test doesn't quite cut it.


You could propose this to be implemented, surely?


It's already being proposed by one of my colleagues. See: https://tools.ietf.org/html/draft-wkumari-dnsop-internal-00

It does seem much, much more likely that .internal will be definitely reserved for this purpose than that it will ever be delegated as an actual real TLD. If you have to pick a fake TLD to use that isn't one of the 4 mentioned in RFC 2606 that has the best chance of actually being reserved for this purpose in the future, then .internal is it.


I do have a domain but I don't want to enter my private IP addresses on a public DNS. Hence I just have server.internal point to the correct IP address on my local DNS server and have service1.server.com, service2.server.com, etc. all just CNAME to server.internal.


You don't need to disclose anything on the public DNS. Just register the domain, then use it internally.

For example, if you buy "example.com", just set your public DNS (assuming your registrar provides one) to resolve it to 127.0.0.1, then add your internal hostnames and IP addresses to your internal DNS. If you do it that way, "my-server.example.com" will simply fail to resolve unless you're on your internal network and you don't have to worry about any issues with using the reserved *.internal domain.
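For instance, the split could look like this (a sketch in BIND zone-file syntax; example.com stands in for a domain you actually own, and all names and addresses are illustrative). The public zone stays empty or points at 127.0.0.1, while the LAN resolver serves something like:

```
; Zone served only by the internal DNS server
$ORIGIN example.com.
my-server   IN  A   192.168.1.10
nas         IN  A   192.168.1.20
```

Off your network these names simply don't resolve, so nothing private is disclosed.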


I need to have the public DNS anyways so I find this way to be easier. I'd have to use CNAMEs anyways since the .internal addresses have to resolve to different IP addresses depending on the context.


Why would your A records change depending on context? Do you have separate networks where "server.localdomain" has different IPs? The typical way to solve that is to use different sub-domains for each one.

For example if you have a home network and a testing network, you could have one on home.example.com and the other at lab.example.com, in which case your servers would be server.home.example.com and server.lab.example.com. If you use DHCP on those networks, you simply set the domain and search-domain options and you can just enter "server/" on the devices that moves between them.

You only need to register example.com with a registrar, then you can use whatever subdomains you want wherever you want.


> Why would your A records change depending on context?

Good grief, usually it's because of a hairpin nat. People do that to themselves. They damage their own L3 networking and then decide that they need to damage also their entire DNS as a workaround.

It's a regular mind virus, because it's easy to implement split-horizon DNS but enormously expensive to remove it. People get used to it at one company and go and spread it to another company.

Just do a snat+dnat. These networking boxes are so expensive because they are meant to handle it, so let them do their job already. Or go IPv6 and get rid of DNAT altogether.


You can continue using /etc/resolv.conf to specify the private IP addresses even for a website that does actually exist. The point of registering the domain is to ensure that your computer/software can't ever accidentally hit anything you didn't intentionally configure that is actually under the control of someone else. You don't actually need to use publicly visible DNS to configure your internal network just because you used a real domain name.


If you have a local DNS server, you can just have the service1.yourdomain.com only resolve there. Totally standard practice.


My local DNS server can't resolve certificate verifications so I need a public DNS server anyways.


> don't want to enter my private IP addresses on a public DNS

Why not?


Once upon a time plenty of people thought it'd be cool to give their "internal" stuff names like wwwtest.int and database.int and so on.

Now, there has actually been an int TLD for a long time, but you probably don't visit sites in that TLD very often because it is for international organisations like the UN.

So if your configuration blocks all those actual sites well, too bad right?

However, back that long ago the trusted Public CAs were not actually forbidden from issuing certificates for names that don't belong to anybody on the public Internet. This was a bad idea, but it was not yet (at that time) forbidden. So you could pay Thawte a pile of money and get a certificate for "exchange2" your backup MS Exchange server or maybe "linux.build" your Linux build server.

And since people were using this for internal names, you'd get people asking their CA for certificates like "exchange2.int" - for the internal backups MS Exchange server right?

Obviously there can't be any effective way to demonstrate control over internal names, since you do not in fact have control over them, you've just hijacked them.

And so the end result is that there were actual publicly trusted CAs issuing certificates for names in a real public TLD without checks, because they assumed it was internal when it was actually not.

These days the CAs are required to issue only for names in the actual Internet DNS hierarchy (plus TOR) and only after seeing one of the accepted proofs of control nicknamed the Ten Blessed Methods.

Meanwhile: There is only one namespace, do not try to hijack little pieces for yourself that don't belong to you. If you want to reserve some names so that your DNS doesn't "break" then you can buy names like anybody else.


should have used .local


Not really. .local is often tied to zeroconf/mDNS and doesn't work reliably over traditional DNS on all platforms.


mDNS is actually pretty nice on controlled networks, so if you want .local use mDNS.


Note that `foo@bar.com` is frequently used to get around the email address requirement, so it is not really for testing. I wouldn't be surprised if prominent websites block any address from example.com or so.


But people who know and use the words foo and bar are almost exclusively developers, so they should know better even when typing a fake address into a random form. But it's easy to type reflexively, without thinking, so I'm not surprised people do it.


I'll be honest, even I do this when it's a form that has an email requirement but that I don't otherwise care about at all. I'm not gonna sign myself up for spam. Sometimes example.com and whatever other testing TLD that comes to mind is blocked because they want something that could plausibly be a real email address, so I give them something that could plausibly be a real email address, like asfgjklahsfgjklh@asdfoghuasfga.com


That's why I own fakefakefake.email it gives me some minor pleasure to provide this email address to people who insist on having an email address for me.


I use @mailinator.com for that purpose, it's oddly satisfying to put "lolno@mailinator.com" / etc in those places.


A lot of forms reject @mailinator.com and similar domains.


Then you use one of their alternate domains. Or you point a domain to one of their alternates.

Check out the MX record for meowcats.fun, for example. It points to suremail.info, which points to mailinator. I have never had that domain rejected by a form.

Shhh, don't tell anyone ;)


I've been using a@b.com just because it's quick & easy to type. I just looked it up, and it seems all the single-letter .com domains are reserved, so it should be fine.


Isn't x.com owned by paypal?


The fact that it is a valid address is a feature, as some sites that demand email addresses (so they can spam you) verify the address for correctness. People are intentionally choosing an address that appears valid, because it is valid. So, sorry to the guy who bought bar.com, but that's what he signed up for.


Correctness and existence are two different things. I doubt many sites explicitly exclude @example.com or @*.test in their email validator.
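A sketch of what such a validator might look like if a site did bother (the domain and TLD lists here are illustrative, not any site's actual rules):

```python
# Sketch: a signup-form validator that rejects addresses on reserved
# documentation/testing domains (lists are illustrative, not exhaustive).
RESERVED_DOMAINS = {"example.com", "example.net", "example.org"}
RESERVED_TLDS = {"test", "invalid", "example", "localhost"}

def plausibly_real(address: str) -> bool:
    try:
        _, domain = address.rsplit("@", 1)
    except ValueError:            # no "@" at all
        return False
    domain = domain.lower().rstrip(".")
    if domain in RESERVED_DOMAINS:
        return False
    tld = domain.rsplit(".", 1)[-1]
    return tld not in RESERVED_TLDS

print(plausibly_real("foo@bar.com"))        # True: looks like a real address
print(plausibly_real("user@example.com"))   # False
print(plausibly_real("user@myapp.test"))    # False
```

Which is exactly why "foo@bar.com" slips through: it is, in fact, a perfectly real address on a perfectly real domain.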


I usually use president@whitehouse.gov and the like...


I would steer well clear of .gov and .mil when providing fake anything.

Impersonating federal agents is shockingly illegal and the authorities who enforce this have absolutely no sense of humor.

The risk is low but why take it?


That risks getting you in trouble with Secret Service.


I usually use postmaster@theirdomain so they can spam themselves to an address they (in theory) cannot just ignore mail to.


Actually, for a while I used example.com a lot and I was surprised how often it worked just fine.


You also have Microsoft's list of domains / companies used in their documentation and examples: https://social.technet.microsoft.com/wiki/contents/articles/...


Didn't Microsoft for the longest time use a domain in their documentation that they didn't own and were basically forced to buy it now?


Yes, you're probably thinking of corp.com, which MS used as the default for AD setups. It went up for auction this spring and MS bought it:

https://krebsonsecurity.com/2020/04/microsoft-buys-corp-com-...


I used to work for a company named Foo Bar Solutions with the public website foobarsol.com, but that used foobar.com for the internal AD name for some reason (which they didn't actually own). That was ... interesting. Microsoft clearly did not do their due diligence in explaining that the domain configured in AD must absolutely, positively, 100% no bullshit, be a real domain name that you actually own and will never relinquish. I'd argue they should have gone so far as WHOISing the domain name in question and failing outright if it didn't exist, and displaying the Registrant/Organization information if it did and prompting "Is this you?" before continuing. Would have saved so many sysadmins so much grief over the years.

And to be clear, Foo Bar is a placeholder here, not the actual name.


I've had to clean a number of these up. The worst is when "foobar.com" answers with a wildcard record. Booting client computers using public DNS in that scenario is like being stuck in a tar pit. The poor helpless operating system tries and tries to reach servers to query for its AD site, find Domain Controllers, apply group policy, run scripts, etc.

Microsoft's official training curriculum for "MCP" and "MCSE" back in '99 was pretty clear about it (I was an instructor at a community college for a Microsoft certification program), but other Microsoft docs and especially third-party docs weren't as clear. The whole ".local" debacle with Windows Small Business Server lays at the feet of Microsoft, though.


That’s really cool. Contoso (also mentioned by someone else in this thread) and Fabrikam are the only ones I remembered off the top of my head, I had no idea they used that many.


Northwind and Wingtip


I've typically used example.com for testing, but link says it's just for documentation. Sounds like .test is the sanctioned way.

https://en.m.wikipedia.org/wiki/.test


In practice, is there any difference? As long as example.com is guaranteed to be reserved, I don't see any downside in using it.

Not using .test was a big problem for tools like Pow a while ago, but that's because they were using .dev, which had no official recognition as being reserved or special-purposed.

For e-mail addresses in particular, I could easily see a situation where your domain logic prevents you from using an invalid TLD (like .test), and it would be a shame to special-case something strictly for testing purposes.


Probably not, but it's always nice to follow intended designs, since others will do so as well and this allows systems to evolve in compatible ways. There may be no difference today, but there could be tomorrow.

These days invalid TLDs don't really exist. New ones are getting released all the time. The only problem you'd run into is if the system you're using is treating .test differently for some reason, but that's likely not the case, for obscurity reasons if nothing else.


.invalid is guaranteed invalid by spec.


Touche


> In practice, is there any difference?

RFC 6761 says that there is a difference when I actually resolve these names. The example.com, example.net, etc. will resolve normally to an existent IP. Moreover they resolve the same way on every DNS cache.

The xxx.test will resolve as non-existent by default, unless you configure your own DNS specifically for them.


> unless you configure your own DNS specifically for them

or use horrible ISPs that redirect everything that isn't valid (and intercept DNS if you try to use an alternate DNS).


I stand corrected


Well, example.com resolves, .test doesn't. Depending on your use case, one or the other may be desirable, but if you're setting up a dev environment, .test is your answer.


I've used example.com in the test cases of my web scraper. When they changed the links on the page, my test cases started failing. I complained to them, but they did not care.


Not just domains but IP address ranges as well. Doesn't stop lots of people from doing things like using real US DOD ipv4 /8 sized ranges for things they shouldn't.


I and I’m guessing a few other HN readers worked for a place ~20 years ago that had a nationwide WAN and used a pirated /8 for every state.


Similarly, the IETF reserves 3 IPv4 /24s, an IPv6 /32 and 2 ASN ranges for that same purpose.

[1] https://tools.ietf.org/html/rfc5737 [2] https://tools.ietf.org/html/rfc3849 [3] https://tools.ietf.org/html/rfc5398

This also seems to be unknown even to some university professors, who I've seen set up lab exercises using actual CloudFlare ASNs and IPs on a simulator connected to the open Internet. Not exactly dangerous as it would obviously get filtered, but still really bad form.


That just provides a "this exists and is a thing" overview, with the bulk of the information hiding in https://tools.ietf.org/html/rfc6761.

That's the link that qualifies special behavior for anything ending in ".test", ".localhost", ".invalid", and the set of "example.???" domains.

Copy-pasting the RFC into a comment would be a bit spammy (it's three pages of hyper-specificity), so just go read that. It's quite accessible and the mechanics are useful to be aware of.


    6tisch.arpa.                 [RFC-ietf-6tisch-minimal-security-15]
    10.in-addr.arpa.             [RFC6761]
    16.172.in-addr.arpa.         [RFC6761]
    17.172.in-addr.arpa.         [RFC6761]
    18.172.in-addr.arpa.         [RFC6761]
    19.172.in-addr.arpa.         [RFC6761]
    20.172.in-addr.arpa.         [RFC6761]
    21.172.in-addr.arpa.         [RFC6761]
    22.172.in-addr.arpa.         [RFC6761]
    23.172.in-addr.arpa.         [RFC6761]
    24.172.in-addr.arpa.         [RFC6761]
    25.172.in-addr.arpa.         [RFC6761]
    26.172.in-addr.arpa.         [RFC6761]
    27.172.in-addr.arpa.         [RFC6761]
    28.172.in-addr.arpa.         [RFC6761]
    29.172.in-addr.arpa.         [RFC6761]
    30.172.in-addr.arpa.         [RFC6761]
    31.172.in-addr.arpa.         [RFC6761]
    168.192.in-addr.arpa.        [RFC6761]
    170.0.0.192.in-addr.arpa.    [RFC8880]
    171.0.0.192.in-addr.arpa.    [RFC8880]
    254.169.in-addr.arpa.        [RFC6762]
    8.e.f.ip6.arpa.              [RFC6762]
    9.e.f.ip6.arpa.              [RFC6762]
    a.e.f.ip6.arpa.              [RFC6762]
    b.e.f.ip6.arpa.              [RFC6762]
    home.arpa.                   [RFC8375]
    example.                     [RFC6761]
    example.com.                 [RFC6761]
    example.net.                 [RFC6761]
    example.org.                 [RFC6761]
    invalid.                     [RFC6761]
    ipv4only.arpa.               [RFC8880]
    local.                       [RFC6762]
    localhost.                   [RFC6761]
    onion.                       [RFC7686]
    test.                        [RFC6761]


Related - there are reserved IPv4 and IPv6 addresses for a similar purpose too: https://en.wikipedia.org/wiki/Reserved_IP_addresses . Not just for testing, but for writing documentation and similar too.


A bit off topic, but I lament that these reserved domains are becoming less and less useful for testing web applications. I don't think you can acquire regular SSL certificates for reserved TLDs like "test.", yet an increasing number of browser features only work in "Secure Contexts" (ie. HTTPS only).

Chrome treats "localhost." as a Secure Context by default, a nice convenience, but for the other reserved TLDs you have to either self-sign (a fairly complex and laborious process that doesn't necessarily work on locked down devices) or register a non-reserved domain with a regular SSL certificate that points to a test IP.


Yeah, because of SSL, you really do need to own at least one real domain name that you use solely for testing. You can hang a bunch of subdomains off it and run separate applications on each one, but you are gonna want a real domain name.

Fortunately domains are super cheap all things considered. A .dev domain (my preference, but admittedly I'm biased) is a buck a month. If you really want to penny-pinch there's much cheaper still.


For me the cost isn't the main concern, it's the name collisions. I can easily obtain a short, memorable, and descriptive domain in a reserved TLD, but I probably have to find some obscure *.dev domain name because I'm competing with the entire world. Not the end of the world, I know, but I think it kind of defeats the purpose of reserved TLDs.


You'd be surprised how many short, memorable domains are still available on new gTLDs. E.g. there are several times more possible 4 letter strings than total .dev registrations that even exist. And if you're willing to use numbers this gets higher, let alone 5 characters or more.


Yeah, but then you use that and some fool somewhere in the pipeline has decided it's an optimization to the spec to not deliver to those addresses and so you can't even test if your email sending is working.


I force myself to use @example.com even when the code I am writing does not send mails. This is the way I got used to it


yeah, "example.com" seems pretty sensible; it's what I've always used and recommended.


But I don’t see example.com listed there...


Try Ctrl+F. It's at the top.


This guy is really committed to the joke, seeing as how he could easily sell bar.com for millions of dollars.

Note also that there is a .bar gTLD, and there is a foo.bar domain as of 2014 (though it doesn't seem to be hosting any content). I run the .foo gTLD, and bar.foo is a real domain (though admittedly not as good as foo.bar). There is no .baz; next round maybe?


The domain is for sale for the right price: https://www.haven2.com/index.php/domains

Seems the asking price is at least 1m USD


Nice find. Guy has a serious portfolio of good .coms and knows what they're worth, and is holding out until he gets the right offer.


Does he ever get a good offer? It seems to be the case that all the biggest websites are not on "good names" but that they took meaningless words and make them good names.


hotels.com? uber.com? stamps.com? There's lots of really big businesses built around single word domains. Not every single one of them of course, but definitely enough for good domains to be immensely valuable.


Yes, but did they sell for $1 million? Or did these "premium" domains sell for something more realistic?


They sold for quite a bit more than $1M, in fact. I would argue that the actual sales are realistic (in the sense that they reflect the actual price that people are willing to pay for them, i.e. their actual value), and it's merely your expectations that are unrealistic. You are significantly underestimating how valuable a good one-word keyword is in a single globally unified namespace used by billions of people daily. For companies worth many billions of dollars (most of the below), a few million on a killer domain name is nothing.

Here are some notable expensive domain name sales:

    Uber.com     2% of the company's equity in 2010 (!!)
    Sex.com      $13M
    Hotels.com   $11M
    Tesla.com    $11M
    Porn.com     $9.5M
    Fb.com       $8.5M
https://en.wikipedia.org/wiki/List_of_most_expensive_domain_...


Whoever owned Uber.com was a smart business person and/or a tough negotiator. That’s brilliant. Thanks for the examples.

Note it also lists LasVegas.com for $90 million (in instalments) from 2005.

https://www.forbes.com/sites/forbestechcouncil/2018/08/07/ex...


An asking offer without a bid is just that - an offer. I can ask $1M for that red apple sitting on my desk. It's completely irrelevant.

PS. This behavior is quite prominent in real estate prices, they're so slow to fall down. For example the US real estate market bottomed only in 2011. You have to be very patient if you want to buy the dip in such "I will only sell for the right price!!" markets.


Real estate isn't a great example because the carrying costs of unsold property in the form of upkeep, property taxes, mortgage, insurance, etc., are non-trivial. In general it's not remotely reasonable to hold onto property for 20 years without selling it. By contrast, the holding cost on a domain name is effectively zero, so there's much less pressure to get rid of one quickly. Plus, seeing as how the prices have only continued climbing over time, people who held out earlier and then got a good deal more recently don't regret it.

If I had a good portfolio of potential $1M+ domain names I too would hold onto them and wait for the right buyer to come along. I'm in no desperate need of money right now.


And yet people do it in real estate.


You'd be surprised. Not going to name anyone in particular but I know of multiple major one word domains that have been bought for that order of magnitude (some more, some less). For certain companies, that's chump change compared to the additional traffic a good domain name will bring their business.


He says that one of the domains he already sold was "corp.com". A few comments above we can read that it was up for auction starting at $1.7 million, and Microsoft bought it in the last few weeks.


In fact, he quite recently sold corp.com for a healthy sum: https://krebsonsecurity.com/2020/04/microsoft-buys-corp-com-...


bar.foo redirects to careers.google.com. Nice joke. ;)


Yeah, it used to be a programming problem challenge website feeding into recruiting efforts. It was turned down though after living out its useful life and now it's just a redirect.


I believe the actual foobar challenge site is still running.


You got a link? Because I'm pretty sure it was running on the bar.foo domain.



Thanks. bar.foo should really be redirecting there ... I'll look into it.


I was once reading some documentation (I think for GCP) and there was a weird artifact in the corner of the page, it moved a bit when I moused over it (??) and when I clicked it I got redirected to this site (on a withgoogle.com domain).

The element never reappeared again.

I always wondered why the element appeared on the page. Obviously because "some combination of factors" ultimately returned true - I mean specifically why :)


I got something like that once but my ad blocker apparently screwed up the functionality and I never saw one again.


Haha, I have seen it three times but the opening of the black box is so slow I have always clicked something by then.


How does one get to run a TLD?



form a company with enough money behind it, google "donuts llc"

then pay a lot of money to ICANN


I don’t think anyone is paying millions of dollars for domains any more. Maybe low six figures.


Domain names are going for more money than ever before. The record for most expensive sale (publicly known anyway) was hit just last year. 30 million dollars for voice.com: https://domainnamewire.com/2019/06/20/yes-voice-com-is-the-m...


Only if you happen to have one someone really really wants. You can find 3 and 4 letter .coms publicly listed for six figures.


it's counterintuitive, given the plethora of TLDs


The new gTLDs have been great for random people like you and me just wanting a domain to actually use it, but terrible for domainers. I got a nice 4-letter domain (cyde.dev, which I haven't done anything with yet) that I never in a million years would've gotten on .com.


A lot of people assume .com for whatever reason


Every single tab I have open is on .com right now. Other tlds seem to only be used by blogs or small single page sites like .io games.


I looked at my tabs. There's a lot of .orgs and some random country domain, one git site hosted on .ht and lots of national news and other sites on my local ccTLD.

And mastodon instances are rather new, and exist on a massive amount of different tlds, of course on .social but also others. Probably the age of the service plays a big role! We'll see more TLD diversity as things get much older.


This is a fun experiment. What I came up with:

com org uk gov blog us nyc io to edu

(I do have a lot of tabs.)


If Facebook was still The Facebook and for some reason hadn't thought to pick up facebook.com before now, the price on that would be practically unbounded.


This reminds me of all the emails I received from roots due to putting an actual address of mine in the receiver field of a Postfix guide I wrote years ago. People will blindly copy paste whatever text you put in tutorials.


this makes me wonder about what volume of mail an MX for contoso.com would receive, if microsoft didn't own the domain.

https://www.google.com/search?client=ubuntu&hs=ujt&channel=f...


contoso.com does have MX records.

    $ dig +short -t mx contoso.com
    10 contoso-com.mail.protection.outlook.com.


You don't need MX records; email will go to the A record if no MX is defined. Presuming it's listening for email, of course.
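The implicit-MX rule from RFC 5321 is simple enough to sketch (pure logic, no DNS queries made here; contoso.com is just the example at hand):

```python
# Sketch of RFC 5321's implicit MX rule: when a domain publishes no MX
# records, mail is delivered to the domain's own A/AAAA address instead.
def mail_targets(mx_records, domain):
    """Return (preference, host) pairs in the order an MTA would try them."""
    if mx_records:
        return sorted(mx_records)   # lowest preference value tried first
    return [(0, domain)]            # implicit MX: the domain itself

print(mail_targets([], "contoso.com"))
print(mail_targets([(20, "backup.contoso.com"), (10, "mail.contoso.com")],
                   "contoso.com"))
```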


It's actually interesting how we didn't go there for the web, although it is likely that we will in the next decade or so.

By the time Tim built his toy hypermedia system ("the World Wide Web"), DNS had a sensible mechanism which could have been used to deploy it cleanly. But that wasn't done. So, first HTTP means "just look up the A record for the name and connect on TCP port 80" and then HTTPS meant "look up the A record, connect TCP 443". Today you also need to check AAAA (for IPv6) and you need to use ALPN to say what kind of HTTPS you speak, but otherwise things remain much the same.

However, we have a pile of things we want to do in the near future that don't fit this model very well. HTTP/3 support (i.e. HTTP over QUIC) would ideally be discovered as early as possible for best performance, and Encrypted Client Hello (technology to avoid revealing the names of sites you're visiting to an eavesdropper) really wants to fetch quite a lot of information from DNS too.

So there will likely be a new DNS record for basically "I want to connect over HTTPS to this DNS name" the same way the MX record means "I want to send email to this DNS name".

Ordinarily this would be a huge deployment nightmare, because we know crap DNS implementations can't do new DNS features. The specification for DNS is clear that if somebody asks "What's the WONKABAR for some.name you control?" and you've got no idea what a WONKABAR is, you say there isn't one. But lots of bad software has been written that will freak out instead, and either crash, silently drop the question, or reply by reporting that there was a server error, none of which is correct.

However, fortunately the DNS privacy protocols give us a chance to reset expectations. Comcast's deal to get their DoH servers used in Chrome (for Comcast customers obviously) requires that they implement DNS properly, not half-arse it. So that's millions of customers brought on board by contractual obligation just for one example.


Hmm, that's interesting... Source?



Do MTAs actually do it though?


Yes? http://postfix.1071664.n5.nabble.com/MX-lookup-fallback-to-A...

Note that the odds are probably low today of anything listening on port 25 if no MX record is set up (I'm guessing most people who set up an SMTP server also set up an MX and a handful of other records in DNS).


Yes but Microsoft doesn't disclose how much they receive and discard.


As does 'fabrikam.com', but interestingly doesn't seem to be on outlook.com....


Similar is asdf@asdf.com. See their comment on that http://asdf.com/asdfemail.html


Their actual email being semicolon.jkl made my day.


I really like the idea of setting your MX record to 127.0.0.1. I am not sure what problems this causes (the author mentions people are angry about it) but I like it.

This makes me realize that DNS is slightly flawed in that you can prove "the owner of example.com wants to accept email at X.Y.Z.A" but not "the owner of X.Y.Z.A wants to accept email for example.com". (My experience using a managed load balancer on Amazon was that I got a ton of traffic for websites that weren't mine. Some DNS record must have been out there pointing towards our IP, which I guess is bound to happen when you only have 2^32 of them to share among all of humanity. Someone should do something about that...)


> I really like the idea of setting your MX record to 127.0.0.1. I am not sure what problems this causes (the author mentions people are angry about it) but I like it.

I imagine it could cause annoying loops with some mail server configurations.

RFC 7505[1] defines null MX records. This might be a good alternative, depending on what your intention is.

[1] https://tools.ietf.org/html/rfc7505
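
For illustration, a null MX is just a single MX record with preference 0 pointing at the root ("."), which tells senders the domain accepts no mail at all (the domain name here is a placeholder):

```
; RFC 7505 null MX: this domain does not accept email, period
example.com.    IN  MX  0 .
```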


As an aside, I believe "foobar" is a sanitization of "fubar", an unofficial military acronym for "f*cked up beyond all recognition". If you're interested, a search for "history fubar" turns up more about "fubar", "snafu", and other colorful unofficial military acronyms.


The reality behind “foo” and “foobar” is somewhat more complex:

http://www.catb.org/~esr/jargon/html/F/foo.html


I bought a four character domain at auction once that was previously owned by a bank.

The emails I would get...

Let’s just say I turned off wildcard receipt of email after a week to limit my liability.


I worked for Bill Blue at CTS net for about 6 weeks in '94. One of the machines I worked on was named "crash". I asked if that name was not tempting fate.

Bill said it was so that some uucp address would be

   crash!boom
(read "crash bang boom")


If you, like me, don’t get the bar jokes, it’s because you have to click the headline link to read the whole joke.


I just use no@thank.you for these silly sites which try to force sign-up, or variations thereof, e.g. no.th@n.ks

length of access using this method varies but it's usually enough time to cache and consume their precious content

it's impossible to verify a valid email address without sending a verification link. one half measure is to ping for an MX entry on the user-provided domain, but even that isn't bulletproof

if they go to such unnecessary lengths to hide their content then it simply isn't worth viewing. scoff, close tab, forget


You should use https://10minutemail.com/ for those obnoxious sites


would not load with tracker blocking enabled


I use nothanks@mailinator.com and email verification works for most part.


I own a curse word (Swedish) domain and I get quite a lot of similar emails. Nowhere close to thousands per day though. Still the level where I can answer the emails with a bad joke.


Would you mind telling which word? Fellow Swede wondering.


Hej! It's helvetet.com

Lots of lucifer at helvetet.com etc :) I sometimes reply with an mp3 with a few seconds from Diamanda Galas' "The Litanies of Satan". https://www.youtube.com/watch?v=OBeTXiTZbCc

https://westinghouse.com/ seemed to have a test environment that sent me emails with invoices as well. I asked about delivery options :S


The character’s image is quite frightening.

The Foo reminds me of a whole bunch of British children’s TV shows with characters that gave me many nightmares. Some sort of body horror, I believe.


It's behind cloudflare and I was met with a captcha AGAIN. Sigh.


If CF is still using that abomination "hCaptcha", you can get an accessibility cookie for it which doesn't stop the captcha from appearing but does cause it to actually work when you click on the "I am a human" widget, unlike the normal broken flow


Hi, founder of hCaptcha here. Happy to learn more about your problems with hCaptcha. I'd like to see what we can do to make it better for you.


I had innumerable problems with images not loading (there weren't any errors on the console, the div was just blank) and it was always just pitiful slow to load the ones it did. I don't even bother interacting with the normal flow anymore, and just go get an accessibility cookie the first time anything challenges me

reCAPTCHA was a brief interruption, occasionally; contact with hcaptcha is "well, there goes 10 minutes of my life I'll never get back ... I should just close the tab cause I didn't want to know that bad anyway"

I recognize that's not an answer to what you can do about hcaptcha, but since hcaptcha's goal is to make the web harder to use, I guess from a certain perspective there's nothing wrong with it


Are there any captchas that are not PITA?


the Foo has asked me to say "thanks" for this thread. you can read his sad story on www.bar.com. he used to answer email addressed to foo@bar.com but was completely overwhelmed by the mid-90's and has left his correspondence for me to tend to.

the New York Times included him in a story back in 2001...

https://www.nytimes.com/2001/05/03/technology/fleeing-spam-s...

by the way, i also was the caretaker of Microsoft's corp.com domain for quite a while. they wisely decided i wasn't the best person for the job.

https://krebsonsecurity.com/2020/02/dangerous-domain-corp-co...


I've read an interview with a guy who owns dupa@wp.pl. "Dupa" is a Polish swear word, often used instead of foobar by Polish programmers, causing lots of embarrassing mistakes when it shows up in production. He apparently got a lot of stuff too.


I’ve always wondered if “foo” and “bar” was specific to a certain (now older) generation of programmers and would eventually fade into obscurity. Or has that bit of programmer culture been passed on to newer generations too?


Afaik 'foo' and 'bar' still tend to be a commonly used example variable/function/data structure name by older practitioners of C and C++ in academia so I do wonder if it will pass on...


In addition to the IANA reserved list, don't forget ICANN controversially reserved .corp, .home and .mail.

https://www.theregister.com/2018/02/12/icann_corp_home_mail_... https://www.icann.org/en/system/files/files/sac-062-en.pdf


A friend of mine in Greece owns the null.gr domain. He's gotten a ton of automated email from misconfigured systems over the years, some of them with serious security implications :)


I used this as my email address just yesterday to avoid getting on yet another list.

The shortest bar joke is missing here (or I missed it) - Man walks into a bar. ... Ouch!


This is also the email I signed up to reddit with since they started requiring emails.


This is actually a dark pattern on their part, surely designed to boost email signups. The sign up page has an email box and a continue button, but you can press continue without entering an email.


It used to be that way but now you can't skip it at all.


Very interesting to read about the history and etymology of the term "Foobar"

https://en.wikipedia.org/wiki/Foobar


Mr Foo, if you're reading this: I thank you for your service to mankind.


What I’m curious to learn is which mail server he uses to handle the influx of emails to foo@bar.com…


Looks like at the moment he simply lets them drop:

  $ dig -t MX bar.com +short
  $


You don't need an MX record to receive mail:

https://tools.ietf.org/html/rfc5321#section-5.1

> If an empty list of MXs is returned, the address is treated as if it was associated with an implicit MX RR, with a preference of 0, pointing to that host.

That said, it doesn't look like there's an SMTP server running for bar.com.
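
As a minimal sketch of that lookup logic (the resolver functions are injected as parameters, since the Python stdlib has no MX lookup; all names and data here are hypothetical):

```python
def mail_hosts(domain, resolve_mx, resolve_a):
    """Return the hosts an MTA should try for SMTP delivery to `domain`.

    Per RFC 5321 section 5.1: use MX records sorted by preference;
    if none exist, fall back to an implicit MX (preference 0)
    pointing at the domain itself, provided it has an address record.
    """
    mx = sorted(resolve_mx(domain))          # list of (preference, host)
    if mx:
        return [host for _, host in mx]
    # Implicit MX fallback: deliver straight to the domain's own A/AAAA.
    return [domain] if resolve_a(domain) else []

# Stub resolvers standing in for real DNS queries:
print(mail_hosts("bar.com", lambda d: [], lambda d: ["203.0.113.7"]))
# -> ['bar.com']
```

With no MX and no A record, the function returns an empty list, which matches what `dig` suggests is happening for bar.com right now.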


foo@bar.io is also quite commonly used for testing.

It's always interesting to get those test accounts for new services and try them out early.

What's not so fun is "testing" all the draft newsletters people send to foo@bar.io.


I'm guilty of this, foo@bar.com is my fallback if the form doesn't accept x@y.z


I usually use test@example.com. It's supposed to be RFC-sanctioned, right?


you could just use our /dev/null approach at https://forwardemail.net with "!" prefixed wildcard alias for this haha


I have been guilty of using bob@bob.com for all manner of input testing


I imagine a close typo of a popular domain would also receive a ton of email.


One of the problems we had at my last company was invalid emails. We had so many users enter gnail.com or hptmail.com.


I use this library for web pages that ask for an email: https://github.com/mailcheck/mailcheck

It's a bit old and requires jquery, but it's fairly easy to gut the library and just use the email logic. It even uses your example of "gnail.com" in the README.
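
The core idea can be roughed out with just the stdlib (this is my own sketch, not mailcheck's algorithm; the domain list and similarity cutoff are illustrative):

```python
import difflib

# Hypothetical shortlist of domains users commonly intend to type
COMMON_DOMAINS = ["gmail.com", "hotmail.com", "yahoo.com", "outlook.com"]

def suggest_domain(email, cutoff=0.8):
    """Suggest a likely intended address for a typoed domain,
    e.g. 'bob@gnail.com' -> 'bob@gmail.com'.
    Returns None if the domain is already known or no close match."""
    try:
        local, domain = email.rsplit("@", 1)
    except ValueError:          # no "@" at all
        return None
    if domain.lower() in COMMON_DOMAINS:
        return None
    matches = difflib.get_close_matches(domain.lower(), COMMON_DOMAINS,
                                        n=1, cutoff=cutoff)
    return f"{local}@{matches[0]}" if matches else None

print(suggest_domain("bob@gnail.com"))   # -> bob@gmail.com
```

A "did you mean bob@gmail.com?" prompt built on something like this catches the gnail.com/hptmail.com class of typo without rejecting anything outright.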


Sweet! I'm no longer working on that project but if I ever need user email input, I'll remember this (someone must have ported it to react I'm sure)



I use lol@lol.com

Or if I need to see the email sent to me, I’d use something like tempmail


The diagram is truly disgusting, it gives me the creeps


mailinator.com seems to have made a successful business from public email.

This domain would be the perfect basis for such a business.


I always used bob@bob.bob. It's not a real TLD (although at the rate of new TLDs, it might be soon) but it still validated on most websites.
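
That it validates makes sense: many sign-up forms use a naive shape check that never consults the real TLD list. A sketch of the sort of pattern I mean (hypothetical, not taken from any particular site):

```python
import re

# Naive validator: some non-space text, an "@", then a dotted domain.
# It checks shape only, so a made-up TLD like .bob sails through.
NAIVE_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(addr):
    return bool(NAIVE_EMAIL.match(addr))

print(looks_like_email("bob@bob.bob"))   # -> True
```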


Someone whimsical will grab it, and not be happy with the ground that you've laid. Consider that dot@dotat.at exists.

* https://news.ycombinator.com/item?id=21369013


Though it is substantially easier to register a domain for fun than register a TLD for fun, even though the principle remains the same: pay some money and do some DNS configuring. It’s just that for TLDs the value of “some” changes.


lol


Pls make a post about this, show us their repos!


That wordpress default favicon makes me actually really sad in this context.



