Sometimes vhosts are convenient, sometimes they're even mandatory. For example, with XMPP servers, multi-user chat and any components must live on a subdomain. So if your main server is running on example.com, then the MUC server is, say, conference.example.com and component "foo" is foo.example.com. No way around it short of hacking the source (and, if I'm not mistaken, violating standards).
This is just one situation where I can see this come in really handy during development.
> For example, with XMPP servers, multi-user chat and any components must live on a subdomain.
This is only necessary if you want users outside your domain to access your component. While you probably want to do so for MUC, you might not necessarily want to bother for your user directory or gateways. I've run many servers over the years and long since stopped creating a host/subdomain for each component.
Interesting, this must be a shortcoming of OpenFire then. With OpenFire I haven't found a way around having the MUC and extension subdomains accessible via DNS, regardless of whether requests are coming from the same domain or not. Is this not necessary with other XMPP servers? Which ones are you using, if I may ask?
It is indeed a shortcoming of OpenFire; one that won't be fixed [1].
As far as the XMPP protocol is concerned, the concept of sub-domains doesn't matter. It's useful for human users when configuring servers though.
Prosody for example allows running a multi-user chat service on example.com. And there's an undocumented feature which lets you have user@example.com be a user, and room@example.com be a chatroom.
Depends on what it is you're testing. You could be testing a vhost setup, in which case different ports or paths wouldn't suffice for testing. More specifically, you could be testing an application's behavior when given different vhosts. Make sense?
This is the best way to get a "natural" development environment (i.e. as close to production as possible). Also, it's a pain in the ass having to start and stop various apps and remember ports, which is what Pow is great for in the first place.
Actually you can test two at a time by using an alias in /etc/hosts.
In /etc/hosts:
1.2.3.4 site1.com site2.com
Then you have to edit /etc/hosts to try two more sites.
The idea that the user cannot access /etc/hosts on the iPhone is reason enough to jailbreak it. Denying access to /etc/hosts means the iPhone cannot connect to any website that requires a hostname, even when the iPhone can connect to the internet, unless the iPhone can access some DNS server (which of course Apple probably wants to control). That is a ridiculous limitation. Neither the internet nor the web requires DNS to work, but the iPhone requires DNS to access the web.[1]
1. assuming the website is using virtual hosts or otherwise requires a hostname
I have dealt with weird edge cases in frameworks/proxies/whatever where sometimes the hostname turned into an IP (or 10.0.0.3 turned into 127.0.0.1). You generally don't want this to happen in production, and performing all your testing with names is a good way to observe it not happening.
I could see how it'd help if you are debugging a service that parses a subdomain out of the main URL for a particular purpose (think, for example, a SaaS that provides a domain like 'yourcompany.mysasservice.com').
I have a dd-wrt router with DNSmasq functioning as the DNS server for local hosts. DNSmasq resolves external domains using Google DNS (8.8.8.8/8.8.4.4). With this setup, domain names like 192.168.X.X.xip.io and 127.X.X.X.xip.io won't resolve, and I believe there is something wrong with my DNSmasq setup. Anyone else run into similar issues?
(Update) I solved the problem myself. The DNSmasq config has the stop-dns-rebind option enabled, which filters out DNS results in private IP ranges from upstream servers for security reasons. The DNSmasq doc has the following:
-stop-dns-rebind
Reject (and log) addresses from upstream nameservers which are in the private IP ranges. This blocks an attack where a browser behind a firewall is used to probe machines on the local network.
In case you run into this issue, just comment out this option in dnsmasq.conf and restart dnsmasq.
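If you'd rather keep rebind protection on, dnsmasq can also whitelist individual domains. A minimal sketch of both options (lines for dnsmasq.conf; adjust to your own setup):

# Option 1: disable rebind protection entirely (what I did):
#stop-dns-rebind
# Option 2: keep protection, but allow private-range answers just for xip.io:
rebind-domain-ok=/xip.io/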
If you're creating an application where connecting on the "root domain" matters, it can be problematic. For example, imagine you were creating some URL rewrites using Apache's mod_rewrite that worked for http://some.domain.com/rewrite-goes-here/; you would have to do a bunch of extra work (or even an extra set of rules) to make that also work for 10.0.1.1/my_app_without_vhost/rewrite-goes-here/
When you're testing on your LAN using a PC/mac or whatever you can do a local DNS modification on the machine (eg. /etc/hosts) but when you're testing from an iPad or some other device this is either impossible or prohibitively difficult.
The other option is to set up a DNS server on your LAN, which is a headache all its own - this is a very simple and elegant way of circumventing these issues. Awesome stuff.
You really should develop your apps so that they are path agnostic. And the mod_rewrite rules can be fixed with a simple RewriteBase declaration (RewriteBase /subdirectory).
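A minimal sketch of what that looks like, reusing the hypothetical paths from the comment above (the index.php target is just a placeholder):

# .htaccess inside /my_app_without_vhost/
RewriteEngine On
RewriteBase /my_app_without_vhost/
# The same rule now works whether the app is served from a vhost root or from this subdirectory:
RewriteRule ^rewrite-goes-here/$ index.php [L]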
I've never found this a major problem that requires a DNS server to fix.
Does that actually work? On my machine Pow has ipfw configured so that it only forwards requests to 127.0.0.1, not to 10.0.1.whatever, so project.10.0.0.5.xip.io fails.
localhost.microsoft.com and localhost.yahoo.com also used to redirect to 127.0.0.1. And, of course, this wreaked utter havoc with cookies (since cookies are accessible across the entire domain). Zalewski discusses this in TTW (The Tangled Web).
To expand on this, this problem can usually be solved by editing /etc/hosts. But you can't do that on some platforms such as iOS.
By the way, a neat trick is to assign an alias to your network interface in order to avoid the trouble of DHCP giving you a different IP address each time you connect. For example, on Mac OS X:
$ sudo ifconfig en1 alias 10.99.99.99 netmask 255.0.0.0
This address will only be reachable from hosts that have a route to it, which can be achieved for example by also giving them an alias on the same subnet. Still, comes in handy at times.
(Obviously you want to be sure you're using a vacant address.)
(And of course, when you have control over the DHCP server there are more elegant ways of achieving this, such as binding your MAC to a static IP address.)
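A couple of related commands, for completeness (the interface name and addresses are just examples; adjust for your machine):

# Give a second Mac on the LAN an alias on the same subnet so it can reach the first one:
$ sudo ifconfig en1 alias 10.99.99.100 netmask 255.0.0.0
# Remove an alias once you're done with it:
$ sudo ifconfig en1 -alias 10.99.99.99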
What is the difference between an alias and a static IP address?
Not sure what you mean. If you're referring to the bit about binding MAC to an address, I should probably have said "fixed IP" rather than "static IP", sorry about that.
Also don't most routers attempt to give IPs back to the MAC that last had them?
I'm hopping between different networks (with different DHCP servers) quite a lot, maybe it's less useful when you're always on the same network.
Couldn't you accomplish this with djbdns' dnsrewrite or pdns_recursor's lua scripting?
Why anyone would want to write a DNS server (= something that needs to be very fast) in Javascript is beyond my comprehension. The ASCII art is probably better work than the DNS server.
You guys are missing the point. It's intended to be used with Pow. This way you don't have to bother with manually starting & stopping servers or remembering ports or whatever. This domain allows you to access your sites in the same way from any device in your network, not just your dev machine. Handy! And it works. What's not to like, then?
I run a simple djbdns setup locally with a caching resolver that passes specific domains to my dns server proper and the rest up the chain. Took about seven minutes to configure properly. This seems overly complex.
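Roughly, for anyone curious, the split looks like this with dnscache (standard djbdns/daemontools paths; the domain and address are hypothetical):

# Send queries for dev.example to the local tinydns instance...
$ echo 10.0.0.2 > /service/dnscache/root/servers/dev.example
# ...everything else keeps going to the root servers listed in the default "@" file.
$ svc -t /service/dnscache   # restart dnscache so it picks up the new file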
Of course reverse dns doesn't work :-) I suppose it kinda sorta could, if you tracked where a request came from and what IP you sent it, and if you got a reverse lookup you could undo that. But still, it is clever!
Reverse DNS is controlled by the company that owns the actual IP address. There's no way for a random website to change responses for it (unless they own the IP range, or were delegated control)
Sort of. Which is to say it is when you don't lie, but you can lie if you know what you want.
When you reverse map an IP you look up b4.b3.b2.b1.in-addr.arpa., where b1 - b4 are bytes 1 through 4 (in reverse order) of the IP address. So 10.1.2.3 becomes 3.2.1.10.in-addr.arpa.

The interesting bit is that you send this to some DNS resolver; typically, in the 'generic' world, your machine got the address of a resolver (and maybe a backup) from the DHCP server that gave it the IP address. When that DNS server sees this request, what it is supposed to do is either tell you to 'go fish' and here is the IP of a server that can help, or 'recursively resolve' by forwarding on your request.

Now if you run a vanilla BIND or djbdns setup you will get short-circuited by it recognizing a 'private' address and not resolving it; if it did try, the root servers would tell you to go away as well. But if you recognized it as a private address and sent it back to the xip.io DNS servers on a lark, they could "pretend" to be authoritative for the domain and return you a CNAME record that pointed back to your fake name.
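Concretely, this is the question a reverse lookup puts on the wire; plain dig shows it (nothing xip.io-specific here):

$ dig +noall +question -x 10.1.2.3
;3.2.1.10.in-addr.arpa.		IN	PTR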
I admit it is a hack on top of another hack but as long as we're writing custom DNS servers why not go all in? :-)
Yes and no. There is nothing [1] preventing any DNS server from responding authoritatively to a request that it is presented with, except a moral commitment to protocol correctness.
[1] If you ever wondered how OpenDNS or your ISP sends you spammy web pages when you try to resolve something that doesn't exist, or how the hotel hijacks your browser into giving you a login page, this is it. You look for google.com, it notes you haven't logged in, and it returns the address of its paywall as the answer.
Oh, except if you are running DNSSEC, in which case it is a lot harder to lie about what you are authoritative for. But on my DNS servers at home, they all think they are authoritative for 10.in-addr.arpa. so that they will answer queries for that network.
Buy (or dig out of your closet) an old Wifi router and install Tomato on it. Its web interface lets you edit its hosts file which is then active for all the devices connected to it.
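Since Tomato's DNS is dnsmasq under the hood, a wildcard entry in its custom dnsmasq configuration box does the same job and also covers subdomains (the name and address here are made up):

# Resolve myapp.lan, and anything under it, to the dev box:
address=/myapp.lan/192.168.1.20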
Except many of us don't have Tomato/DD-WRT capable devices, not to mention I can't be bothered adding a third router to my current two, flashing and securing it, and then switching my devices over to it just to test something, when this solution does essentially the same thing.
Because it's a PITA to show other people how to make entries to their hosts file if you want for someone else to look at your project. I'm not telling you anything you don't already know, but this is why I would use this service over just making entries to my own hosts file.
Isn't the idea of IPv6 that we have plenty of addresses? Thus it wouldn't (read: shouldn't) be hard to get another address bound to your virtual host.
In fact doesn't the whole idea of binding multiple sub-domains to the same IP address (on the same port) kind of go by the wayside when it's free/cheap/not-going-to-blow-up-the-internet to get another IP?
Yes, you could indeed throw another address at your virtual host. But the point I was trying to make is that the website/tool in question won't work with IPv6 addresses, as you can't embed them in a hostname the way this scheme embeds a dotted quad :)
Trust them with what? All you'd be telling 37signals with this is your development server's internal IP address and the names of any subdomains you might be using. No actual site traffic will go to them.
I've identified several technical problems with this domain, and this isn't an example of how to properly operate DNS. 37signals is setting an absurdly low TTL on these records (10 minutes; the answers never change, I absolutely do not understand the logic behind this TTL), which means every 10 minutes you're re-resolving a local address, through a CNAME (so two DNS round trips, and in my case this resolution took between 115ms and 230ms, not small change):
[~]$ dig foo.169.254.84.1.xip.io
foo.169.254.84.1.xip.io. 600 IN CNAME foo.daze1.xip.io.
foo.daze1.xip.io. 600 IN A 169.254.84.1
Concerningly, ns-1.xip.io is also broken; it does not serve NS records for its own zone, instead relying upon the SOA record and the upstream glue, which I'm shocked works:
[~]$ dig +short NS xip.io
[~]$
The nameserver delegation from nic.io is also broken:
xip.io. 86400 IN NS ns-1.xip.io.
xip.io. 86400 IN NS ns6.gandi.net.
;; Received 86 bytes from 2001:678:5::1#53(b.nic.io) in 60 ms
Oh, well that's interesting, Gandi is a backup for their custom daemon, eh? So did they implement AXFR, IXFR, and notify and such to Gandi? Well, let's ask Gandi:
Oh, guess not. The long and short of this is for DNS purposes, a custom daemon is almost never the answer. This could have been accomplished with BIND fairly easily, and the zone would be functional as well.
It's a cool idea, but there are some other problems too; which I just want to list to help the developers and am not trying to rain on a parade.
As in the parent comment, a CNAME is returned for arbitrary names;
% dig foo.192.0.2.1.xip.io
foo.192.0.2.1.xip.io. 600 IN CNAME foo.a2eo0.xip.io.
foo.a2eo0.xip.io. 600 IN A 192.0.2.1
but only if the request is of type A. Requests of other types return invalid NXDOMAIN responses - invalid because they contain no SOA in the authority section. CNAMEs are supposed to be returned for queries of any type for a given name; not doing so is dangerous, as it can poison caches. Not returning the CNAME even for a query of type "CNAME" is particularly harmful.
Responding with no data would be bad on its own, but saying that no name exists is clearly wrong and can be used to poison caches (the NXDOMAIN is cacheable). Note that most browsers and clients will now perform an AAAA lookup prior to the A lookup - poisoning their own cache if they happen to have a copy of the SOA for xip.io in cache (the SOA record hints at the negative cache lifetime).
It's not clear that using an intermediate CNAME does anything useful - why not just return an A record with a billion-second TTL value? As-is, it merely adds a round trip (the CNAME and A are not returned in one pass by ns-1.xip.io).
Additionally, ns-1.xip.io does not mark the "authoritative answer" bit in any responses - which will cause issues with some resolvers.
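You can check that yourself with dig; the flags line in the response header will lack "aa" when the answer isn't marked authoritative (the query name below is just the documentation address from earlier):

$ dig foo.192.0.2.1.xip.io @ns-1.xip.io +noall +comments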
But, still a neat idea. Question for the developers;
It's clear that the intermediate CNAME represents an encoding of the IP address, e.g.;
foo.192.0.2.1.xip.io. 600 IN CNAME foo.a2eo0.xip.io.
here "a2eo0" is an encoding of 192.0.2.1 , but then;
foo.192.0.2.2.xip.io. 600 IN CNAME foo.k201s.xip.io.
are you using some kind of cipher?
PS. Everybody please use 192.0.2.0/24 for IP addresses in examples and documentation, and 2001:db8::/32 for IPv6. See RFC3330/5735 and RFC3849. It's good karma ;-)
Thanks! It reads like a base-36 encoding of the IP address in host byte order, rather than network byte order, which is why it seems to jump around so much.
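A quick back-of-the-envelope check of that guess (my own sketch, not the xip.io source): reverse the octets of 192.0.2.1, pack them into a single integer, and look at its base-36 digits.

$ echo $(( ((1 * 256 + 2) * 256 + 0) * 256 + 192 ))
16908480
$ echo 'obase=36; 16908480' | bc
 10 02 14 24 00

bc prints each base-36 digit as a decimal number; 10, 2, 14, 24, 0 map to "a", "2", "e", "o", "0", which is exactly the "a2eo0" label above.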
Interestingly, it encodes 0.0.0.0.xip.io as 0.xip.io, but then refuses to answer for 0.xip.io. Why isn't obvious to me from reading the code; perhaps some kind of overflow condition is triggered by the right shift.
Hold on - who cares? This isn't meant for use in production, right? Or am I missing something... From what I can tell, the purpose here is that I can set up a domain that will resolve to an address on my LAN without having to modify, say, /etc/hosts on my Android device (which I wouldn't even know how to do) or set up a DNS server on my LAN (which is of course possible but a lot more long-winded than the solution proposed here).
I can't see how it matters whether or not there are problems with this from the perspective of being a "correct" DNS server, so long as it works for its intended purpose (testing things on your local network from a bunch of different devices).
Edge cases and niggling "works ok for me" problems really do matter for something that's intended to be used with testing.
Otherwise it's easy to rat-hole for a long time trying to determine why your test isn't working, when it turns out it was a problem between your DNS resolver and an upstream domain.
Problems between DNS resolvers and DNS authoritative servers are classically intermittent; they usually depend on the ordering of a chain of steps to occur. For example, I might get a resolution failure for a xip.io record if one of the following sequences occurs;
Client asks resolver for AAAA - resolver gets NXDOMAIN from ns-1.xip.io, caches it
Client asks resolver for A - responds with NXDOMAIN
but if the queries happen in the reverse order, things are fine.
or, another example;
Client asks an AD DNS server to perform resolution, server chokes on lack of the AA bit in authoritative answer.
And so on.
But then in either case, if a tester fires up nslookup or dig, everything works on the command line, and so they may spend quite a while trying to figure out why their library routine for connecting to the service isn't working.
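One habit that helps when chasing these: query the authoritative server and your local resolver side by side, for the same name and record type, rather than relying on a bare nslookup (the name below is just the documentation address from earlier in the thread):

$ dig AAAA foo.192.0.2.1.xip.io @ns-1.xip.io   # straight from the authoritative server
$ dig AAAA foo.192.0.2.1.xip.io                # through whatever resolver the client actually uses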
What I took away from this, though, is that they just want to be able to load a web application they're developing on their iPad/iPhone or otherwise "restrictive" device that doesn't allow you to easily make local DNS modifications (such as /etc/hosts files).
I honestly don't understand what you've written above (although I re-read it a couple of times, I guess I'm just not knowledgeable enough about DNS for it to make sense) but can you see those issues impacting the ability of someone to load an application on their iPad in order to test it out?
I guess the problem might arise that people start to use this "not as originally intended" and get into all sorts of strife but for the particular scenario they were originally intending it for it seems perfectly adequate, no?
The issues reported so far could absolutely cause errors across any device, including an iPad. The problems that the DNS setup will cause affect resolvers - which can be a combination of software in your browser, your c library, local caching daemon, on your cable/dsl modem, in your ISP, and a public provider.
Many crufty resolvers - on things like wifi routers in particular - don't deal well with the lack of an AA bit, or a REFUSED answer. So a tester could easily end up with "works for me" and "not for me" reports that are really just down to the particulars of their network and resolver software, whether they have IPv6 enabled, and so on.
Edited to add: Again, I don't mean to rain on the developers' parade. It's a great idea.
Writing DNS implementations is hard, and requires a certain kind of technical archeology to get to grips with the detail. DNS is a tricky protocol, chaotically and ambiguously documented. I've helped write 3 different ones - and I still get things wrong. And that said; anyone interested in writing hardcore DNS implementations that have to operate on the scale of microseconds per query should drop me a line.
Actually, a low TTL is ideal for testing purposes. While you're correct that a query will never give a different answer, a low TTL ensures that the name won't linger in any resolver's cache for very long, which makes it less easy to discover. This also makes the arbitrary string chosen as a subdomain particularly ephemeral, which is important when testing name-based virtual hosts. Why leave a testing domain stuck indefinitely in the cache of a resolver I don't control? I'd rather have it disappear when I'm not using it.
It depends on what you're working on. If you're developing a SaaS application where an individual instance should be providing group features based on the hostname, suddenly host names become a development detail. Though I agree that in most web applications this isn't the case.
I'm sorry, I edited my comment (the broken zone is more troubling to me than the impetus for using it) and made yours look out-of-place. You responded to something accurate the first go, and I agree with you.
I find it interesting that we are required to do a second round trip for the CNAME target when it is available in the same bailiwick and could be sent as part of either the ADDITIONAL SECTION or the ANSWER SECTION. Would you happen to know what the RFC has to say on this? I understand that CNAMEs are not allowed to coexist with other records for the same name, but returning the target in the ADDITIONAL SECTION shouldn't be a cause for concern.
The reason it may not be serving NS records for itself is that, looking at what is available on GitHub, the server is started on port 5300, so I am assuming there is some sort of DNS resolver/cache sitting in front of it that may be stripping them out. Same thing with it not responding with the "authoritative answer" bit set...
Although that is simply speculation, maybe they did put the node service directly on the internet.
Once a CNAME exists for a name, no record of any other type may exist for that same name (it's an override for all types).
But for a query like this, a server is allowed to return both a CNAME and its relevant target(s) ... as long as they are within-bailiwick. It can go right into the answer section, e.g.;
% dig example.allcosts.net @ns-22.awsdns-02.com.
...
;; ANSWER SECTION:
example.allcosts.net. 300 IN CNAME at.allcosts.net.
at.allcosts.net. 3613 IN A 85.91.5.16
this is permitted because the original type of the query was "A", so we can include it as an answer, and it will avoid a round-trip on behalf of the recursor. That's all regular RFC1035 behaviour.
It's more common to use the additional section to include details about the target(s) of MX, SRV and NS records. That's more of a "I know you asked for an MX record, but you're going to need this A / AAAA record too pretty soon, so here it is in the additional section" kind-of thing. The additional sections in the responses to the following queries should be illustrative;
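Any MX lookup against an authoritative server will do; the zone and server below are placeholders, and whether the target A/AAAA records actually appear in the additional section depends on the server and on the targets being in-zone:

$ dig MX example.org @ns1.example.org +noall +answer +additional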
Something I forgot to mention; Being that DNS is the chaotically documented protocol that it is, I'm glad they launched early with a minimally viable product. It's the best way to get feedback like this for free! I think the real scope for real-world error is something like 1 in 100 users experiencing a problem as-is. Most resolvers are hyper tolerant of any amount of DNS crud, because they've been beaten on so much by poor implementations over the years. But the 1% of the time it breaks will cause you hours of pain in debugging.
In fact, returning both the CNAME and the A in the initial response is required. Returning just the CNAME and setting NOERROR tells a recursor 'the target name exists but I do not have an A record for it'. Luckily, all recursors I am aware of are stubborn and will then ask for the A anyway.
This is a good example of where things get tricky in DNS. A resolver could never really infer non-existence of the A record from mere non-presence in an answer like that.
Although RFC1034 outlines that a server typically would do that, it also says that it shouldn't include data that it's not authoritative for.
So a conflict arises when you CNAME to a sub-delegated child zone. E.g.
foo.example.com IN CNAME baz.example.com
That response may come from a server that's authoritative for example.com - and so "baz.example.com" is technically in-bailiwick from the point of view of a resolver who has made only this query.
However baz.example.com may itself be delegated to other nameservers, and so is "really" out of bailiwick. But the response won't signal this to resolvers at this stage (though in theory could via the additional section).
The simplest reason why resolvers ignore it though is that there's no SOA in the response from which to derive the negative caching time - so it wouldn't know how long to cache that non-existence - and almost all resolvers are caches.
Even if it is required, some authoritative DNS server implementations don't do it; so far I have found that the BIND that came with FreeBSD 8.0 doesn't, nor does tinydns.
So recursors would have to account for possibly broken implementations and try the query anyway.
Ah, here is where implementation becomes important... not all of the authoritative DNS servers I have tested actually have this behaviour. So far I have found that BIND and tinydns don't send the A record even though it is in bailiwick for the CNAME.
Hopefully they will solve this problem at some point, but if you want an alternative quickly, couldn't you use services like DynDNS, No-IP, etc.?
(The author of the software wrote a comment here: "So you're just here to shit on things?", which he has since deleted.)
I genuinely and honestly cannot log into your Gandi account and fix your nameserver delegation, so that means I'm just here to shit on things? That's a logical leap for you? You are delegating xip.io to a nameserver that is refusing queries for your zone; that's seriously broken and can result in resolution failures, making your clever hack worthless.
I don't know why I bother providing feedback, since people from your school of thought (I'm looking at the 37signals community as a whole, here, which you're being a shining steward of) just get defensive and take your software being broken personally. You wrote a poor DNS server. Read the spec, study BIND's or NSD's source to understand the years of work that went into this before you, and understand the problems I've pointed out. I just get annoyed when people flagrantly misimplement DNS, because that starts trends, like Heroku suggesting for a long time that you use a CNAME for @ (don't do that).
> You wrote a poor DNS server. Read the spec, study BIND's or NSD's source to understand the years of work that went into this before you, and understand the problems I've pointed out.
Why? xip.io exists and works. If he had to read hundreds of pages of technical specs or thousands of lines of source to implement it, it wouldn't exist.
Feel free to make your own spec-compliant or better-working version though; Sam's done the same for RVM, and I'm sure he wouldn't have any problem with better software existing.
sscheper, I love your average on HN (-.2). In response to jsprinkles: I do believe this was just a hack and not exactly intended for public consumption, but someone decided "hey, that's pretty cool, let's chuck it on the web". Which has the obvious result of drawing the opinions of hundreds :P
This was an announcement; "someone" in this case is a co-worker of the author. The author of xip.io (sstephenson) came on thread to discuss this and a related announcement about Pow.
That's funny. I would interpret it the other way around. It doesn't say much for SV if they have to make money off of redundant little .js hacks like this. They have nothing better? That's the problem with SV. Lots of stupid money, conniving VC, and no standard for what constitutes an actual business. They can take anything and spin it into a "company" just to reach a "helluva exit". What a joke. It is all going to implode.
Honest question.