
I use it as a captcha.

It's great for keeping crawler bots out, and easy enough for humans to get past.

Once a user logs in, I set a cookie, and the user is not prompted for the auth again.

The beautiful thing about this scheme is that the cookie is always sent, so I can create a rule which bypasses auth when the cookie is present.
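
(For illustration only, and not necessarily how the commenter's stack does it: a minimal sketch of that flow as Python WSGI middleware. The cookie name and credentials are placeholders.)

    # Sketch: Basic Auth gate that sets a cookie on success and skips the
    # prompt whenever that cookie comes back with the request.
    import base64
    from http.cookies import SimpleCookie

    COOKIE_NAME = "auth_ok"          # hypothetical
    EXPECTED = ("user", "hunter2")   # placeholder credentials

    def basic_auth_gate(app):
        def middleware(environ, start_response):
            # 1. Cookie present? Bypass auth entirely.
            cookies = SimpleCookie(environ.get("HTTP_COOKIE", ""))
            if COOKIE_NAME in cookies:
                return app(environ, start_response)

            # 2. Otherwise check the Authorization: Basic header.
            header = environ.get("HTTP_AUTHORIZATION", "")
            if header.startswith("Basic "):
                decoded = base64.b64decode(header[6:]).decode("utf-8", "replace")
                if tuple(decoded.split(":", 1)) == EXPECTED:
                    # 3. Success: set the cookie so the browser is never prompted again.
                    def set_cookie(status, headers, exc_info=None):
                        headers = list(headers) + [
                            ("Set-Cookie", f"{COOKIE_NAME}=1; Path=/; HttpOnly")]
                        return start_response(status, headers, exc_info)
                    return app(environ, set_cookie)

            # 4. No cookie, no valid credentials: ask the browser to prompt.
            start_response("401 Unauthorized",
                           [("WWW-Authenticate", 'Basic realm="captcha"'),
                            ("Content-Type", "text/plain")])
            return [b"Authentication required\n"]
        return middleware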

Basic Auth is one of the most supported features of HTTP, supported even by Mosaic. There's one Chrome release, I think 65.x, which screws it up when used together with gzip and requires a page reload after authenticating, but that's the only exception I know.




> There's one Chrome release, I think 65.x, which screws it up when used together with gzip and requires a page reload after authenticating

Fortunately, that Chrome version is dead in the water (unlike Chrome 49, which is the last version for XP and Vista).


> unlike Chrome 49, which is the last version for XP and Vista

Oh God.


Agreed. My sympathies that the GP even needs to know this.


No need, I just angrily redirect users to FF52 (Firefox tends to be the last major browser maintained for older systems anyway). Plus, I now require a minimum of TLS 1.2 with specific PFS ciphers, which nukes even the Windows Embedded POS version of Windows XP for Chrome (Firefox brings its own cryptographic libraries).


Thanks for the tip about Chrome 49. I'll add it to my list of browsers to test with more frequently.

65.x happens to be the last version added to Ubuntu 13.x, which is what one of my devices came installed with, and I'm not motivated to change it.


Why would you stick with Ubuntu 13.x? It's not even an LTS version.


It's what came with the machine and works fine for my purposes.


This is my preferred use of basic auth: just set the message to “enter anything for username and password” and accept anything. Search crawlers won’t be able to index the site, which protects me from haters finding my content, and RSS feed URLs can just hardcode some u/p without consequence. It’s all the upsides of the modern web without any of the harmful behaviors enabled by global search engines.
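
(The hardcoded credentials can usually just ride along in the feed URL's userinfo part, e.g. https://anything:anything@example.com/feed.xml; many feed readers turn that into the Authorization: Basic header, though the exact behavior depends on the client.)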


> Search crawlers won’t be able to index the site, which protects me from haters finding my content, and RSS feed URLs can just hardcode some u/p without consequence.

Sadly, I've seen at least one exception to this rule. Somehow, a search engine crawler wised up to my admin/admin captcha, and I had to change it.


Yeah, one exception will always exist somewhere, but I'm not much worried about the exceptions.


It's an older computer, and that's what came pre-installed on it. I'm not interested in experimenting; I just leave it with what works.

When browsing a handful of trusted sites from behind a NAT on a secure network, I don't think it's that much of a security risk.


> The beautiful thing about this scheme is that the cookie is always sent, so I can create a rule which bypasses auth when the cookie is present.

You don't even need the basic auth for that.

Years ago I needed to expose my pfSense WebGUI on the default HTTPS port, but I didn't want it to be so obvious, so I made a couple of HAProxy rules, which allowed me to open https://pfsense.tld/open-sesame to set a cookie, after which I could open the default https://pfsense.tld/ just fine and see the WebGUI. Without the cookie, there was just a 404 for everyone.

It wasn't the best implementation (and some parts of the WebGUI didn't like it), but it worked and let me access it even on my smartphone.
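
(For the curious, a rough sketch of what such HAProxy rules can look like; the commenter's exact config isn't shown in the thread, and the path, cookie name, and backend address here are made up.)

    frontend fe_https
        bind :443 ssl crt /etc/haproxy/site.pem

        # Hitting the magic path sets a cookie and bounces back to /
        http-request redirect location / set-cookie sesame=1 if { path /open-sesame }

        # Anyone without the cookie gets a plain 404
        acl has_sesame req.cook(sesame) -m found
        http-request deny deny_status 404 if !has_sesame

        default_backend be_webgui

    backend be_webgui
        server gui 192.168.1.1:443 ssl verify none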


Embracing security through obscurity like that is also how I decided to help protect my password manager, Vaultwarden. It's open to the internet on 80/443, but its URL is `subdomain.domain.tld/some-secret-path/`. It's dead simple, but unwanted visitors indeed never even see that site. Of course, even if they did, they'd still hit the regular login prompt with MFA.


I'm using something similar. Even if people tried to brute force their way to the hidden door, these attempts would probably be blocked by Cloudflare.


Have you checked if the secret path leaks in referrers?


It doesn't seem so. I clicked a random link from inside the Vaultwarden web page (which I never use anyway, in favor of the Bitwarden browser plugins) and followed the requests in Firefox's Browser Console: no request carried a Referer HTTP request header. Vaultwarden does not send the Referer header cross-origin:

https://github.com/dani-garcia/vaultwarden/blob/920371929bc8...

My home server uses Caddy and its JSON logs, which are very easy to parse. Through the dynamic DNS solution I use (Docker image qmcgaw/ddns-updater), I have a list of all of my own IP addresses. Add to that others like my work's IPv4 block, and I get a collection of 'known', i.e. harmless, IP addresses. Filtering those out in a little pandas-based Python tool leaves any requests from unknown IPs that reached the secret endpoint. Logs go back around a week. Another tool 'enriches' each log entry with IP lookup info from ipinfo.io; their free API tier is enough for my uses. That way, I can filter for request origin countries, hostnames, etc.

The entire pipeline is automated, but triggered manually on-demand. So far, no hits from unknown IPs to the endpoint!
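
(Roughly, the filtering step could look like the sketch below. Assumptions not in the comment: Caddy v2 JSON access logs with request.remote_ip / request.uri fields, a known_ips.txt file, and a hardcoded secret path; the ipinfo.io enrichment step is left out.)

    # Sketch of the filtering step: read Caddy's JSON access log, drop
    # requests from known/harmless IPs, keep anything that hit the secret path.
    import json
    import pandas as pd

    SECRET_PATH = "/some-secret-path/"                      # hypothetical
    KNOWN_IPS = set(open("known_ips.txt").read().split())   # own + work IPs

    rows = []
    with open("access.log") as fh:
        for line in fh:
            entry = json.loads(line)
            req = entry["request"]
            rows.append({
                "ts": entry["ts"],
                # field name depends on Caddy version: "remote_ip" vs "remote_addr"
                "ip": req.get("remote_ip", req.get("remote_addr")),
                "uri": req["uri"],
                "status": entry["status"],
            })

    df = pd.DataFrame(rows)
    unknown = df[~df["ip"].isin(KNOWN_IPS)]
    hits = unknown[unknown["uri"].str.startswith(SECRET_PATH)]
    print(hits.sort_values("ts").to_string(index=False))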


That's a good way to investigate; I would've pointed to a server I control and checked the logs.


Kind of like “port knocking” but for HTTP.


I like the idea of a multistep version: load domain.com/string1, domain.com/string2, and domain.com/string3 in order, within a certain timeframe.
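
(A rough sketch of that idea in Python, with made-up paths and timing: track which step of the sequence each client has reached, and only open the gate once the steps arrive in order within the window.)

    # Multistep "HTTP knock": each client IP must request the knock paths
    # in order, within a time window, before the gate opens.
    import time

    KNOCK_SEQUENCE = ["/string1", "/string2", "/string3"]   # hypothetical paths
    WINDOW_SECONDS = 30

    # ip -> (index of next expected knock, time of last correct knock)
    progress: dict[str, tuple[int, float]] = {}
    allowed: set[str] = set()

    def handle_knock(ip: str, path: str) -> None:
        """Advance this client's knock state; grant access after the last step."""
        now = time.monotonic()
        step, last = progress.get(ip, (0, now))
        if now - last > WINDOW_SECONDS or path != KNOCK_SEQUENCE[step]:
            # Wrong path or too slow: start over (counts only if it's step one).
            progress[ip] = (1, now) if path == KNOCK_SEQUENCE[0] else (0, now)
            return
        if step + 1 == len(KNOCK_SEQUENCE):
            allowed.add(ip)          # full sequence completed in time: open the gate
            progress.pop(ip, None)
        else:
            progress[ip] = (step + 1, now)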



