HTTP Basic Auth could be so much better with a little help from browsers. If it were a bit better, most websites wouldn't need to implement login pages over and over again. Plus it would be more secure, since the popup lives in its own security context.
* Add a button to log out. Logout never really worked across browsers with basic auth.
* Allow injecting a logo or a tiny bit of customization for branding. The default popup is too ugly.
* Upgrade Digest auth to modern crypto standards. Stop passing plaintext passwords over the wire.
That's it. It's not a massive amount of work, and it could simplify a lot of the Internet.
If you are using HTTPS, basic auth is every bit as good as any other login form.
Some have suggested using JavaScript to encrypt passwords before sending - but in my opinion, this is generally stupid: it breaks browsers without JavaScript, and it doesn't protect you from the server at all, because a hacker could just change the JavaScript to send plaintext copies somewhere. You are reliant on the server being a source of truth either way.
No, it would be much better to use a zero-knowledge proof (typically a PAKE - password-authenticated key exchange) to demonstrate that the user knows their password without sending that password over the channel.
Sending the password over HTTPS doesn’t expose the password to passive observers, but it does unnecessarily expose the password to the server.
> "but it does unnecessarily expose the password to the server"
Yes - it does - but I am having a hard time thinking that this actually matters in the real world.
I can understand why not sending the password to the server would be theoretically more secure. However, in practice, what hacking attempts does this actually prevent?
If I were a hacker, I could add JavaScript to send plaintext somewhere. If I were a hacker, I could change the login form to stop hashing client-side and instead send passwords over to the server for hashing and inspection.
It's theoretically more secure... but against what? Accidental logging? I mean, I guess, but then just set up your logs correctly instead of breaking login for anyone without JavaScript.
In my opinion, client-side hashing is security theater. It sounds impressive, but it doesn't really stop any real-world attacks, and the attacks it does prevent can be prevented by simpler methods. Ironically, those other methods (e.g. making sure your logs are clean) are so much simpler to implement that they're probably more secure than trying to build a correct client-hashing implementation in the first place.
Yes, it happens all the time that passwords get logged.
> I mean, I guess, but then just set up your logs correctly instead of breaking login for anyone without JavaScript.
I have no idea what "set up your logs correctly" is supposed to mean but clearly no one's doing it. What is a "clean log"?
> If I were a hacker, I could add JavaScript to send plaintext somewhere. If I were a hacker, I could change the login form to stop hashing client-side and instead send passwords over to the server for hashing and inspection.
This assumes you have full control over the client page. But that's not necessarily (or even often - most people don't serve JS from the same code that serves their auth API) the case.
1. The JS could be loaded from a CDN, not the same service that has access to the password. You may have absolutely no control over the JS on the page.
2. Every point between the browser and the password DB is a point where the password is in cleartext. So a compromise of any of those, including any logging paths, is a compromise of the password.
3. I don't think anyone cares about breaking login for users who don't use JS, nor should they.
What's more, a ZKP means that if your password database is owned, the impact is far less. If you're doing password storage right on top of a ZKP, you can practically make your password DB public.
Even if we're talking about a basic client side hashing approach you're significantly improving security, but to be clear, the parent poster is talking about ZKPs, which involve more than that.
> "I have no idea what "set up your logs correctly" is supposed to mean but clearly no one's doing it. What is a "clean log"?"
As you admitted, passwords get logged "all the time." So the reasonable solution in most cases would be to ensure all applicable logs were clean and not logging passwords, not over-engineer a JavaScript-powered client-side-hashing algorithm. That's like taking a sledgehammer to a nail.
Sorry, but the idea that implementing "clean logs" is:
a) Tractable
b) Simple or straightforward
compared to client side hashing is absurd.
Client-side hashing is a one-time solution that exists in one place, requiring no 'cooperation' from other code to be safe. "Cleaning logs", which you still haven't defined at all, is going to be a constant maintenance burden that can break in any place where you log, i.e. absolutely fucking everywhere, by absolutely fucking everyone.
And how exactly would having the hash in the log versus having the password in the log be an improvement? If someone gets that hash it's exactly the same as having the password, as it is the hash that gets sent to your server in the http request.
Edit: alright, I saw that you mentioned password reuse in another reply, fair enough, it does help against that.
Proper client-side hashing uses a nonce generated by the server. The transmitted hash is only valid in that session. Even if the nonce and the hashed product are logged, you can't reuse it, because the nonce will be different for your login attempt.
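A minimal sketch of that flow (scheme details are illustrative - HMAC-SHA-256 over a PBKDF2 hash stands in for whatever a real implementation would use, and the iteration count is arbitrary):

```python
import hashlib
import hmac
import os

ITER = 200_000  # illustrative work factor

# Registration: server stores a salted, stretched hash of the password.
server_salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", server_salt, ITER)

# Login: server issues a fresh nonce for this attempt.
nonce = os.urandom(16)

# Client: recompute the same hash (server supplies the salt), then bind
# it to the nonce. Only `proof` crosses the wire.
client_hash = hashlib.pbkdf2_hmac("sha256", b"hunter2", server_salt, ITER)
proof = hmac.new(nonce, client_hash, hashlib.sha256).digest()

# Server: verify against its stored hash. A logged `proof` is useless on
# the next attempt because the nonce will be different.
expected = hmac.new(nonce, stored, hashlib.sha256).digest()
assert hmac.compare_digest(proof, expected)
```

Note the trade-off this leaves in place: the server's stored hash is itself enough to compute a valid proof, so a database leak is still password-equivalent for this site (just not for other sites).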
How complex and unusual is the authentication system you are working on?
If it's a consumer-facing web app, it's not like your password is logged in a million different places. It's possibly logged by your web server software (nginx in my case), and possibly by your web framework's request handler. It's not terribly hard to ensure that the points where the password passes through don't include it in their logs, or that it's never logged to begin with.
If it's a massive enterprise system, I would hope that you would use a single-sign-on system with a centralized login page rather than exposing passwords to every web app within it. And then, just ensure that said passwords are not stored in logs. This is what I meant by clean logs - anywhere the password is used, ensure there is no record. How much code and how many layers does a password need to go through?
And yes, maybe client-side hashing does resolve some attacks, but I remain convinced that it is overkill and less protective than it initially seems. In the future we may move to zero-knowledge proofs, but we aren't there yet for general users.
Plenty of very simple web apps terminate TLS at the edge, i.e. at something like API Gateway. So if you then turn on request logging... voila. Hardly a complex scenario; happens all the time. Or at the application layer:
@path("/login")
def login(request):
    print("I have a bug, I'll just log the whole request real quick to see wtf is up!", request)
It's actually very hard to ensure that the password doesn't get logged. It requires constant discipline and maintenance. Any bug or change could expose it.
> I would hope that you would use a single-sign-on system with a centralized login page rather than exposing passwords to every web app within it
Implementing SSO is a great idea. Not everyone wants to use SSO though. If I were hosting a porn site I wouldn't expect my users to happily "sign in with Facebook". It's also much more work than a basic client hash.
> This is what I meant by clean logs - anywhere the password is used, ensure there is no record. How much code and how many layers does a password need to go through?
You underestimate the places logs can be generated or passwords can be accidentally persisted.
4. Audit logs for security, such as via eBPF or other auditing frameworks
5. The application, at any layer. In Python, did you know I can get a reference to the calling function? I've done this for logging purposes before, in fact. So even if your caller is super careful not to pass the password in, saving me from accidentally logging it, I can crawl back up the stack and get it anyway.
6. Your database logs
7. Services/ RPCs that sit between your auth API and your database
And as your business grows and your code changes you'll have to track all of that.
Orrrrrrrrr, you can just hash your password on the client side and significantly reduce the damage of a leak. Or put in the extra work to implement OPAQUE. Or use WebAuthn; that's cool too.
With public-key crypto you can implement a challenge-response. The server generates random garbage and sends it to the client; the client signs the garbage with its private key and sends back the signature, which the server can verify with the public key.
Another version of this uses a shared secret instead of a public/private pair: replace the signature with simply HMAC(secret, garbage) and keep the rest of the flow the same.
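The shared-secret variant is only a few lines. A sketch (the secret is generated in-process purely for illustration; in reality both sides would already hold it):

```python
import hashlib
import hmac
import os

secret = os.urandom(32)       # shared secret known to server and client

# Server: generate random garbage and send it as the challenge.
challenge = os.urandom(16)

# Client: MAC the challenge with the shared secret and send the result.
response = hmac.new(secret, challenge, hashlib.sha256).digest()

# Server: recompute and compare in constant time. The secret never
# crosses the wire, and the response is bound to this one challenge,
# so a captured response can't be replayed against a fresh challenge.
assert hmac.compare_digest(
    response, hmac.new(secret, challenge, hashlib.sha256).digest()
)
```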
Because many inputs map to that hash (the hashing function is many-to-one, not injective). People reuse passwords all the time. If the hash leaks, it only affects that particular service.
>As you admitted, passwords get logged "all the time." So the reasonable solution in most cases would be to ensure all applicable logs were clean and not logging passwords, not over-engineer a JavaScript-powered client-side-hashing algorithm. That's like taking a sledgehammer to a nail.
Maybe, but as a client I can't verify that this happens. So never sending the cleartext password is the only solution.
> 1. The JS could be loaded from a CDN, not the same service that has access to the password. You may have absolutely no control over the JS on the page.
You have full control over it, as long as you don't embed any third-party maps or crap like ads.
Zero knowledge proofs protect against the password leaking due to any kind of error from the server. That's not security theater, passwords leak all the time.
And the nice thing is that browsers already implement that; no need for JavaScript. The bad news is that the UX sucks so much that you just can't use it, so it's as useful as it not being there.
But Digest password authentication protects against nearly none of those errors. In this case, leaking a digest is just as bad as leaking a password.
> Zero knowledge proofs protect against the password leaking due to any kind of error from the server. That's not security theater, passwords leak all the time.
That's a little confused. PAKE usually still stores a password or hashed password on the server, with the password potentially recoverable by brute force reversing the hash. What it avoids is transmitting a hash over the wire that can be reversed into a password, the way HTTP digest auth allows. The stuff transmitted over the wire by PAKE instead reveals no info about the password.
Most login pages on web sites today send the cleartext password underneath the https encryption layer, so the server sees the password (though hopefully stores it in only in hashed or MAC'd form). That is equivalent to http basic auth sent through https.
I use basic auth + https for my own stuff (where I don't care about styling) and it suffices for most things. Obviously you can escalate from there to 2fa, client certificates with credentials wrapped in hardware tokens, or whatever. I haven't needed that for personal stuff so far.
Most PAKE implementations actually make a security trade-off here - they protect the password transmitted over the wire, but you end up with a weaker password hash stored in the server's database.
Modern PAKE implementations, like SRP, don't store the plaintext password on the server side, but rather store a "verifier", which is essentially a hashed and salted version of the password. This way the server never sees the actual password, even at the registration phase.
The problem is with the hashing function though. For instance, all the SRP implementations I've seen use fast hash functions like SHA-1, SHA-256 or Blake2b by default. But contrary to folk wisdom (which is unfortunately often repeated here as well), hashing and salting a password is not enough. This is not 2003 anymore, and rainbow tables are not your main threat - your main threat is a cluster of fast GPUs demolishing your hashed passwords at rates that often just start at 1 GH/s.
The best practice nowadays is to use a function which is both computationally expensive and memory-hard such as scrypt or the newer Argon2 (and not PBKDF2!). You could very well do that with a PAKE, but now you run across a nasty UX trade-off: the computationally expensive function would have to be executed on the client side every time the client authenticates. There are WASM implementations of Argon2 out there, so this is probably not a big issue on a beefy PC, but you'll have to aim for the lowest common denominator here, and tune your function for a low-end smartphone.
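As a concrete example, Python's standard library exposes scrypt, and the cost parameters are exactly where the low-end-device trade-off bites (the numbers below are illustrative, not a recommendation):

```python
import hashlib
import os

salt = os.urandom(16)
# n = CPU/memory cost (must be a power of 2), r = block size, p = parallelism.
# Memory per guess is roughly 128 * n * r bytes (~16 MiB here), which is what
# makes GPU clusters expensive - and what a cheap phone must also pay on login.
key = hashlib.scrypt(b"correct horse battery staple",
                     salt=salt, n=2**14, r=8, p=1, dklen=32)
assert len(key) == 32
```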
tl;dr: In practice, with a carefully implemented PAKE, you'll give the attacker a 10-100 times faster hashrate than you would with a plain-text password authentication approach implemented with the same amount of care. In real practice, you'll probably use whatever defaults your library gives you and often end up with a ridiculously weak hash.
Now, I said this is a trade-off. If you implement PAKE well, you will reduce the resiliency of your stored password hashes against brute force attacks, but this is what you get in return:
* Protection against password sniffing at TLS-terminating proxies
* Protection against hackers taking over your server and stealing user passwords as they login (Online password leak)
* Protection against MITM with a stolen CA key[1]
All of these things are still an issue with clear-text passwords even if you're using TLS. If any of these issues are a concern for you, this means you consider your users' passwords to be significantly more sensitive than the data that flows between your clients and servers, but that could be a valid threat model. In this case PAKE looks like a nice solution.
For everything else, I don't recommend PAKE. It is harder to implement correctly (with library defaults being insecure as I mentioned above), and this is more important than whatever theoretical strengths it has. Cryptographic systems are generally broken because of incorrect implementation rather than theoretical weaknesses in the algorithm.
PAKE is highly phishing resistant. If you type your password for an important website into a browser-controlled PAKE UI, but you’re being phished and the browser tries to authenticate to a malicious website, the worst the website can do is guess one single password. It can’t relay the password to the real website.
Good point that it protects against a phishing site that exactly replicates the victim site but with a different URL.
But the phishers could do a slight variation. They could create a website that looks very similar to the browser's Basic Auth popup, but implemented in HTML and Javascript. Most people won't notice the difference. Most people don't understand the line of death[1].
In the context of the pop-up: a simple pop up can be faked. But what if the browser would flash all the borders (and other stuff outside the line of death) when the real popup is displayed?
I'm not saying any of this is 100% foolproof, just that we should be doing some UI experiments on real people to see what works better.
This is what was done in the EROS [1] (Extremely Reliable Operating System) UIs - it was not possible for a user window to be rendered completely indistinguishable from system windows like, you guessed it, a password prompt.
This is also what a secure attention key is for. Sadly the well known implementation (Windows NT) made it sufficiently obnoxious that it went away.
I can imagine keyboards having a special “password” key and trying to train people that all passwords start with the password key. I don’t know if this would work, but it can’t be worse than Ctrl-alt-delete.
- If you share your password across sites that used a hypothetical browser-implemented PAKE, site A cannot log in to your account on site B
- If a site is attacked, there is no risk that password material was extracted from application memory — site operator can dump session tokens and safely re-auth users.
This sounds like an interesting system, but I think you are arguing for a system that as of now is only theoretical, and it is very different from the current crop of JavaScript-powered client-side-hashing methods which I am arguing against.
The first implementation was Bellovin and Merritt's Encrypted Key Exchange in 1992. In 2000 a provably secure implementation was released. PAKE has been around for quite some time and is proven, and is in wide use in the field. Here's a decent article on the subject: https://blog.cryptographyengineering.com/2018/10/19/lets-tal...
I don't dispute the technology exists - I dispute that this technology can be deployed on a web app to general users effectively. I don't believe that is currently possible in production effectively in a way that neutralizes my arguments that an attacker could just change the JavaScript to record passwords somewhere.
That's why the browser's built-in login form could be so useful: it would have its own security context, so I, as a user, can be sure that no JavaScript could read it.
I would happily sign up and reuse a password for a website that I didn't trust if it were as secure as described (PAKE + browser built-in login)
I'm really not sure what you are disputing here. PAKEs are for preventing man-in-the-middle attacks, not for securing a local program or preventing malicious code from running in the browser. No one is advocating "use a PAKE and all your problems are gone" - it's about addressing key exchange over the wire and eliminating an entire class of attack. Password managers are a tough subject - they are like coffee. Yes, there may be small amounts of toxic stuff in it, but that is offset by a factor of 10,000 by the number of people who don't get in a wreck on the way to work, thanks to being awake and aware from 70mg of caffeine.
PAKE is theoretical from a web-development perspective because there is no secure way for me to implement PAKE in my web app for a user to log in with in 2022. It doesn't exist - you tell me how to implement PAKE login right now.
Zero Knowledge Proofs would be awesome (something like WebAuthn/FIDO right now?) but I am arguing more in general against client-side hashing methods that are currently usable, mainly JavaScript-based ones.
I meant, let's say I picked a framework. Django, Laravel, Express, similar.
How would I implement PAKE login for my users? How would I get every user to log in with PAKE? How would I ensure that my code for PAKE could not be overwritten if a hacker took over part of my server?
It doesn't exist, AFAIK, on a user-facing front at this time in a secure way that a hacker couldn't just change the logic for.
Implementing OPAQUE isn't overly hard. You can find pseudocode, implementations, state machine diagrams, etc, online. Here's a good post that links to implemented code:
https://blog.cloudflare.com/opaque-oblivious-passwords/
I know nothing of framework support, I don't use frameworks. I'm likely going to contribute some open source code in the near future from my company to simplify things though.
But you could also just have your client do something like this (runnable Python, with an explicit iteration count since PBKDF2 needs one):

import hashlib
# deterministic, site-specific salt - predictable, but unique per site and user
salt = hashlib.sha256(("your company name goes here" + username).encode()).digest()
password_hash = hashlib.pbkdf2_hmac("sha256", plaintext_password.encode(), salt, 200_000)

and get some nice benefits.
> How would I ensure that my code for PAKE could not be overwritten if a hacker took over part of my server?
Depends on the server and the level of control. But it'll help in a number of cases. You're assuming the attacker has full control over the web page's contents (among other things - even if an attacker controlled the page's contents, HttpOnly cookies can't be read by script and shipped off to an attacker-controlled server), which is a very specific, powerful position to be in.
You are still exposing the password_hash to the server and any compromise there (software or hardware, as described in your link) would still let an attacker grab password_hash, craft a custom client, and send it as if the original client had hashed the plaintext_password to begin with.
The attacker doesn't need to know plaintext_password, just the string you use to authenticate with in order to replay it. The password_hash becomes the new password.
And since the salt is derived on the client from predictable values, it still opens the password up to precomputed-table attacks etc.
If the attacker only has access to the hash, that hash is only usable for your website. If the user uses the same password for another site, an attacker cannot log into that other site using the hash.
That's really the main benefit of this approach - it reduces the impact of password reuse.
If I’m understanding your argument correctly (I may not be) - implementing PAKE would only be helpful in a scenario where an attacker gets access to hashed passwords, but isn’t able to modify front-end code to directly intercept unhashed passwords, right?
Gotcha - and I can definitely see the utility with a large userbase.
From a corporate perspective, with a segmented + well-firewalled architecture, and a lot of surface area for injection vulns, I totally agree with you. The article was priming me to think of a flat, single-box solodev environment, where if someone breaks in, they own everything - which is why I think the original post above us mentioning PAKE is getting a lot of questioning.
PAKE isn’t about preventing compromise at the serving of the front end. That’s still the responsibility of the server maintainer. PAKE is about reducing damage potential. The responsibility of keeping your servers secure applies regardless of use of PAKE. PAKE just makes it possible that if your database is leaked in some smash-and-grab someone can’t just run a rainbow table against it to suss out passwords against emails.
If someone gets a copy of the encrypted traffic - and we know that 'full take' capture is done routinely for some parts of the internet - a credential in the plaintext means that if they are ever able to decrypt it, even weeks later, they can make fresh connections using the still-valid credential afterwards.
If the server issues a different challenge each time, decrypting one response doesn't buy you anything.
I guess this makes sense - but who is capable of decrypting it weeks later unless the original private key was stolen, in which case it could be decrypted almost in real-time? Maybe if you were concerned a nation-state or something was trying to get your private key, but you've got bigger fish to fry at that point.
Also, using TLS with Perfect Forward Secrecy (all modern cipher suites), it's not possible to just capture the traffic and decrypt it later. You have to know the private key and actively MITM.
Which can safely be assumed to happen whenever the equipment is provided by another party such as an employer. All in the name of security, just not that of the user.
> If I were a hacker, I could add JavaScript to send plaintext somewhere. If I were a hacker, I could change the login form to stop hashing client-side and instead send passwords over to the server for hashing and inspection.
If a zero knowledge (ZK) system combined with HTTP Basic was used, then account entry would come from the browser itself and not a web form that could be intercepted by JavaScript.
Further, a ZK system would help with the silliness of folks using bad algorithms (straight MD5 / SHA-1) to store passwords, or even storing them in plaintext.
Right - it would, I don't disagree, it'd be awesome. It's something like WebAuthn.
I'm arguing here more against some people who think that using a JavaScript-based system to hash the password entry before sending it to the server is a good idea.
> I'm arguing here more against some people who think that using a JavaScript-based system to hash the password entry before sending it to the server is a good idea.
So you're arguing against something nobody is arguing for?
I could be missing something, but they are not suggesting a JavaScript version; they're suggesting the version where the zero-knowledge algorithm is used at the browser level. The password would never leave the secure context of the browser, and nothing other than the algorithm's bits would be sent.
Credential stuffing is a thing. So it’s pretty good if a compromised website just cannot leak your password.
But as the other comment pointed out, the current situation with web-form login is that the password is sent to the server, so it wouldn't be worse than the status quo.
A compromised website that was well designed should not leak your password any more than a client-side hashing implementation. This is because the passwords are hashed in the database. Client-side hashing means that, yes, initially the website is not receiving plaintext passwords, but a few quick code edits to maybe add some logging JavaScript or disable the client-side hashing implementation will fix that.
And credential stuffing? Client-side hashing does absolutely nothing to prevent credential stuffing other than that you may need a GPU to do a lot of hashes quickly. Client-side hashing doesn't make a server handle more or less authentication requests.
Ah yes, "if nobody makes any mistake there's no problem", that's worked so well forever hasn't it?
> Client-side hashing means that, yes, initially the website is not receiving plaintext passwords, but a few quick code edits to maybe add some logging JavaScript or disable the client-side hashing implementation will fix that.
That makes quite literally no sense, did you miss the entire thing and go off with whatever?
The request here is to make the browser's support for HTTP authentication better. The entire point is that there is no "quick code edit" without owning the entire browser at which point you're quite thoroughly owned anyway.
Scenario 1: Attacker compromises ECOMMERCE_SITE where you have a login. The ECOMMERCE_SITE uses md5 for logins, so the attacker just brute-forces the hash and then uses that password to compromise your logins on other sites.
Scenario 2: The ecommerce site has upgraded to SHA512, so cracking isn't an option. But the site is relying on basic auth, so the attacker simply sniffs your password when you auth.
Scenario 3: the ecommerce site is using a secure zero-knowledge auth against a hashed/salted/peppered/whatever credential. They cannot brute force it, and the server never sees your password. They can mess around with the ECOMMERCE_SITE but cannot pivot to any of your other logins.
>If I were a hacker, I could add JavaScript to send plaintext somewhere.
We've just shifted from "quiet, persistent threat" to "hacker announces to the world that he's in". Changing javascript on a prod website is going to trigger alarms.
> We've just shifted from "quiet, persistent threat" to "hacker announces to the world that he's in". Changing javascript on a prod website is going to trigger alarms.
While I'd like that to be true, it really isn't. There have been loads of card skimming operations injected into production sites which weren't noticed for sometimes months.[0][1][2][3]
>Yes - it does - but I am having a hard time thinking that this actually matters in the real world.
In the 90s and early aughts it was fashionable for php sites to store passwords hashed, usually some combination of a salt with md5 and later the SHA variants.
For a brief few years there were entire communities on IRC and elsewhere dedicated to making rainbow tables for cracking stolen password hashes.
Often the salt would be common across all passwords, so if you got a database dump it was a gold mine for credentials.
bcrypt solves that in two ways. It uses per-password unique salts, and it has a tunable cost parameter to deliberately slow down the computation, making it infeasible to build a rainbow table.
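bcrypt itself needs a third-party package, but both properties are easy to see with the standard library's PBKDF2 (a sketch of per-password salts plus a tunable cost, not a bcrypt implementation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    salt = os.urandom(16)                  # unique per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest        # store all three

def verify(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

# The same password hashed twice produces different records, so one
# precomputed table can't cover both; the iteration count is the
# tunable cost parameter.
s1, i1, d1 = hash_password("hunter2")
s2, i2, d2 = hash_password("hunter2")
assert d1 != d2
assert verify("hunter2", s1, i1, d1) and not verify("wrong", s1, i1, d1)
```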
I don't think this is a major concern here; many websites pass the username and password in plain text and rely on HTTPS for security (just take a look at HN's login form). If I'm not wrong, J2EE's login is implemented using plain text for the password as well.
Don't such methods require you to store the password in a non-hashed form on the server? That seems mostly worse than this approach, since now the passwords of all users are visible at rest and can be used directly as authentication. RADIUS I know commonly uses this approach and it always made me nervous having the passwords in plaintext.
Server sends client a salt, client hashes the salt and password and sends back to server.
Implement this as a built-in feature of the web browser, and the browser can show a special icon or symbol to mark that the password will be sent hashed (and later show a warning on password fields sent via plaintext).
1) A different salt each time, meaning the server must know your plaintext password to validate, or
2) The same salt every time, in which case the hash is essentially the password since that's all the attacker has to pass to the server next time.
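Option 2's failure mode in miniature - with a constant salt, whatever the client transmits is itself the credential (a toy sketch; plain SHA-256 stands in for the client-side hash):

```python
import hashlib

fixed_salt = b"static-per-user-salt"

# Server's stored record: the same value the client will transmit.
stored = hashlib.sha256(fixed_salt + b"hunter2").digest()

def server_check(token: bytes) -> bool:
    return token == stored

# The legitimate client hashes its password...
assert server_check(hashlib.sha256(fixed_salt + b"hunter2").digest())
# ...but an attacker who lifted the hash from a log or dump never needs
# the password - replaying the hash logs in just the same.
assert server_check(stored)
```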
Could that really work? Sounds like it's highly abusable if someone compromises the database and gets a list of all the hashes. Now, they don't even need to use rainbow tables or any brute force to compute the password. They just send the hash to the server and will be logged in.
Yes, if an attacker compromises the database they can send the direct hash.
The point is that a malicious or badly-secured site can't use your password on other websites, because ultimately most people use the same password on many different sites.
The salt-and-hash combo is how email logins worked for decades. The problem with this approach is that you really want different salts each time, which requires that the server know the plaintext password.
You've responded as if the bit you've quoted is advice to individual developers in the present context, but the topic of conversation was about extending browsers so that the standard login form would do this (... better than existing auth digest). If we did that and people were used to using the browser's built-in login dialog, and (as with https) we made it visible what security features were enabled, then a trivial server-side change wouldn't compromise user passwords.
Right... that would be awesome. But it would still be susceptible to a hacker replacing it with a traditional login page with some logging... unless somehow you could prevent any traditional web pages from working.
I beg to differ. You'd be surprised how many users probably would not recognize any difference whatsoever as long as the hacker got things looking relatively identical.
Some users will notice. If we've succeeded to the point of "traditional login pages" being unusual and remarkable, many users will notice. Probably in any case some users will fail to notice.
In any case, though, it seems likely to reduce the impact of a compromise.
Except you usually send the password once, and get back a cookie with a token that you present for the rest of your session (possibly to other endpoints/servers). Whereas with basic auth, the login/password keeps being sent.
Please do not add logo customization. This will end with prompts asking you for your Apple and Microsoft passwords which are difficult to differentiate from official prompts (at least for end users).
Thank you! This is a deeply underappreciated point.
I understand how annoying, painful, and disruptive to the user experience it is to have part of your app taken over by browser or OS default widgets and behavior. It's sheer hell on UX. Yet the custom styling that would make it integrate smoothly would be a godsend for attackers and phishers.
The role that UX - and misuses of UX - play in security is often not considered deeply by highly sophisticated users.
Is this actually hell on UX or is that something product people and designers tell us because it’s in their personal interest to create custom solutions for every webapp? iOS apps pretty regularly defer user actions to OS level controls and prompts and frankly I believe the UX there is far superior to webapps using bespoke UX for these. Wouldn’t standardised browser behaviour and OS level styling for common flows and behaviour be a big win for webapps and take some of the drudgery out of web development?
Like you, I suspect it's not, but I've found that leading with telling people that the thing they deeply want to believe is wrong is a poor way to convince them of things. You have to go through a ritualistic performance of empathy first.
What prevents a hacker from cloning the whole web page of, say, the facebook.com login and phishing users for credentials that way? This is not a hypothetical thing; Kali Linux even bundles a utility program for it.
Compared to that, one icon that is the same as the company's is not that threatening.
Not only that, but if you consider a sign-in form that had no logo, it would be way easier to trick users into entering their credentials, because they wouldn't be able to differentiate the forms. Also, OAuth is always branded, AFAIK.
Users may also notice discrepancies in the logo if it was cloned poorly. Though I can't think of any way to stop someone from forging a logo, given all the possibilities: Adobe Illustrator can trace images into SVG, and plenty of companies' SVG logos turn up in a Google search.
Verification of what, precisely? That it's authorized for the domain it's on? This seems like a re-implementation of existing access control systems, with the same weakness of being vulnerable to being copied.
Unless you're somehow able to get everyone to transition at once, never support a non-NFT fallback, and ensure that no sufficiently visually similar logo ever gets registered... which IMO sounds both challenging and like a hacky re-implementation of a trademark system.
This would be verification that any branding on the HTTP Basic Auth popup represents the company it claims to. This would be optional — any site that didn’t use this would still have the default (generic/unbranded) browser behavior.
The stated (and valid) concern is that malicious actors would use fraudulent branding on those browser auth popups — let’s tell the user we are MSFT or AAPL and steal their password.
One solution is for the branding to consist entirely of NFT assets that can all be traced to a definitive owner, and use some DNS-based glue (à la DKIM/SPF for email) to link the NFT to the TLD.
Then your browser can refuse to show the MSFT logo (and show a big red fraud alert page) if the owner of the branding can’t be reliably traced back to Microsoft (owner of the site).
At this point you're talking about checking a signature. Naively, it seems to me like you could skip the NFT entirely and embed a verifiable sig in a securely delivered DNS record. Then it's linked to the domain. You can do this now, with tooling that already exists and is deployable today. You'd need to deliver the record securely to avoid attacks on the glue anyway.
Of course, neither the NFT nor the sig-in-DNS approach actually solves the problem of a visually identical but technically different image (use a slightly different color in a few places, etc.) being used to trick people. I'm not sure what we've gained. The malicious use case would seem like it's not effectively prevented.
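The sig-in-DNS idea can be sketched with a plain hash standing in for the signature (the TXT record name/format here is hypothetical, and the record is assumed to be securely delivered, e.g. via DNSSEC):

```python
import hashlib

# The site publishes a digest of its branding image in a (hypothetical)
# securely delivered TXT record; the browser checks the served logo
# against it before rendering any branding in the auth popup.
def txt_record_for(logo_bytes: bytes) -> str:
    return "logo-sha256=" + hashlib.sha256(logo_bytes).hexdigest()

def branding_allowed(served_logo: bytes, txt_record: str) -> bool:
    return txt_record == txt_record_for(served_logo)

official = b"<svg>official logo</svg>"
record = txt_record_for(official)
```

Note this only proves byte-identity between logo and record, which is exactly the limitation described: an attacker's visually similar but byte-different image, served from the attacker's own domain, would carry its own perfectly valid record.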
There probably is, provided you can get a perceptual model that correctly models all human visual perception and reduce it to a hash. I am not sure one exists currently. Until then, it really does seem like we're trying to find a way to re-implement trademarks in a way that doesn't require the interpretive work that trademarks rely on.
I suspect there's a lot of complexity hidden in the perceptual model requirement, though.
There's a lot more to that. A bank doesn't want the "back" button to work forever; they want to control the lifetime of your session, ideally on the server. Google wants to let you sign into multiple accounts on the same origin. Many others want to have seamless single sign-on across several of their web properties. Sometimes, you want the change of your password to invalidate other sessions (say, when recovering a compromised account); other times, you don't want to kick out your smart thermostat and have to set it up from scratch.
Admittedly, there are some simple use cases where HTTP auth is all you need, but it's just way too inflexible, unless you turn it into some mammoth spec that is never going to be as flexible and tempting as managing user identity yourself.
Especially since HTTP auth doesn't actually mean you can stop doing that anyway. You're still handling account creation, password checking, all the abuse / bot detection bits... all you're getting rid of is the sign-on and logout functionality, which is really not that complicated to begin with.
Maybe we don't want them to be able to do any of that
And I think you are missing the point: the goal isn't to standardize logins, it's to make it impossible for servers to know my password, and hence make password leaks impossible.
That would allow people to reuse strong passwords and not need password managers, because that's what they are doing anyway!
> Maybe we don't want them to be able to do any of that
"We" who? Application owners want that, browser vendors want that (their greatest fear is that mobile will eat the web, so they don't want to make the platform less flexible)... and users generally don't mind.
> impossible for servers to know my password, hence impossible passwords leaks
That would require deeper architectural changes to HTTP auth, but is probably a reasonable goal. That said, it's more readily approximated with unique passwords + having a good password manager. The main risk of password leaks is not that they make that particular breach worse (since the attackers can just grab your data), but that passwords are reused too often.
Federated login is another approximation, where the password is only known to your identity provider, not to every identity consumer. It's modestly successful for some lower-value services.
Depends on whether that is required. For most enterprise software, which nowadays is more and more web based, you don't need all of that. Accounts are created by the system administrator, the password check is fine with the default mechanism of Nginx or Apache with a .htpasswd file, and bot detection and all the other things are not really necessary, especially if the page is not exposed to the internet but only to a LAN.
Besides that, if you need a more sophisticated authentication mechanism, your default nowadays is to go with something that uses the OAuth protocol: so I guess the next step would be to standardize that protocol and have it integrated as a browser API, so that a user doesn't even have to enter a password.
I love it too, when it works... When it doesn't, the users get the login prompt like the basic auth one, which can be confusing because you have no opportunity to add information to that prompt.
The password is still revealed to the server. There are password verification protocols where the password is not revealed to the server either, which is much more secure, as it means that you're not at the mercy of whether the server operator follows good security practices about not saving your password in plain text somewhere.
> There are password verification protocols where the password is not revealed to the server either, which is much more secure as it means that you're not at the mercy of whether the server operator follows good security practices about not saving your password in plain text somewhere.
Well, the most common such protocol is TOTP, which still requires the server to store your full password. In that sense it's worse than a naive password exchange, which only requires the server to store your hashed password.
There are other password protocols that require neither transmission nor server retention of the full password, but it seems worth noting that the protocol we actually have didn't bother with that.
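To make the storage requirement concrete, here is a minimal RFC 6238 TOTP sketch; the point is that the server-side verifier needs the raw shared secret, symmetric with the client, and cannot store only a hash of it:

```python
import hashlib
import hmac

# RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated.
def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 TOTP: HOTP with a time-step counter. Note `secret` must be
# stored in full on the server -- there is no hashed-verifier option.
def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    return hotp(secret, unix_time // step, digits)
```

The output below matches the RFC 6238 Appendix B test vector for the ASCII seed "12345678901234567890" at T=59.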
As every other login form does. In this situation basic authentication is no different from every other authentication mechanism that uses passwords...
Not necessarily, you can set up TLS client auth with signed certificates, generated locally, that the server never receives the private key of. The server can validate and authenticate without knowledge of the secret, and cannot impersonate the user.
The problem with TLS client certs, of course, is that you need a method of signing CSRs, and the terrible UX modern browsers have for client auth, especially on mobile.
Yes in theory you can but it's a pain, a certificate needs to be installed in the browser, the user gets a popup every time saying that you want to use a certificate, when the user changes computer either needs to backup the certificate or install it again, etc. My bank did that in the past and they no longer support that, instead they opted for a classical multiple factor authentication: password + authorize the access with your phone.
Only partially. If the client and server have an agreement on a hashing protocol, there's no reason the browser shouldn't be able to hash as well and prevent the password from ever leaving memory on the client system. HTTPS is still vulnerable to many man-in-the-middle attacks, and many corporate and business networks do deep packet inspection to decrypt HTTPS (they control the machines, so intercepting cert issuance and installing their own root CA is easily doable). Another issue is that improper logging on the server (or, depending on the implementation, even on intermediate load balancers) could accidentally leak the plaintext password. Client-side hashing is a much better solution to this, and if it were a native browser-supported protocol, it would even work without JS.
What does it matter? If a criminal gains the hash, they can log in and be malicious anyway. If a criminal can do a MitM, they can substitute the Javascript that's hashing your password with all the nonces and salts and peppers you add to it and send the password anyway.
If you just hash the password, the hash becomes the password. You're not solving the problem that way, you're just switching around definitions.
There is an alternative that's supported on most platforms, and that's WebAuthn. No need for a password: the secure handshakes are automatic, and with the right protocols the keys differ per site, so they're very hard to spoof through phishing. You can achieve much of the same thing with client TLS certificates, but the UX for that is even worse than the UX for HTTP Basic.
But the attacker controls the TLS connection already, so they'll just strip out the hashing functionality or send a piece of JS to steal the password directly from the password field.
It's not that I don't understand where you're coming from (I once almost started writing such a library a few years back!), but I just can't think of a threat model where this makes sense. That's also what moved me away from working on such a system.
To me, this approach feels like an attempt to recreate software-only U2F, but outside the browser. I don't think client side code can fix these problems. It can make stealing passwords more difficult for criminals if every website uses their own bespoke password processing script, but that'll also add a huge attack surface to your code and it'll be a burden to maintain.
I don't know why people are assuming the attacker:
A) Controls the code running the auth API
B) Controls the javascript
As if that's the common attack.
The common attack is that the attacker has a read on the hash, either through injection vulnerabilities or other leaks.
Your attacker, as described, has remote code execution on a server that hosts the auth API and the Javascript in one place. That is a very specific, powerful attacker!
> but I just can't think of a threat model where this makes sense
1. An attacker has CSRF and can trick your code into sending the hash to them. This significantly reduces the harm of that attack.
2. An attacker has an injection vulnerability giving them read access to hashes ie: sql injection, perhaps the most significant and relevant attack to discuss with passwords.
3. An attacker takes advantage of a timing attack, bruteforcing the server and measuring response times to leak its value. They can now only leak the hashed value, which is going to take way longer to leak and doesn't expose their password.
And more!
This defense tackles what are very very arguably the major threats to consider with regards to password security (they're the big ones in OWASP top 10).
> It can make stealing passwords more difficult for criminals if every website uses their own bespoke password processing script, but that'll also add a huge attack surface to your code and it'll be a burden to maintain.
You can significantly reduce harm and attack surface with all of 3 lines in your frontend code.
const salt = await crypto.subtle.digest("SHA-256", enc.encode(username + STATIC_SALT));
const key = await crypto.subtle.importKey("raw", enc.encode(password), "PBKDF2", false, ["deriveBits"]);
const hash = await crypto.subtle.deriveBits({name: "PBKDF2", hash: "SHA-256", salt, iterations: 600000}, key, 256);
It's trivial to implement this (sketched here with WebCrypto; enc is a TextEncoder and STATIC_SALT an app-wide constant).
I'll grant you that PAKEs may be more complex to implement today, but even if we talk about a very basic implementation of client side hashing I think there's obvious value.
The attacker I describe has MITM access. The attacker can read and modify page contents, but nothing on the server side. This can only be achieved by malware on the client side or tomfoolery with certificates, so it's not a common attack for sure. But breaking basic passwords sent over a secure channel isn't a common attack in general.
If you presume the attacker can only read the data transmitted but cannot alter it, your system might work, but I'm not sure in what scenario a hacker can break HTTPS secrecy without also being able to modify the contents of traffic over the wire.
A CSRF vulnerability won't let you send the password to a random host, unless you have full arbitrary code execution (in which case your protections don't make sense either) or if your auth code is unrealistically buggy (letting the attacker embed secrets in a resource somehow).
The database already contains hashes for normal password auth, so I'm not sure why your system would be any better. The password database isn't stored client side, after all, and I hope nobody is still storing passwords in plaintext.
I'm not sure why an attacker would be able to guess the hash from a timing attack, if they can do that then the hashing implementation is very flawed, to the point you just shouldn't be hashing passwords with it.
Your custom salt/hashing system solves password reuse I suppose, but it doesn't add any protections to your website while adding complexity at your cost. For your website, you just changed the way the password looks (which is the hash, not the direct input) at the cost of needing Javascript execution.
In my opinion, your login page would be a lot more secure with a CSP that disallows all scripting, just in case, and uses a simple system that's easy to spot mistakes in, like HTTPS POST or Basic auth.
> The attacker I describe has MITM access. The attacker can read and modify page contents, but nothing on the server side. This can only be achieved by malware on the client side or tomfoolery with certificates, so it's not a common attack for sure. But breaking basic passwords sent over a secure channel isn't a common attack in general.
I don't think that attacker is worth dealing with. At that point you're outside the security responsibilities of a website, and it's the operating system's job to provide security.
> But breaking basic passwords sent over a secure channel isn't a common attack in general.
Isn't it? Lots of passwords are sent over TLS but SQL injection is still a top 10 vulnerability.
> a hacker can break HTTPS secrecy without also being able to modify the contents of traffic over the wire.
I'm not implying that they can. I'm implying they can read the hashed values after being transmitted to the server.
So attacker in the following positions, at least:
1. Owns your auth endpoint (but not your CDN)
2. Owns your proxy
3. Has read access to your logs and the password is in those logs
4. Has a SQL injection or timing attack against your password auth/ db
> A CSRF vulnerability won't let you send the password to a random host, unless you have full arbitrary code execution
Yeah true, I was thinking about the auth token, not the password.
> I'm not sure why an attacker would be able to guess the hash from a timing attack, if they can do that then the hashing implementation is very flawed,
Timing attacks aren't a property of the hash but of the operations on that hash.
> The database already contains hashes for normal password auth, so I'm not sure why your system would be any better.
Do you mean that the database would be doing server side hashing regardless? That's true (I sure hope). But the attacker will have to brute force a much larger space to recover the plaintext password and the point of doing the client side hashing is to protect other sites if a user reuses their password on those sites.
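A sketch of that both-sides arrangement (the iteration counts and the static app salt are illustrative choices, not a standard):

```python
import hashlib
import os

# Client side: slow-hash the password before it ever goes on the wire.
def client_hash(username: str, password: str, rounds: int = 100_000) -> bytes:
    salt = hashlib.sha256((username + "static-app-salt").encode()).digest()
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)

# Server side: hash the received value again before storing it, so a
# database leak exposes neither the password nor the wire value.
def server_record(wire_value: bytes, server_salt: bytes, rounds: int = 10_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", wire_value, server_salt, rounds)

wire = client_hash("alice", "correct horse")
record = server_record(wire, os.urandom(16))
```

An attacker who leaks `record` still has to pay the client-side KDF cost per password guess, and a recovered `wire` value is useless on other sites that salt with a different username/app constant.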
> Your custom salt/hashing system solves password reuse I suppose
To be clear, that's the point, and I don't think that's small. It's about reducing harm to your users - even if within the scope of your website your user is still vulnerable you are protecting them within the scope of other websites.
I haven't made this argument yet, but I believe it also adds security elsewhere by distributing the cost of your hashing to clients. Your server has to handle N clients, and maybe needs to respond in 5ms to each client - so it can spend, say, 3ms on hashing. To keep your tail latencies down you might try to do 1ms of hashing.
But you don't have to worry about your compute if you push it to the client. You can have the client perform, say, 1M rounds of PBKDF2.
So the attacker has two choices.
1. Brute force the client hash, which is 32 bytes and really not feasible. That is to say, in a naive brute force they start with 32 bytes zeroed out and count up until they reach your hash - not fun.
2. Brute force the client password, which involves a huge number of rounds that you could never get away with on your server. 1M rounds of PBKDF2 is not something you'd want to do on your server, but distributed across your clients it's no problem at all - a few hundred milliseconds perhaps. And that's devastating to an attacker trying to recover the plaintext - with some caveats, of course (a relatively weak salt).
I haven't put enough thought into this benefit to claim it, but I may as well throw it out there.
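Rough numbers for the two routes, under purely illustrative assumptions:

```python
# Back-of-envelope comparison, not a measurement: route 1 must search the
# raw 32-byte hash space; route 2 pays the client-side KDF cost per guess.
hash_keyspace = 2 ** 256          # route 1: naive search over a 32-byte hash
candidates = 10 ** 8              # route 2: assumed password-guess list size
rounds = 1_000_000                # assumed client-side PBKDF2 iterations
route2_ops = candidates * rounds  # underlying hash invocations for route 2
```

Even the "cheap" route costs on the order of 10^14 hash invocations under these assumptions, and the expensive one is astronomically out of reach.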
> but it doesn't add any protections to your website while adding complexity at your cost.
It's very little complexity at very little cost. It's a few lines of code that execute at the edge - you pay nothing for those cycles.
> at the cost of needing Javascript execution.
I don't consider this a cost. I think it's totally ridiculous to say that JS execution as a requirement is a "cost" - the vast majority of websites require JS, and there are plenty of good reasons for it (like telling your user their password is too short). If you care so much about JS as a cost, ok, don't do client side hashing.
> In my opinion, your login page would be a lot more secure with a CSP that disallows all scripting, just in case, and uses a simple system that's easy to spot mistakes in, like HTTPS POST or Basic auth.
I disagree. A CSP would be an excellent thing to implement, and everyone should do so. But I don't think that completely denying script execution is a good idea - your users are far more vulnerable to using weak and/or reused passwords than XSS on a site with the minimal scripting necessary to implement this code (no 3rd party packages are required for the code I mentioned).
Your suggestion is basically going back to the days when plaintext passwords were stored in the database, except that the plaintext happens to be a hash.
> If the client and server have an agreement on a hashing protocol, there’s no reason that the browser shouldn’t be able to hash as well and prevent the password from ever leaving memory on the client system.
Shouldn’t it be a proper challenge/response? Otherwise the hash is barely better than the password.
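A naive challenge/response over the client-side hash might look like this (the flow and parameters are illustrative, and the caveat stands: the stored hash is still password-equivalent to anyone who steals it from the server):

```python
import hashlib
import hmac
import secrets

# The server stores the client-side hash at registration and issues a fresh
# nonce per login; the stored hash itself never crosses the wire again.
def respond(client_hash: bytes, nonce: bytes) -> str:
    return hmac.new(client_hash, nonce, hashlib.sha256).hexdigest()

registered = hashlib.pbkdf2_hmac("sha256", b"hunter2", b"per-user-salt", 100_000)
nonce = secrets.token_bytes(16)      # server -> client, fresh each login
answer = respond(registered, nonce)  # client recomputes its hash and answers
```

Fixing that remaining caveat properly is what pushes designs toward a PAKE.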
There are answers to this. I asked myself the same question a few years ago, and it led recursively to new tricky questions. I eventually worked through the whole tree of them: I designed myself into a solution that I later realized was essentially an asymmetric PAKE (e.g., analogous to OPAQUE).
So, an attempt to solve your question eventually leads to using an asymmetric PAKE. But this is way, way more complex. You'd really have to squint to believe that the diminishing returns are worth it.
I don't really see it simplifying a lot of the internet. Everyone will make their own custom login page anyway; you need to add links to registration, password recovery, and so on. And sending the password on every request, hashed or not, is just bad security - you need a session token anyway.
I think second party controls are probably the reason why browser intermediated login was never pursued until the present era of password management--the browser is a third party that can facilitate/intermediate communication between the first party (user) and second party (website). It would be foolish for a website operator to hand their users over to Microsoft back in the day, just as "social login" is a convenience/optimization trap today.
Don’t even stop there. Instead of typing in a username and password, have my browser give me a drop down with my identities so I can use just one of them. Then have the browser sync my (encrypted with my pass phrase) identities across all my browsers/devices. No more passwords.
> Add a button to log out. Logout never really worked across browsers with basic auth.
In my experience, the way to logout is to close the browser. Is this standardized anywhere?
> Allow to inject a logo or a tiny bit of customization for branding. The default popup looks too ugly.
I think that this would be nice, but most anyone who wants customization will want to control everything, and wouldn't be a fit for the limits of basic auth.
> Improve the Digest auth to modern crypto standards. Stop passing plain passwords over the wire.
As others have mentioned, TLS helps with this, but I agree that it would be a good idea to hash it anyway.
Have you thought about submitting these improvements to chromium and firefox (as feature requests)?
Perhaps it would be even more useful if so-called "modern" browsers implemented SRP support. Some would agree SRP is better than HTTP Basic Authentication.
SRP allows for TLS without server (or client) certificates. The presence of the verifier on the server provides authentication. No passwords are sent over the wire.
The conventional use of SRP is as a replacement for weak passwords (e.g., HTTP Authentication), but what would stop SRP from being used, in some cases, as a replacement for server certificates even where a website is public (no password required)? With SRP, the user becomes the one who is in control of "trust", not a third-party certificate issuer or browser vendor. The user decides whether to send a verifier to a website operator.
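For a feel of the mechanics, here is a toy SRP-6-style exchange. The modulus, multiplier, and hash-to-integer helper are simplified illustrations (not the RFC 5054 parameters), but the algebra shows how both sides reach the same secret with only the verifier on the server:

```python
import hashlib
import secrets

N = 2 ** 255 - 19        # toy prime modulus (illustrative, not RFC 5054)
g, k = 2, 3              # generator and SRP-6 multiplier

def H(*args) -> int:     # simplified hash-to-integer helper
    return int.from_bytes(
        hashlib.sha256(b"|".join(str(a).encode() for a in args)).digest(), "big")

# Registration: the client sends only (salt, v); the password never leaves it.
salt = secrets.token_hex(8)
x = H(salt, "alice", "correct horse")
v = pow(g, x, N)                               # verifier held by the server

# Login: both sides exchange ephemerals and derive the same shared secret.
a = secrets.randbelow(N)
A = pow(g, a, N)                               # client -> server
b = secrets.randbelow(N)
B = (k * v + pow(g, b, N)) % N                 # server -> client
u = H(A, B)
S_client = pow((B - k * pow(g, x, N)) % N, a + u * x, N)
S_server = pow((A * pow(v, u, N)) % N, b, N)
```

Both S values equal g^(b(a+ux)) mod N, so the password never crosses the wire and the server's stored verifier cannot be replayed as a password.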
Imagine an HTTP Basic Auth workflow with an additional "Token: ..." header based on TOTP. That would make things so much better compared to the current situation.
The stuff we see nowadays is mostly hacks that upgrade legacy systems with things like "the password is actually your-password#token" and "oh yeah, if you use # in your password, it crashes the server ... so don't do that".
Something like a standardized HTTP-based authentication workflow (Basic Auth plus maybe parts of the WebAuthn spec) could make things so much easier in regards to maintainability. Then we could finally get rid of stupid workarounds like JWT, which wasn't designed for this purpose.
That wouldn't be basic authentication any more, which is by definition username+password. HTTP already supports other authentication types besides basic auth, such as digest and bearer authentication. Bearer is rather close to what you describe.
Digest auth is fundamentally less secure, as it requires the server to have access to the plain text password, whereas with basic auth you can store it salted and hashed. And you should be using HTTPS regardless.
And most likely won't, because browser vendors seem to be extremely reluctant to do anything but deprecate all those standard UIs in favor of messed up JS APIs.
Is there any chance to go even further than this? I'm imagining a public-key-based authentication scheme.
The user submits their public key to the server first; then in future logins the server generates a challenge for the client to decrypt and respond to.
Of course the browser can apply some UX magic at the client end, for example displaying a popup window to allow the user to select a public key for the authentication process, etc.
Yes, but the advantage is that users send their own public keys and can switch them freely (and preferably easily: the user clicks "login", a window pops up, the user selects a public key, done) at will. Client certificates, by contrast, are currently managed fully by the browser, and you need to adjust your HTTPS infrastructure in order to enable the feature.
Another thing that's missing is multiple credentials stored by the browser. With HTML login forms, browsers can offer a choice of autocompletion among multiple stored credentials. With HTTP Basic auth it doesn't work: it only offers the first one, and you have to type both the login and the password manually for other accounts, even if your browser knows about them.
Upon account creation or reset, can't the user select a password, have it hashed into the private key of a keypair, and send just the public key to the server? Then when logging in they just sign a challenge on the client. The only thing stored on the server is the user's public key.
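One way to sketch that idea, using a toy Schnorr-style signature over a password-derived key (the parameters and the hash-as-KDF are illustrative only; a real design would use a proper password KDF and a standard curve, as WebAuthn does):

```python
import hashlib
import secrets

p, g = 2 ** 255 - 19, 2   # toy prime-field parameters, illustration only

def H(*args) -> int:      # simplified hash-to-integer helper
    return int.from_bytes(
        hashlib.sha256(b"|".join(str(a).encode() for a in args)).digest(), "big")

# Account creation: the keypair is derived from the password;
# the server stores only `pub`.
priv = H("per-user-salt", "hunter2")
pub = pow(g, priv, p)

# Login: the server sends a random challenge, the client signs it,
# and the server verifies with the public key alone.
challenge = secrets.token_hex(16)
r = secrets.randbelow(p - 1)
R = pow(g, r, p)
s = (r + H(R, challenge) * priv) % (p - 1)
ok = pow(g, s, p) == (R * pow(pub, H(R, challenge), p)) % p
```

The server never learns the password or the private key, though an offline attacker who steals `pub` can still brute-force weak passwords, which is why the KDF step matters.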
The argument about passing a plain password over the wire doesn't make sense: every login form, when you submit it, passes the username and password over the wire! Nobody encrypts passwords client side.
Caddy comes with basic auth support because it's still useful for a lot of use cases.
IMO the biggest weakness of basicauth (when deployed over TLS) is the fact that most server configurations store the passwords in plaintext, usually in a config file. This is like storing passwords in plaintext in a database. Caddy does not allow this. You have to use a secure hash on the password before adding it to your config: https://caddyserver.com/docs/modules/http.authentication.pro...
Of course, password hashes are slow, so KDF'ing a plaintext string at every HTTP request can grind even powerful servers to a halt. So Caddy can optionally cache hash results in memory (we do expect memory to be safer than a config file -- and Go is a memory-safe language in this regard). And while this can introduce nuanced timing variances (fast if recently hashed), they do not necessarily correspond to correct passwords.
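The caching idea can be sketched like this (an illustration of the technique, not Caddy's actual implementation; note the cache key holds a fast digest of the password, not the plaintext):

```python
import hashlib

# In-memory cache of expensive KDF verification results, keyed by user plus
# a fast digest of the attempted password.
_cache: dict = {}

def cached_verify(user: str, password: str, expensive_verify) -> bool:
    key = (user, hashlib.sha256(password.encode()).hexdigest())
    if key not in _cache:
        _cache[key] = expensive_verify(user, password)
    return _cache[key]
```

Repeated requests with the same credentials skip the slow KDF entirely, which is the source of the timing variance mentioned above: "recently seen" is fast, whether or not the password was correct.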
If you think this stuff is interesting and want to help make Caddy's basic auth even better, feel free to contribute or sponsor: https://github.com/caddyserver/caddy
happy new year. Caddy is such a great piece of software. It has great defaults 'out of the box' but at the same time doesn't feel like I'm drowning in incomprehensible magic. We need more software like this!
Caddy is truly amazing. Thanks for that. My tip would be to improve the documentation and add more examples, just to get the hang of what the Caddyfile philosophy is. I've spent literally hours just to come up with a single line in a Caddyfile (though in the end it worked).
I’m confused by this. If you have the username, the hashed password, the salt, and the algorithm used, all present in the config file, then what’s the difference between doing what caddy does here and just having the plaintext password in the config file? Isn’t that just the same thing but with extra (known) steps?
Not trying to be intentionally dense, genuine confusion/question.
If an attacker compromises your stored plaintext passwords, they can test those username+password pairs against other online services, since lots of users reuse passwords. If you store hashes instead of the actual passwords, the attacker doesn't get the user's credentials.
You have to use hash functions appropriate for passwords (like bcrypt; don't use hashes like sha256 for this). And if you hashed correctly, it's still practical for an attacker to brute-force simple/common passwords. But at least users with hard-to-guess passwords get protected from your breach facilitating credential stuffing attacks.
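Sketched in code (the record format and iteration count are illustrative, not any particular server's scheme; stdlib PBKDF2 stands in for bcrypt here):

```python
import hashlib
import hmac
import os

# Produce a self-describing "algorithm$rounds$salt$digest" record, suitable
# for a config file, from which the password cannot be read back.
def hash_password(pw: str, rounds: int = 600_000) -> str:
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, rounds)
    return f"pbkdf2${rounds}${salt.hex()}${dk.hex()}"

def verify(pw: str, stored: str) -> bool:
    _, rounds, salt, dk = stored.split("$")
    cand = hashlib.pbkdf2_hmac("sha256", pw.encode(),
                               bytes.fromhex(salt), int(rounds))
    return hmac.compare_digest(cand.hex(), dk)  # constant-time compare
```

A config file containing only these records leaks nothing directly usable for credential stuffing, though weak passwords remain brute-forceable.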
> It's essentially impossible to reverse the hashed password.
It’s essentially a nontrivial unit of work to brute force the hash (or exploit collisions for a compromised hashing algo). That’s not impossible but it’s important not to overstate its safety too.
... for one account. It takes N times as many resources to do that for N accounts. The attacks are impractically expensive for most attackers.
It means that if someone sees your config file, they can't log into your sites and impersonate users (unless they have a LOT of money/resources and can reverse secure hashes -- one account at a time). It makes attacks impractically expensive.
You can do either. To Caddy, "entire domain" or "specific endpoints" are all the same thanks to request matchers. You can precisely customize which requests have basic auth applied to them: https://caddyserver.com/docs/caddyfile/matchers
It can be used for either. Depends what you'd like to do. Maybe the domain is admin.mysite.com and so you want to wall the whole thing off. I've used it for specific endpoints as well though, like to protect certain folders of a file server.
I find HTTP basic auth very useful to "protect" internal resources exposed to the internet (GitLab, wiki, forum, etc.). With a simple apache/nginx config it is possible to effectively hide those from the internet and, in addition to their built-in (LDAP-based) authentication, have a fence reliable enough to guard against zero-day vulnerabilities in these popular web applications. Having them as sub-folders of a single web domain (with proper rewrites added to apache/nginx) avoids numerous HTTP basic auth prompts, so authentication is required just once after the browser has been restarted.
I do the same, e.g. to expose static resources like API docs in an S3 bucket to the world (you can configure CloudFront to check Basic Auth). However, at some point you run into issues (doesn't work well with password managers, how do new team members learn the password, no easy way to rotate passwords when offboarding, etc.).
Now I want to upgrade to a proper OAuth wall. Some server needs to act as a reverse proxy that has permission to access the private resource but checks your identity as a, say, Google Apps user.
Assuming a private bucket on S3, what's the easiest way to accomplish this today?
I used vouch-proxy with nginx for something like this.
The nginx auth_request module authenticates each request against vouch-proxy before it executes the proxy_pass. Vouch-proxy can be configured to authenticate users against google apps or other oauth/iodc providers. And there are some options to pass along username, groups or other data as headers to the proxied service.
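The wiring might look roughly like this (hostnames and ports are hypothetical; the /validate path follows vouch-proxy's convention):

```nginx
location / {
    auth_request /validate;              # subrequest to vouch-proxy first
    proxy_pass http://backend:8080;      # hypothetical upstream service
}

location = /validate {
    internal;                            # not reachable from outside
    proxy_pass http://vouch:9090/validate;
    proxy_set_header Host $host;
    proxy_pass_request_body off;         # auth subrequests carry no body
    proxy_set_header Content-Length "";
}
```

A 2xx from the subrequest lets the original request through; a 401/403 is returned to the client, which is where vouch-proxy redirects to the OAuth/OIDC provider.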
> doesn't work well with password managers, how do new team members learn the password, no easy way to rotate passwords when offboarding
This is where LDAP and similar are really strong. Unfortunately a lot of companies know that and charge big bucks for this simple feature, often hiding it behind "enterprise" subscriptions where you need to contact them for pricing.
It's also the reason why companies love Exchange and the rest of Microsoft's ecosystem.
Technically, as far as I understand, client-side certificates signed and revoked through an internal CA solve the same problems.
However, I have yet to encounter such a setup used in a professional environment for humans. Is the complexity of that approach just too high compared to LDAP and passwords?
Normal users (including some people with graduate degrees in computer science) can't manage client-side keys or certificates, as anyone who has ever had to support users using ssh key authentication knows. So then you have to provide functionality to do this for them in a foolproof and secure way, which is a big bite to chew.
Yeah, I figured the end-user support would fall under the abstract "complexity" I mentioned in my original comment. I can build a hotrod in my garage but nobody sane will use it for public transportation.
That's the thing though - I don't want the complexity of Exchange, LDAP, etc. Complexity kills. This is a simple problem calling for a straightforward solution.
Maybe combining Lambda integration with CloudFront (CF)?
You could intercept every HTTP request before it reaches CF, check the auth data, and decide to let it through or respond with a 401 right away. The CF auth password could be kept as an internal secret. You rotate temporary passwords via Lambda environment variables (a bit insecure) or AWS Secrets Manager (much safer).
Requests successfully authenticated at the Lambda level get rewritten with the master CF password so they succeed there.
It's a lot more trouble than simply setting up basic auth, but you set it up only once and in theory it just works.
I have a few webpages that have to be protected this way, due to an external client that only supports HTTP Auth. I’ve actually been pretty happy with this setup. I have nginx configured to use subauth authentication so I can still have external username/passwords stored in LDAP, etc. It’s been a surprisingly good combination.
A nice alternative to a VPN (especially when you don't have root to set one up) is an ssh tunnel: `ssh user@server -D 1234`, then point Firefox at a SOCKS proxy on 127.0.0.1:1234 (e.g. using the FoxyProxy addon, though it's possible without it via Firefox's network settings). All Firefox traffic then exits on the server side (including DNS requests).
It's great for keeping crawler bots out, and easy enough for humans to get past.
Once a user logs in, I set a cookie, and the user is not prompted for the auth again.
The beautiful thing about this scheme is that the cookie is always sent, so I can create a rule which bypasses auth when the cookie is present.
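In nginx, one way to express that bypass rule is mapping the cookie to the `auth_basic` realm, using the special value `off` to disable the check (a sketch; cookie name and value are illustrative, and it's worth verifying on your nginx version that a realm variable evaluating to `off` disables the check):

```nginx
# Skip Basic Auth when the previously-set cookie is present.
map $cookie_seen_auth $auth_realm {
    default "Restricted";   # no cookie: challenge with Basic Auth
    "1"     off;            # cookie present: auth disabled
}
server {
    location / {
        auth_basic           $auth_realm;
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:8080;
    }
}
```

Something behind the auth wall still has to set the cookie on first successful login.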
Basic Auth is one of the most supported features of HTTP, supported even by Mosaic. There's one Chrome release, I think 65.x, which screws it up when used together with gzip and requires a page reload after authenticating, but that's the only exception I know.
No need, I just angrily redirect users to FF52 (Firefox tends to be the last major browser maintained for older systems anyways). Plus, a minimum of TLS 1.2 with specific PFS ciphers is now required, nuking even the Windows Embedded POS version of Windows XP for Chrome (Firefox brings its own cryptographic libraries).
This is my preferred use of basic auth: just set the message to “enter anything for username and password” and accept anything. Search crawlers won’t be able to index the site, which protects me from haters finding my content, and RSS feed URLs can just hardcode some u/p without consequence. It’s all the upsides of the modern web without any of the harmful behaviors enabled by global search engines.
>Search crawlers won’t be able to index the site, which protects me from haters finding my content, and RSS feed URLs can just hardcode some u/p without consequence.
Sadly, I've seen at least one exception to this rule. Somehow, a search engine crawler wised up to my admin/admin captcha, and I had to change it.
> The beautiful thing about this scheme is that the cookie is always sent, so I can create a rule which bypasses auth when the cookie is present.
You don't even need the basic auth for that.
Years ago I needed to expose my pfsense WebGUI on the default HTTPS, but I didn't want it to be so obvious, so I made a couple of HAProxy rules, which allowed me to open https://pfsense.tld/open-sesame to set a cookie, after which I could open the default https://pfsense.tld/ just fine and see the WebGUI. Without the cookie there was just 404 for everyone.
It wasn't the best implementation (and some parts of the WebGUI didn't like it), but it worked and let me access it even on my smartphone.
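In HAProxy terms the idea looks roughly like this (a sketch; directive syntax from HAProxy 2.x, cookie name and value illustrative):

```haproxy
frontend fe_https
    bind :443 ssl crt /etc/haproxy/site.pem
    # Visiting the magic path sets the cookie and bounces back to /
    http-request redirect location / set-cookie sesame=opensaysme if { path /open-sesame }
    # Everything else is a 404 unless the cookie is present
    http-request deny deny_status 404 unless { req.cook(sesame) -m str opensaysme }
    default_backend be_webgui
```

The redirect rule fires first for the magic path, so it never hits the deny rule.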
Embracing security through obscurity like that is also how I decided to help protect my password manager, Vaultwarden. It's open to the internet on 80/443, but its URL is `subdomain.domain.tld/some-secret-path/`. It's dead simple, but indeed no unwanted visitors even see that site. Of course, even if they did, the regular login prompt with MFA appears.
It doesn't seem so. Clicking a random link from inside the Vaultwarden webpage (which is never used anyway, in favor of the Bitwarden browser plugins) and following the requests in Firefox's Browser Console, no request has a Referer HTTP request header. Vaultwarden does not send the Referer header cross-origin:
My home server uses Caddy and its JSON logs. These are incredibly easy to parse of course. Through the dynamic DNS solution I use (Docker image qmcgaw/ddns-updater), I have a list of all of my own IP addresses. Add to that others like my work's IPv4 block, and I get a collection of 'known', i.e. harmless IP addresses. Filtering these in a little pandas-based Python tool leaves all requests reaching the secret endpoint. Logs reach back around a week. Another tool 'enriches' each log entry with IP lookup info from ipinfo.io. Their free API tier is enough for my uses. That way, I can filter for request origin countries, hostnames, etc.
The entire pipeline is automated, but triggered manually on-demand. So far, no hits from unknown IPs to the endpoint!
Something I’m practicing more often is to keep something dead simple and then later adopt a library or some higher abstraction when necessary.
For example, just handling a websocket myself before later using some sort of library for it.
In the past my concern was that I’ll have to make breaking changes to some protocol or message structure. But I’m learning now that it’s actually a good thing: it makes me think about how I’ll evolve and grow without the answer being “just preemptively grow it so you don’t have to think about it.”
And naturally I’m discovering that the basic tools go far further than I expect and sometimes I never need to pay for that abstraction at all.
I find a lot of libraries don’t make the underlying systems easier to use per se, they just map a lot of use cases to configuration instead of code. I would rather just use the code hiding behind that static configuration.
After years and years of production experience with thousands of mobile robots that use this weird xml + yaml as configuration, I am 100% in the “just use code as configuration” camp.
Even with interpreted languages - if your configuration is in code you still need to wait for a code review, wait for CI to pass, and wait for the deploy. It can easily take over an hour to do a simple configuration change.
Depends how you deploy it. Python can import a file located anywhere you want, either by changing PYTHONPATH or using importlib. And vice versa an xml-config file might be placed in the repo and going through code review.
Code review for config changes is a good practice in either case, see gitops and config as code.
I found that sites (like fb messenger) which block URLs to certain sites can be easily bypassed by using HTTP Basic Auth with empty credentials. I built a small service (https://rot13.akhil.cc) that takes in a rot 13'd URL and redirects it to the original with HTTP Basic Auth. The nice part is that the credentials are cached, so visiting it again won't show the dialog.
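The rot13 half of that is a one-liner in Python; the service itself is basically this plus an HTTP redirect (a sketch, not the actual implementation behind that URL):

```python
import codecs

def decode_rot13_url(encoded: str) -> str:
    """Undo the rot13 applied to a URL before redirecting to it."""
    # rot13 is its own inverse, so the same call encodes and decodes;
    # non-letters (://, dots) pass through unchanged.
    return codecs.decode(encoded, "rot13")

print(decode_rot13_url("uggcf://rknzcyr.pbz"))  # https://example.com
```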
This has many flaws that make it impractical beyond hobby projects or projects with a small set of users:
- It's trickier to throttle credential checks as every request is a login request, effectively
- It's hard to easily build account recovery flows or captchas into the login process
- The UX of logging in/out is browser-dependent and confusing for users
- It can't integrate well with other auth systems
- Sending a password on every request requires care to avoid accidentally logging passwords
- Every request needs to do password validation, putting additional load on the authentication systems/datastores
I wonder if there would be an audience for an HTTP Basic Auth 2.0 spec.
Some of your criticisms aren't exactly accurate though. "Sending a password on every request requires care to avoid accidentally logging passwords" applies to almost every login system ever made. Unless you use browser-based hashing with JavaScript, but that has significant known flaws and doesn't add much security.
And "Every request needs to do password validation, presenting additional load on the authentication systems/datastores" also applies to almost every login system ever made and HTTP Basic Auth doesn't make this better or worse.
>I wonder if there would be an audience for an HTTP Basic Auth 2.0 spec.
Yes! I remember in the early aughts, IE6 would present this cool login screen [0] for what I think (but may be misremembering) was HTTP Basic Auth. I always wanted to do that, but didn't really understand anything other than making basic HTML pages.
It could help improve security. It's a ubiquitous login screen that makes it really obvious which domain is requesting credentials - no need to check if the page looks off to detect possible phishing. Oh, and you wouldn't run into the issue of accidentally logging in on the sign-up page!
I wouldn't bet on it improving security, personally.
If I were a hacker, I could use the User-Agent to know what OS they are using (or close enough). I would also know what browser they are using.
I could use this information to create a custom webpage with a white background and similar imagery to mimic the native browser form. An unsuspecting user might not realize it isn't a separate window, and think they were logging into the correct site.
> And "Every request needs to do password validation, presenting additional load on the authentication systems/datastores" also applies to almost every login system ever made and HTTP Basic Auth doesn't make this better or worse.
Not true. Once you get authenticated you can store that in a cookie with expiration, or any number of other ways to reduce load on auth services.
For 2, in many cases validating username/password requires hitting an external system. If you want to give your IT department a fun day, write a moderately popular internal web service that accidentally hits LDAP freshly for every single web request. It's really easy to accidentally do in some environments. Session cookie validation will typically only involve local resources.
For many of my internal tools I don't even bother with a database and just store it in local RAM, especially when I have no other database involvement, because having all sessions reset every few months during a reboot or restart is worth it to not have to stand up a database solely for that purpose. In that case it's just a quick lock&lookup in a map/dict/whatever your language favors to see if the token matches or not.
A non-trivial difference is that to validate password you need to run it through bcrypt or similar algorithm which is intentionally slow. With a token or cookie, you can use a regular fast hashing algorithm.
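To make that cost gap concrete, here is a stdlib sketch with PBKDF2 standing in for bcrypt (iteration count and credentials are illustrative): password checks re-run the slow KDF every time, while a random token only needs one fast hash plus a constant-time compare.

```python
import hashlib, hmac, os, secrets

# --- Password path: deliberately slow KDF (stand-in for bcrypt/argon2) ---
salt = os.urandom(16)
stored_pw = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 200_000)

def check_password(candidate: bytes) -> bool:
    # Every check pays the full 200k-iteration cost.
    digest = hashlib.pbkdf2_hmac("sha256", candidate, salt, 200_000)
    return hmac.compare_digest(digest, stored_pw)

# --- Token path: a single fast hash is enough, because the token is
# already high-entropy and can't be brute-forced offline. ---
token = secrets.token_bytes(32)
stored_token = hashlib.sha256(token).digest()

def check_token(candidate: bytes) -> bool:
    return hmac.compare_digest(hashlib.sha256(candidate).digest(), stored_token)
```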
Just because something is simple doesn't make it impractical. Not every application requires a complex auth-flow.
>As every request is a login request, effectively
This is the case with basically every single Authentication flow in existence...at the end of the day, no matter how and where credentials are verified, there is a token that has to be sent with every request and verified server side.
>hard to easily build account recovery flows
Why?
> UX of logging in/out is browser-dependent and confusing for users
It's a dialog with username/password fields and one or two buttons, one of which says "Login", the other of which says "Cancel" or something similar. How is this confusing?
Completely agree. My points generally apply a lot more to large scale systems.
> every request is a login request
While it's true that every request has something being verified for authentication purposes, login is a higher-risk activity: usernames/passwords can get harvested in lots of ways, while session cookies etc. are generally harder to steal, meaning there is less risk of an attacker holding auth cookies than seeing a username/password.
> account recovery
The way I've seen browsers implement password auth generally blocks interacting with the rest of the page. While a normal login form might have a "forgot password / email" link, with basic auth the user is stuck with a modal that the web site owner has no control over and cannot build such affordances.
> integration with other auth systems
In general, other auth systems take a set of credentials and then issue a token that can be used for further authentication. Basic Auth's design is that the same credential is used for every request. I guess you could build a hybrid auth system that can accept either cookies/headers from an alternative system, or basic auth and just have some sort of rules for dealing with what happens if both are present, but at that point why not just use a normal login page if you're already dealing with support for session tokens of some sort?
> username/password can get harvested in lots of ways,
BasicAuth isn't more at risk of this than other methods however. Unless a website doesn't use HTTPS, but if that's the case, all talk about security is out the window anyway.
> The way I've seen browsers implement password auth generally blocks interacting with the rest of the page.
BasicAuth challenge -> wrong password -> server replies with 200 + a "Did you forget your password? Click here..." page instead of a 401. There, password recovery system implemented using BasicAuth.
My comment about username/password being harvested was talking about how someone's password can get stolen: you can have malware, password reuse across sites, phishing, or other social engineering. For session cookies, you have basically malware as the compromise vector. Hence, passwords should be treated with more suspicion by an authentication system.
They don't, though. Users don't reuse session cookies between sites, so another site compromised doesn't mean you have to worry about existing sessions being compromised on your site. Users also don't know their session cookies so are far less likely to go typing them in to a phishing site or hand them out over the phone. A password is vulnerable to all of these scenarios.
Probably true. I’m not suggesting it for your enterprise project. I’m just trying to remove personal roadblocks so I can test an MVP before I invest too much development time.
Yes, I did. Hence why I pointed out one of the characteristics that made it work fine in the scenario mentioned (that it has a relatively small set of users that can be trained), and many of the concerns I mentioned are not particularly applicable to the situation which the OP presented. I was making the point that the OP's success is not necessarily transferable to large scale applications.
Wouldn't you need to do the substantial workload of PBKDF2 (et al.) for each and every HTTP request? Maybe have the images, JS, CSS etc. handled by a separate server, but there are still going to be many requests that need to be authenticated.
Yes, if you use PBKDF2 for the hashing and you do it with every request.
In my template I’m using bcrypt and I’m doing it with every request. That might not scale well.
One commenter suggested storing a session token, probably one that expires, and not authenticating if that exists.
But, I’m using this for MVP’s to see if they’ll gain any kind of traction. Once a tool gets the kind of traction where this will be a problem it will be time to rethink the auth method.
Why would you need to index it by the clear-text password? You don't need to retrieve the clear-text password, it doesn't have to be stored in the cache at all.
The cache itself would be in the memory of a process which had access to cleartext passwords anyway, so presumably an exploit which allowed you to read the cache would be damning no matter what.
The parent commenter possibly meant: with an incoming request, sending a password in clear-text, how do you cache it? You hash it, then store the fact that authentication was valid with some expiry.
Once the next request, again with clear-text password, comes in you need to look up its validity. You want to check without hashing, that's the entire point. If you look up whether the hash is validly cached, you gained nothing. Hashing was required. To look up without hashing, you will need to use the clear-text password somehow. Hence it has to be part of the cache somehow, i.e. its index.
If that's all in memory in a memory-safe language, I guess it can be argued that's not unsafer than before, but I'm no expert.
> You want to check without hashing, that's the entire point.
It's not the point. There is no attempt to avoid hashing.
At this moment I'd like to be careful and distinguish between ordinary cryptographic hash functions (SHA-2, BLAKE2, etc) and key derivation functions (PBKDF2, Argon2, etc), even though both are often referred to as "hashing" in this context.
The thing about key derivation functions is that they make brute force attacks harder, because the computational cost of verifying the password is higher. They are typically parameterized so you can adjust verification cost. This is the cost we want to avoid.
However, just because you are not using your ordinary key derivation function with the parameters you normally use, it does not mean the only other option is to store passwords in plain text. You could use a key derivation function with different parameters, an HMAC with an ephemeral secret, or an ordinary cryptographic hash. These are all options, with different security tradeoffs.
Part of the evaluation of key derivation functions and their parameters is the evaluation of the risk that the hashed value is compromised. These values are stored in files or in databases, and there is a correspondingly higher risk of compromise... e.g. through backups. Your conclusions would be different for an in-memory cache which is harder to compromise and contains fewer entries, so you would naturally choose a different way of storing the passwords. Different algorithms or different parameters.
(There are also other interesting solutions around... like using an HSM to MAC the passwords.)
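The "HMAC with an ephemeral secret" option can be sketched in a few lines (names and TTL are illustrative): the cache is keyed by an HMAC of the credentials under a per-process random secret, so the clear-text password is never stored and the cache entries are useless to anyone who obtains them after the process exits.

```python
import hashlib, hmac, os, time

_CACHE_SECRET = os.urandom(32)   # ephemeral: new secret on every restart
_cache = {}                      # HMAC(credentials) -> expiry timestamp
TTL = 300.0                      # seconds a successful check stays cached

def _cache_key(user: str, password: str) -> bytes:
    msg = f"{user}\x00{password}".encode()
    return hmac.new(_CACHE_SECRET, msg, hashlib.sha256).digest()

def check_cached(user: str, password: str, slow_verify) -> bool:
    """slow_verify is the expensive KDF check against the user database."""
    key = _cache_key(user, password)
    if _cache.get(key, 0) > time.monotonic():
        return True                       # recently verified: skip the KDF
    if slow_verify(user, password):
        _cache[key] = time.monotonic() + TTL
        return True
    return False
```

Failed attempts are deliberately not cached, so brute-force attempts still pay the full KDF cost.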
> At this moment I'd like to be careful and distinguish between
between a technicality (and I'm not even sure I agree with it; hashing is still hashing even if you do it a million times) and what the people you're replying to were talking about. Sure, if you ignore what the topic was above, then you can indeed say that nobody was trying to avoid doing hashing.
No idea what point you are trying to make here. Do you not agree with what I’m saying about hashing? Or do you just disagree with the way I said it?
You can cache password validation without storing the clear text password. That’s what I’m saying. If you don’t understand what a key stretching function is, the whole concept makes no sense, so I put a short explanation in.
The server will have a clear-text (leaving the base64 aside) password in the HTTP request and a PBKDF2 (or similar) hash loaded from the user database.
Unless you store the user's password in the clear inside your user database, but then you have other security issues.
I don't know how you got to this being used for an app, but in most cases users will just be using a normal web browser and they will be requesting those resources (perhaps with etag/if-modified-since) on each request.
As a counterpoint, learning the basics of a good OAuth library or service like oauth2-proxy, auth0, next-auth or keycloak is well worth your time as a developer. Once you get over the initial learning curve, it’s almost as easy to add to a project as basic auth. You don’t need to reinvent the wheel with each project, building login pages and database code - other people have done the work for you - and you can create a more usable and professional experience for your users.
I like basic auth (header of `Authorization: Basic <base64(username:password)>`) but for persistent session auth I prefer Bearer tokens[1][2] (header of `Authorization: Bearer my_token_here`).
It's easy to generate a hash after the user logs in and then just store it in a cookie. The user doesn't have to store their password locally and I can delete the token from the database to force them to re-login. You can reuse the same token generation infra for APIs too so its pretty dynamic.
I've basically moved away from JWTs in favor of this simpler approach.
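A minimal sketch of that opaque-token approach (names are illustrative; the in-memory dict stands in for a `user_tokens` table). Only a hash of the token is stored server-side, and deleting the row revokes the session, which a self-contained JWT can't do:

```python
import hashlib, secrets

_tokens = {}   # sha256(token) hex -> user_id; stand-in for a DB table

def issue_token(user_id: int) -> str:
    token = secrets.token_urlsafe(32)   # handed to the client as a cookie
    _tokens[hashlib.sha256(token.encode()).hexdigest()] = user_id
    return token

def user_for(token: str):
    """Return the user_id for a valid token, else None."""
    return _tokens.get(hashlib.sha256(token.encode()).hexdigest())

def revoke(token: str) -> None:
    # Deleting the row forces a re-login, unlike a signed JWT.
    _tokens.pop(hashlib.sha256(token.encode()).hexdigest(), None)
```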
I listen to a few cryptography podcasts and cryptography is an interest of mine[1]. Most security engineers agree that JWT has too large a surface area and demands so much care from the implementer that it can lead to security holes[2].
Another reason I like just generating a security token when the user logs in is that I can require a join on the token table for any query, so it's extra secure.
e.g. if fetching "posts" for a "user" and i have 3 tables (users, posts, and user_tokens) i can do an inner join like this (if the `posts` table has `user_id`):
select p.*
from posts p
inner join user_tokens ut using (user_id)
where p.user_id = $1 and ut.token = $2
-- `users` table wasn't needed for this join since am not selecting from it
With JWTs the security check happens once, when you verify the JWT signature, but then security is less of a focus when making queries. It would complicate queries anyway, since you have to make sure the user who owns the JWT actually has access to the data (JWTs handle authentication but not authorization). You still need to hit the database to fetch any data, so the stateless JWT approach isn't really saving me much; I'd rather just bake security into the queries.
[1]: I recommend new book "Real World Cryptography" by David Wong
But if you are rolling your own auth, you still need to create the signup, change password, reset password, confirm account, delete account, etc. pages. What's one more? Given that a login form is <5% of the total work of rolling auth, it seems kind of pointless to use this.
I definitely think doing openid logins is way easier than any home rolled auth scheme.
Delegate responsibility where you can, which includes user authentication. It sucks that Persona died all those years ago, because web browsers really could use an identity system for users to authenticate themselves against sites with their browser accounts.
The problem is then of course getting browsers to cooperatively allow cross sign-in. If messengers are anything to go by, siloed products really do not want to interoperate, particularly, I imagine, Edge and Safari.
Implementing magic email sign in links is more straightforward and secure.
You implement a login route which takes an email address. You symmetrically encrypt the email with a secret key from an environment variable and send a link to /login?secret=<email-ciphertext>. The route handler checks that the ciphertext decrypts to the email; if so, it saves the ciphertext as a cookie and checks that it decrypts to the right email every time auth is required.
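A stdlib sketch of the token half of that scheme. I've swapped the bare symmetric encryption for an HMAC signature, since unauthenticated encryption would let ciphertexts be tampered with and the scheme really depends on integrity, not secrecy; the in-memory SECRET stands in for the environment variable:

```python
import base64, hashlib, hmac, os

SECRET = os.urandom(32)   # stand-in for a key from an environment variable

def make_token(email: str) -> str:
    """Token to embed in the /login?secret=... link."""
    sig = hmac.new(SECRET, email.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(sig + email.encode()).decode()

def verify_token(token: str):
    """Return the email if the token is authentic, else None."""
    raw = base64.urlsafe_b64decode(token.encode())
    sig, email_bytes = raw[:32], raw[32:]   # sha256 digest is 32 bytes
    expected = hmac.new(SECRET, email_bytes, hashlib.sha256).digest()
    return email_bytes.decode() if hmac.compare_digest(sig, expected) else None
```

Note there is no expiry or revocation baked in here; the token is valid as long as the key is.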
So anyone who compromises the cookie can log in as this user forever? There's no expiration or revocation in this protocol. Once you layer on expiration, this is basically sending someone a link with a JWT in the GET request. Or you can hit the DB to check a secret key that's in the email and whether it has expired, but that's worse than a JWT because it requires a DB hit to verify identity, whereas JWTs can be verified without interaction with the issuing system.
This basically happened to me on a site, sans cookie. It sent notifications when customers left reviews. The convenient user experience is, when someone leaves a review, to forward the email with the review to the customer and ask how we can help.
After doing this a few times I realised it had small links in the footer to log in and administrate without a password, using tokenised links. Support had no way to expire these. I had to create and migrate to a new account (which incidentally ended up emailing hundreds of people about reviews they left months and years ago).
Oh yeah of course you must do all those things! Assumed it was obvious that this was only about the steps leading up to giving the JWT token. Wrote it with one hand on my phone :-)
I recently re-rolled a password auth protocol because Auth0 and AWS Cognito were just so goddamn complicated. Ages ago I used Stormpath because it was so simple.
I realize there are many options today for federated logins, 2FA, SMS/phone password resets... I just wanted an old-school password system for my dumbass personal site.
I like the reminder that this is out there and mostly available.
I think many of the comments here are missing the point - they're saying it's useful for small-time projects with one or a few users and not needing to be integrated into a sophisticated infrastructure. No need to worry too much about hashing, logouts, the full chain of account management, etc. Use a full-featured solution if you need that, keep Basic Auth for a thing with a few admin pages where anything unusual gets handled over SSH.
This also reminds me - Gemini uses client certs for a similar purpose. Gemini doesn't seem to have much going on, but it does make a good case for using client certs better. Right now, they're technically supported, but the UI for both browsers and server support is really clunky. Build a decent UI on both sides, and it could be a nice simple solution for higher-security authentication.
My big problem with HTTP basic auth in 2022 (squints at Netgear router) is that in Edge/Chrome 1Password can’t auto complete it. It gets really annoying really quickly.
That's actually a gripe I have with Bitwarden, because you can't turn that feature off. If an attacker can take over a single endpoint, Bitwarden will happily send your credentials to an iframe from a malvertiser without ever telling you.
It's a fine feature and the WebExtension API won't let them solve basic auth in any other way, but it's a security risk in my opinion. I'd much rather see browsers provide an API to HTTP Basic auth prompts instead so the user can select an identity from the list if they've got a saved username/password combo that matches a given set of requirements.
You can use the one proxy for entire subdomains / unlimited number of apps. No plaintext passwords, scales well, open source, industry standard, and you get SSO for free. Hell, you don't even have to manage accounts! It makes your life simpler and it's more secure. You can't say that often.
the reasons you provide for using it are really strong, and I think it's a much superior option to prototyping leveraging oauth 3rd parties (which has tons of downsides).
It would be good to have a hard rule for when you migrate, and what the triggers are (the project has more than N users, you store some field X that's sensitive, etc.).
Going this route says to the consulting client: this is NOT the full solution. This is only work-in-progress. If you treat this as the final delivery and deploy this to prod, you're making a big mistake.
HTTP basic auth RFC is 14 pages. The OAuth2 RFC is 76 pages, and that's only the framework. Perhaps we have a different understanding of what "simple" means...
I have always wondered how feasible it would be to use the basic auth URL syntax for things like API tokens and email validation. Ideally these values would not be logged, but if you want a user to click them they basically need to be in the URL; so either you don't log URLs, or you need to redact all of these parameters. Most systems are smart enough to avoid logging passwords, so you could just use URLs like https://randomtoken@example.com/confirm-email. However, browsers usually put up scary warnings, and most email providers consider this a spammy/scammy pattern.
Of course another problem here is that you just want it for one URL, whereas basic auth is usually preserved for the session.
The problem is not with small projects; who cares if those get hacked. The problem is with real-life SSO. If you use 100 apps and one gets hacked, then you have to change the other 99 passwords. Sounds bad, right? That is why you should not use basic auth, and should use OpenID Connect or similar tech.
Sometimes it is a bank, with no SSO involved. It is not well understood, but SSL has termination points (the gateway, and each hop on through to the last layers), and traffic becomes vulnerable at those termination points. If basic auth is used there, you've just lost a whole lot of cash. Otherwise use challenge-response, passwords encrypted within the SSL session, and all kinds of out-of-this-world tech for risk assessment. Basic auth? Really?
HTTP Basic Auth was deprecated a long time ago, back when HTTPS was a very expensive exotic thing that only large sites used. I'm not sure it would be worth deprecating freshly today if it came up, now that anyone who wants HTTPS can get it easily. It may not be the perfect solution to everything but it does fill in a nice space, even in its current form. If browsers would just pay a bit more attention to it it could become very nice; most of its worst quirks are technically in the browsers, not the protocol.
Basic Auth has become increasingly difficult to use as a generic way to protect a web app. Many apps these days have endpoints that respond with 401 when a user isn't logged in, which if you're fronting the application with basic auth, results in the user being "logged out" as the browser thinks the credentials are no longer valid.
The hijacking of HTTP status codes by client-side apps wanting to interpret them in their own way makes me think we need a new range of codes for user-defined statuses.
With Firefox, from the Preferences, select only "Active Logins" in the Clear History dialog [1]. Very odd that it belongs to history, and far from ideal. However it's not something a lot of people use nowadays, so I'm not surprised that they swept it under the carpet.
I use Basic Authentication for a few web services running on a server of mine and that I'm the only user of. It saves me a lot of work. I set the user and password in a file on the server and let nginx deal with it.
Last time I used HTTP basic auth, we had a page which would unconditionally send a 401 status code: it would invalidate the credentials cache (effectively logging out the user) and then pop up a new credentials dialog. I am not sure how standards-compliant this was, but it seemed to work in most browsers.
One downside is that in the trivial implementation (a /logout page returning 401), you end up with a credential prompt where no credentials work, which can be pretty confusing if someone leaves the computer in that state. I think it can be worked around with an "expiration" URL parameter or a cookie, but this is somewhat finicky to set up.
While your point is correct that it's not designed with logout in mind, you can do it: for instance, a link that sends invalid credentials will get a 401 back and work like a logout.
Mostly I just close my browser, which clears everything in my setup.
People could have many open tabs with work in them. Maybe they would have to log in again in some of them; others could come back in a different state, especially with SPAs.
Great question. This is a downside: there is no built-in method, but there are some tricks. If I remember correctly, you can send headers with incorrect credentials. I'll have to do some research and add that to my template.
If I remember this correctly, you send a request with incorrect credentials, either via javascript or by sending the user to a "/logout" endpoint where you disallow any authentication credentials, causing the browser to prompt the user for a new username and password. This is not great in terms of UX, and depending on something implicit for something as important as logging out does not seem great for security.
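A sketch of that "/logout always returns 401" trick using only the Python standard library (the realm name and routes are made up for illustration; as noted above, how browsers react to this varies):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class LogoutHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/logout":
            # Answer 401 no matter what credentials were sent, so the
            # browser discards its cached Basic Auth pair and re-prompts
            # on the next request to the protected realm.
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="app"')
            self.end_headers()
            self.wfile.write(b"Logged out.")
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet
```

Run it with `HTTPServer(("", 8000), LogoutHandler).serve_forever()` and visit /logout; the next navigation to any protected page triggers a fresh credentials prompt.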
Why browsers do not have better functionality built in for authentication is something that's always been a bit baffling...
I still use basic auth as a quick and dirty way to protect a link when emailing files to friends that are still on gmail. It keeps the Google bots off my files.
FYI, Python can serve HTTP basic auth straight out of the box!
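Well, almost out of the box — `http.server` has no built-in auth hook, but a few lines of subclassing get you there. A sketch (the credentials are made up):

```python
import base64
import hmac
from http.server import SimpleHTTPRequestHandler

USER, PASSWORD = "alice", "s3cret"  # made-up credentials for the demo
EXPECTED = "Basic " + base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()

class AuthHandler(SimpleHTTPRequestHandler):
    """Serves the current directory, but only with valid Basic Auth."""

    def do_GET(self):
        sent = self.headers.get("Authorization", "")
        # compare_digest avoids leaking the match length via timing
        if hmac.compare_digest(sent, EXPECTED):
            super().do_GET()
        else:
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="files"')
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet
```

Start it with `HTTPServer(("", 8000), AuthHandler).serve_forever()`. Note the password sits in plain text in the script, so this is only for the quick-and-dirty use cases discussed here.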
I wrote this tiny wrapper Sauth[1] that has been a real life saver for delivering minor WIP stuff to clients who are not in the whole cloud ecosystem - just type in url and put in your credentials to see the latest files.
I used to use Basic Auth for small personal/internal services, but these days I prefer a proxy like Traefik/NGINX with a simple authentication service like Authelia or Traefik Forward Auth.
It’s a little more work to set up, but not much, and there are a bunch of benefits: SSO across multiple services, control over session expiration, works better with password managers, 2FA, etc.
It seems Firefox caches bad basic auth creds. It's so annoying, because there are several sites where I can only do a basic auth login from a private window. If I go there in a regular window, I just get the invalid authentication error and no option to enter a new username/password. It isn't clear where this is cached either, so it's very difficult to clear.
I don't recall seeing a lot of good info on how to use HTTP Basic Auth back in the day, to the extent that in 1998 I wrote my first technical how-to to post on my own website. It still lives on here:
This is great for small projects and services that have just a few users and an easy way to change passwords and create/delete accounts directly in the backend. It’s secure (although you must use HTTPS) and is very easy and fast to use, especially if you have a password manager that automatically fills out the login form.
I just use it once, to generate an API key, then use the key, and a server secret (in the basic auth header), after that. The key could be generated by something like Sign In With Apple.
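One way to read that scheme (the names and the HMAC construction are my assumptions, not necessarily the parent's exact design): hand out a random key once, sign it with a secret only the server holds, and have the client present key:signature as the Basic Auth pair, so the server can verify statelessly.

```python
import base64
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # kept only on the server

def issue_api_key() -> tuple[str, str]:
    """Done once (e.g. after Sign In With Apple); client stores both parts."""
    key = secrets.token_urlsafe(16)
    sig = hmac.new(SERVER_SECRET, key.encode(), hashlib.sha256).hexdigest()
    return key, sig

def basic_header(key: str, sig: str) -> str:
    """Basic Auth header carrying key:signature instead of a password."""
    return "Basic " + base64.b64encode(f"{key}:{sig}".encode()).decode()

def verify(header: str) -> bool:
    """Stateless check: recompute the HMAC and compare in constant time."""
    try:
        payload = base64.b64decode(header.split(" ", 1)[1]).decode()
        key, sig = payload.split(":", 1)
    except Exception:
        return False  # malformed header
    good = hmac.new(SERVER_SECRET, key.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig)
```

Not Fort Knox either, but the key is revocable and the user's real password never rides along on every request.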
Not Fort Knox, but fairly good.
Because of FastCGI, I have to have an option to pass login creds in the URL, but that is not by default.
What is authentication for? What is it protecting?
Sometimes the effort and expense protecting a thing is more than the value of the thing protected. Often there is nothing really to protect, "authentication" is just administrative convenience.
The primary flaw isn't BASIC AUTH. It's the password itself. Brute force attacks are easy. The only good way of securing anything is through an Authenticator app.
You can still have the server trigger an authentication confirmation when using basic auth. Web servers not coming with authentication apps is not a flaw, it is a completely separate component and scope.
You can use a slow validation function (bcrypt) to naturally rate limit login attempts, or log login attempts somewhere and do it based on that in the code layer. Most login schemes will have at least a basic database with a users table so it isn't much extra effort to check during validation.
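bcrypt needs a third-party package in Python; a standard-library stand-in with the same rate-limiting effect is `hashlib.scrypt`, whose cost parameters make each guess deliberately expensive (the parameters below are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import secrets

# Cost parameters: ~16 MiB of memory and noticeable CPU per call,
# which throttles online brute force by itself.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store in the users table."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the slow hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, digest)
```

With basic auth the credentials arrive on every request, so in practice you'd also cache the verification result per session or connection rather than paying the scrypt cost each time.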
I don't know if it really holds up to scrutiny, but I use http basic auth for a semi-public website. It has members (like less than twenty) that I don't know personally other than through the website, but I don't want the website public because I don't want to deal with TOC, GDPR, legal stuff in general. Once you're past basic auth, it actually does have a login/registration flow, but it seems like I'd need to see a lawyer if I ever wanted to make a member-driven website public and I just haven't gotten around to it yet, so... http basic auth. Is this sound thinking? I don't know.
The GDPR/cookie rules are just the same common courtesy that all projects I've been involved with have always had: no ad trackers, and no personal data stored about what I did on the site unless necessary for the function provided. There is no exemption for personal sites. Point being, I'm from the EU, but sometimes that hasn't been obvious and has only come up after working with someone online for a couple of years.
Basically, HTTP Basic Auth does not make your website any more private than a login page. Just treat your users fairly and be straightforward about what you do with their data, and you won't need any lawyers.