Hacker News

Our view is that localhost:2019 is inherently protected, though - it only accepts requests from the same machine. If you're running a machine with shared users, then of course it's up to you to further secure things.

That said, see https://github.com/caddyserver/caddy/issues/5317, we are considering changing the default, but that would be a breaking change although it would likely be a transparent change for most users. The default that makes the most sense depends on the platform and installation method, which is why it's complicated.




SSRF is a thing; just because you trust the code doesn't mean you've eliminated all security risks. The tools we use in the industry should all have secure defaults.
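To illustrate why same-machine trust is weaker than it looks, here is a contrived sketch (not Caddy code, and the filter is deliberately naive): a service that fetches user-supplied URLs and blocks only the literal hostname `localhost` still lets an attacker reach a loopback-only admin endpoint under another name for the same machine.

```python
from urllib.parse import urlparse

def naive_is_safe(url: str) -> bool:
    """Contrived SSRF filter: block only the literal hostname 'localhost'."""
    return urlparse(url).hostname != "localhost"

# The filter catches the obvious spelling...
print(naive_is_safe("http://localhost:2019/config/"))   # False (blocked)
# ...but 127.0.0.1 (or [::1], or a decimal IP) reaches the same machine.
print(naive_is_safe("http://127.0.0.1:2019/config/"))   # True (bypassed)
```

Any server-side code that fetches URLs on a user's behalf and has a gap like this becomes a proxy to whatever listens on loopback.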

Glad to hear you're considering changing the default!

It would be enough for me if Caddy generated a password (that's hard for attackers to predict) on first launch, set that in a config file it has write access to (autosave.json for example), and then required Basic auth using this password unless the configuration specified otherwise. My problem is that this endpoint is entirely unauthenticated.
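For anyone who wants to close this off today, the admin endpoint can already be disabled or moved in the Caddyfile's global options block. This is a sketch using documented Caddy options; the socket path is just an example:

```
{
	# Either disable the admin API entirely:
	admin off

	# ...or (instead) bind it to a unix socket guarded by file permissions:
	# admin unix//run/caddy/admin.sock
}
```

The complaint stands, though: this requires the operator to know the endpoint exists in the first place.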


Have you demonstrated SSRF on a server that is not running insecure or untrusted code (i.e. is not already compromised)?

We have yet to see this, but if we do see a practical demonstration, we're happy to reconsider.


You've never seen an application with an SSRF vulnerability? I've encountered multiple working as a penetration tester.


This is a horrible approach to security. Bad. Bad. Bad.


Making decisions based on demonstration of practical attacks is "bad bad bad"?


Assuming that everyone on localhost should have admin access to the web server is an extremely bad default, yes. Listening on ports the user has not requested is bad enough, but doing so without any authentication on that interface is bonkers.


What is the problem here that is any different from allowing untrusted code on your machine? Yes, if you don't trust your other users you need to lock down your system. That's not new or unique to Caddy, and indeed you can lock down Caddy too. Once the machine is popped locally it's already game over; authentication can't prevent that.


Large enterprise codebases will almost always have some security vulnerabilities in them but are still considered trusted code. Caddy is a fine choice of ingress for such applications if you ask me; maybe a little on the immature side, but it's certainly getting there. An admin might absolutely pick Caddy for ingress to keep config simple, write a Caddyfile, and end up with the aforementioned admin endpoint enabled by mistake.


Defense works in layers. Punching a hole through one of those layers makes security worse, even if that layer was not the first or the primary defense.

The problem here is that a privileged process (and yes, a process that can bind ports 80 and 443 is privileged even if it doesn't run as root) gives unauthenticated control to less privileged processes.

With good security design you don't lock things down as needed; instead, you start fully locked down and open up only the access you really need.
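To make "unauthenticated control" concrete: Caddy's admin API replaces the entire running config via `POST /load`, and by default no credentials are required. A sketch of the request any same-machine process could construct (built here but deliberately not sent; the replacement config is just a placeholder):

```python
import json
import urllib.request

# Placeholder config for illustration: an empty HTTP app,
# which would wipe all configured servers if loaded.
new_config = {"apps": {"http": {"servers": {}}}}

# POST /load is Caddy's documented config-replacement endpoint.
# On a default install, nothing authenticates this request.
req = urllib.request.Request(
    "http://localhost:2019/load",
    data=json.dumps(new_config).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.get_method(), req.full_url)  # built, not sent
# urllib.request.urlopen(req)  # would succeed against a default install
```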


> With good security design you don't lock things down as needed; instead, you start fully locked down and open up only the access you really need.

This. @mholt what we're arguing for is just to have secure defaults.



