Server information leakage is generally viewed as a security vulnerability, e.g. by scanning tools that follow the OWASP guidelines, because it assists in server fingerprinting.

https://www.ibm.com/docs/en/control-desk/7.6.1.x?topic=check...

https://owasp.org/www-project-web-security-testing-guide/lat...

The general problem is that if you broadcast what software you're running, then the moment you have a version with a known vulnerability (or a zero-day), an attacker immediately knows what payload to run against you; in fact, you can potentially be attacked programmatically via Shodan and the like.
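
To make that concrete: once the banner is leaked, the "targeting" step collapses to a table lookup. A sketch of that attacker-side logic in Java (the banner-to-payload table is illustrative, though CVE-2021-41773 really was an Apache 2.4.49 path traversal):

    import java.util.Map;
    import java.util.Optional;

    // Illustrative only: a bot that maps leaked Server banners straight to
    // exploit choices. The second banner and both payload names are made up.
    public class BannerTriage {
        static final Map<String, String> KNOWN_BAD = Map.of(
            "Apache/2.4.49", "CVE-2021-41773 path traversal",
            "Example-Gateway/1.2.3", "hypothetical RCE payload"
        );

        // No probing, no guesswork: the banner itself selects the attack.
        static Optional<String> pickPayload(String serverHeader) {
            return Optional.ofNullable(KNOWN_BAD.get(serverHeader));
        }
    }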

Obviously it's better to simply never run insecure software, but forcing attackers to hunt for a payload that works against you will hopefully create opportunities for them to trip alarms, and it generally increases the time an attack takes.

I'm not hugely concerned about leaking what the gateway app is in general; as a sibling comment mentions, it's pretty unavoidable that an attacker will fingerprint a server if they really want to, and I'd agree (without knowing which is which) that a couple of them have pretty distinctive UI looks.

On the other hand, leaking version info is probably a bridge too far in my opinion: if you do ever end up with a vulnerable JVM/library/appserver/gateway, it can now be programmatically identified by an attacker. Things like Log4Shell or the Apache Struts vuln have a long, long tail of deployments that are going to be vulnerable forever.
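
At the HTTP layer, one cheap mitigation is to scrub identifying response headers at the gateway. A minimal sketch, assuming a Jakarta Servlet stack (the filter name is hypothetical, and depending on the container the connector may still stamp its own Server header, so verify the actual responses):

    import jakarta.servlet.*;
    import jakarta.servlet.http.HttpServletResponse;
    import java.io.IOException;

    // Overwrites identifying headers before the application writes the body.
    // The servlet API has no removeHeader(), so generic values are set instead.
    public class HeaderScrubFilter implements Filter {
        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse response = (HttpServletResponse) res;
            response.setHeader("Server", "gateway");  // no product or version
            response.setHeader("X-Powered-By", "");   // blank out the framework hint
            chain.doFilter(req, res);
        }
    }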

It's not the end of the world, but honestly I really, really dislike the way "security by obscurity!" tends to be used (more often than not) as a thought-terminating cliche. The actual principle is that obscurity as the *only* mechanism is not security, and that's true, but obscurity is generally an important part of defense-in-depth. There's no reason to hand an adversary more information about the system than necessary: an attacker blundering through your obscurity, triggering alerts for long-patched attack payloads or causing weird errors that don't normally crop up, increases the chance of detection.

People treat it as "anything that involves obscurity is a bad thing," and no, added obscurity is generally a good thing (as OWASP acknowledges), just not if your system is designed such that it stops being secure the moment the obscurity fails. Building a secure system and then adding obscurity is a net improvement over the same secure system without it.

Should your application remain secure if you hand an attacker a classfile list? Sure. Is it a good idea to actually hand them one? No. Same for network maps, IP ranges, etc. Is it something they could figure out eventually? Sure, but make them work for it, and hope they try to connect somewhere that isn't allowed for that container/VLAN and alarms go off. Security and obscurity are two great tastes that go great together, because obscurity increases the chance that an attacker trips an alarm in a way that catches attention.




Wait, what server information leakage are we talking about? I didn't think Keycloak leaked `x-powered-by`, and there's discussion [1] in their repo showing they understand the concern. All software can be fingerprinted (if it couldn't, it would have no user-visible behavioral differences). Making a server trivial to fingerprint isn't a good idea, but avoiding fingerprinting entirely isn't possible.
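
For reference, here's a quick way to dump exactly what a given deployment sends back; a minimal sketch assuming Java 11+ (the URL is a placeholder):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Issues a HEAD request and prints every response header, so you can
    // check for Server, X-Powered-By, or any other version-bearing values.
    public class HeaderAudit {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://auth.example.com/"))
                    .method("HEAD", HttpRequest.BodyPublishers.noBody())
                    .build();
            HttpResponse<Void> response =
                    client.send(request, HttpResponse.BodyHandlers.discarding());
            response.headers().map().forEach((k, v) ->
                    System.out.println(k + ": " + String.join(", ", v)));
        }
    }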

Was there a specific trivial information leakage you were worried about?

[1] https://github.com/keycloak/keycloak/pull/5293



