
Agree with the article.

People have been misinterpreting "security by obscurity is bad" to mean any obscurity and obfuscation is bad. Instead it was originally meant as "if your only security is obscurity, it's bad".

Many serious real-world scenarios do use obscurity as an additional layer. If only because sometimes you know that a dedicated attacker will eventually be able to breach: what you are looking for then is to delay them as much as possible, and to make a successful attack take long enough that it's no longer relevant by the time it lands.




In nature, prey animals will sometimes jump when they spot a predator[1]. One of the explanations is that this is the animal communicating to the predator that it is a healthy prey animal that would be hard to catch and therefore the predator should choose to chase someone else.

I think we can kind of view obscurity in the same way. It's a way to signal to a predator that we're a hard target and that they should give up.

Of course in the age of automation, relying on obscurity alone is foolish because once someone has automated an attack that defeats the obscurity, then it is little or no effort for an attacker to bypass it.

Of course, sprinkling a little bit of obscurity on top of a good security solution might provide an incentive for attackers to go someplace else. And I can't help but think of the guy who was trying to think of ways to perform psychological attacks against reverse engineers [2].

[1] - https://en.wikipedia.org/wiki/Stotting

[2] - https://www.youtube.com/watch?v=HlUe0TUHOIc


>I think we can kind of view obscurity in the same way. It's a way to signal to a predator that we're a hard target and that they should give up.

This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here (or nothing here worth your time). One of the best examples (it's in the article!) is changing the default SSH port. Just by obscuring your port you can usually filter out the majority of break-in attempts.

The only way security through obscurity signals to "predators" is if they've seen past your defence, and thus defeated the obscurity. Obscurity (once revealed) is not a deterrent. Likewise an authentication method (once exploited) is not a deterrent.

>Of course in the age of automation, relying on obscurity alone is foolish because once someone has automated an attack that defeats the obscurity, then it is little or no effort for an attacker to bypass it.

This is true of basically any exploit. Look no further than metasploit. Another example: a worm is a self-propagating exploit.


> This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here (or nothing here worth your time).

Most of the usages of "security through obscurity" that I've seen dissected and decried haven't been in the sense that something was being hidden, but rather that something was being confused. For example, using base 64 encoding instead of encrypting something. Or running a code obfuscator on source code instead of making the code actually secure.

Either way, the economic costs that I'm talking about are valid. If an attacker sees that your SSH port isn't where it's supposed to be, OR if an attacker sees that your SSH port ignores all packets sent to it (unless you first send a packet that's 25 0xFF bytes), then either way they're being signaled that you are more trouble than the computer that has an open telnet port.

There are slightly different usages of the same word, but the effect looks to me to be the same. More investigation or automation can make the obscurity go away, but it does make things a bit harder.


Fair point! Obscurity as confusion is not what I had in mind, but your points on confusion are totally valid. Your analogy with predators works better here.

Using base64 encoding, or encrypting your database, are both examples in the article. While I agree base64 is super trivial, the point about either of these is defence in depth. In the language of the article, it's reducing likelihood of being compromised.

>If an attacker sees that your SSH port isn't where it's supposed to be, OR if an attacker sees that your SSH port ignores all packets sent to it (unless you first send a packet that's 25 0xFF bytes), then either way they're being signaled that you are more trouble than the computer that has an open telnet port.

This is semantics. Personally, I'd say if an attacker cannot sense anything to connect to, there is no "signal" you're sending. Rather, you're not sending a signal that you're a threat, because you're not sending any signal at all; you're functionally invisible. Otherwise, we could say literal nothingness is sending the same signal that your server is. We agree on the substance here, i.e. that the obscurity increases the economic cost of hacking and works as a disincentive, so we may just have to agree to disagree on the semantics.


There is supposed to be a response when a port is closed, telling you the machine is online but not listening on that port. https://en.wikipedia.org/wiki/Port_scanner


Most people have firewalls configured to simply drop traffic not destined for open ports, in which case there is no response, as the traffic never makes it beyond the firewall.


If you'd like to be very visible in a different way, you could always waste resources:

1. Endlessh: https://news.ycombinator.com/item?id=19465967

2. Tarbit: https://github.com/nhh/tarbit


“Security through obscurity” means something like e.g. “uses a bespoke unpublished crypto algorithm, in the hopes that nobody has put in the effort to exploit it yet.”

Usually this is a poor choice vs. going with the published industry standard, because crypto is hard to get right, and people rolling their own implementations usually screw it up, making life much easier for dedicated attackers than trying to attack something that people have been trying and failing to breach for years or decades.

Software makers for example typically don’t publish the technical details of their anti-piracy code. But this usually doesn’t prevent software that people care about from being “cracked” quickly after release.


Banking software uses all sorts of security through obscurity. In fact, Unisys used to make custom 48-bit CPUs for their ClearPath OS to make targeting the hardware very difficult without inside knowledge of the chip architecture.


You are making the same argument this article is trying to explain to you. Security by obscurity is not bad because _on its own_ it's not enough; it's good because, coupled with other layers, it adds security. I have been told to remove security-by-obscurity layers from systems by people that don't grok this. Security was, in a few cases, reduced to nothing. Systems that had one industry-standard approach only lay totally open on the Internet due to a single misconfiguration or a single CVE being published. Any other layer would have helped, however "insecure", but they were removed due to the misconception that the layers themselves were "insecure".

I would go so far as to say the first layer should always be security by obscurity for any unique system. If you fire up a web server and make the first security requirement that each HTTP request must have the header X-Wibble: wobble, I promise you this layer of security will be working hard all day long. Cheap, impossible to get wrong: it's not sufficient, but it works.
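
For illustration, that header gate fits in a few lines of WSGI middleware (a minimal sketch; the class name is made up, and X-Wibble/wobble are just the example values from above):

  class WibbleGate:
      """Reject any request lacking the magic header before it
      reaches the real application."""
      def __init__(self, app):
          self.app = app

      def __call__(self, environ, start_response):
          # WSGI exposes the X-Wibble request header as HTTP_X_WIBBLE
          if environ.get("HTTP_X_WIBBLE") != "wobble":
              start_response("403 Forbidden", [("Content-Type", "text/plain")])
              return [b"forbidden"]
          return self.app(environ, start_response)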


Using a non-standard SSH port is a bad example because nmap can see through that deception in a few seconds. Any attacker who is looking for more than just the lowest of low-hanging fruit will not be even slightly deterred.

A better example would be a port-knocking arrangement that hides sshd except from systems that probe a sequence of ports in a specific way. This is very much security by obscurity, because it's trivial for anyone who knows the port sequence to defeat, but it's also very effective as anyone who doesn't know the port sequence has no indication of how to start probing for a solution.
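
The client side of a knock is just a handful of timed-out connection attempts (a minimal sketch in Python; the host and port sequence are placeholders, and a server-side knock daemon is assumed):

  import socket

  def knock(host, ports, timeout=0.5):
      # Touch each port in order; a knock daemon on the server watches for
      # the right sequence and only then exposes sshd to this source IP.
      for port in ports:
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          s.settimeout(timeout)
          try:
              s.connect((host, port))
          except OSError:
              pass  # filtered/refused is expected; the attempt itself is the signal
          finally:
              s.close()

  knock("example.com", [7000, 8000, 9000])  # placeholder sequence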


> Using a non-standard SSH port is a bad example because nmap can see through that deception in a few seconds.

Compared to milliseconds. Do yourself the favor and open one sshd on port 22 vs one on a port >10000, then compare logs after a month. The 22 one will have thousands of attempts; the other one hardly tens if even any.

The 99% level we're defending against here is root:123456 or pi:raspberry on port 22. Which is dead easy to scan the whole IPv4 space for. 65K ports per host though? That's taking time and, given the obvious success rate of the former, is not worth it.

Therefore I'd say it's the perfect example: It's hardly any effort, for neither attacker nor defender, and yet works perfectly fine for nearly all cases you'll ever encounter.

EDIT: Note that it comes with other trade-offs, though, as pointed out here: https://news.ycombinator.com/item?id=24445678


I know we've spoken in another thread, but I think it's important for people to understand that this sshd thing is a perfect example of why it isn't this easy: you reduce log spam by moving to a non-privileged port, but you also reduce overall security - a non-privileged user can bind to a port above 10k, but can't bind to 22. If sshd restarts for an upgrade, or your iptables rules remapping a high port to 22 get flushed, then a non-privileged user who got access via an RCE on your web application can set up their own fake sshd and listen in to whatever you are sending, provided it binds to that port first and you ignore the host key mismatch error on the client side.
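
To make the port-binding asymmetry concrete, here's a minimal sketch (Linux, run as a non-root user; 22022 is a placeholder high port):

  import socket

  low = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  try:
      low.bind(("0.0.0.0", 22))  # ports below 1024 need root or CAP_NET_BIND_SERVICE
  except PermissionError:
      print("can't bind 22 as an unprivileged user")

  high = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  high.bind(("0.0.0.0", 22022))  # any user can grab a free high port...
  high.listen()                  # ...and squat there with a fake sshd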

Or you can implement real security, like not allowing SSH access via the public internet at all and not have to make this trade off.


Here's a counter-example (as I said elsewhere in this thread):

Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.

I'll also point out that we're generally talking about different threat vectors here, so it's good to lay them out. I don't think obscurity helps against a persistent threat probing your network, it helps against swarms.

> a non-privileged user can bind to a port above 10k, but can't bind to 22. If sshd restarts for an upgrade, or your iptables rules remapping a high port to 22 get flushed, then a non-privileged user who got access via an RCE on your web application can set up their own fake sshd and listen in to whatever you are sending, provided it binds to that port first and you ignore the host key mismatch error on the client side.

This is getting closer to APT territory, but I'll bite. If someone has RCE on your SSH server, it honestly doesn't matter what port you're running on. They already have the server. You're completely right that it would work if you have separate Linux users for SSH and the web server. Unfortunately that's all too rare in most web-servers I see (<10%), as most just add SSH and secure it and call it a day (even worse when CI/CD scripts just copy files without chowning them). But let's assume it here. In reality, even if you did have this setup, this is a skilled persistent threat we're talking about (not quite an APT, but definitely a PT). They already own your website. Your compromised web/SSH server is being monitored by a skilled hacker; it's inevitable they'll escalate privileges. If they're smart enough to put in fake SSH daemons, they're smart enough to figure something else out. Is your server perfectly patched? Has anyone in your organization re-used passwords on your website and Gmail?

You're right that these events could happen. But you have to ask yourself which of your actions will have a bigger impact:

* Changing to a non-standard SSH port, blocking out ~50% of all automated hacking attempts, or port-knocking to get >90% (just a guess!).

* Using the standard port, but you still have an APT who owns your web server and will find other exploits.


>Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.

Yep! And I should be clear: I am not saying just don't change the SSH port. I'm saying if you care about security, at a minimum disallow public access to SSH and set up a VPN.

>Unfortunately that's all too rare in most web-servers I see (<10%), as most just add SSH and secure it and call it a day (even worse when CI/CD scripts just copy files without chowning them).

I'm a bit confused here. In every major distro I've worked on (RHEL/Cent, Ubuntu, Debian, SUSE) the default httpd and nginx packages are all configured to use their own user for the running service. I haven't seen a system where httpd or nginx are running as root in over a decade.

I think the bare minimum for anyone that is running a business or keeping customer/end user data should be the following:

1) Only allow public access to the public facing services. All other ports should be firewalled off or not listening at all on the public interface

2) Public facing services should not be running as root (I'm terrified that you've not seen this to be the case in the majority of places!)

3) Access to the secure side should only be available via VPN.

4) SSH is only available via key access and not password.

5) 2FA is required

I think the following are also good practices to follow and are not inherently high complexity with the tooling we have available today:

1) SSH access from the VPN is only allowed to jumpboxes

2) These jumpboxes are recycled on a frequent basis from a known good image

3) There is auditing in place for all SSH access to these jumpboxes

4) SSH between production hosts (e.g. webserver 1 to appserver 1 or webserver 2) is disabled and will result in an alarm

With the first set, you take care of the overwhelming majority of both swarms and persistent threats. The second set will take care of basically everyone except an APT. The first set you can roll out in an afternoon.



Protecting sshd behind a VPN just moves your 0day risk from sshd to the VPN server.

Choosing between exposing sshd or a VPN server is just a bet on which of these services is most at risk of a 0day.

If you need to defend against 0days then you need to do things like leveraging AppArmor/Selinux, complex port knocking, and/or restricting VPN/SSH access only to whitelisted IP blocks.


Except you don't assume that just because someone is on the VPN you're secure.

If the VPN server has a 0day, they now have... only as much access as they had before when things were public facing. You still need there to be a simultaneous sshd 0day.

I'll take my chances on there being a 0day for wireguard at the same time there's a 0day for sshd.

(I do also use selinux and think that you should for reasons far beyond just ssh security)


A remote code execution 0day in your VPN server doesn't give an attacker an unauthorized VPN connection, it gives them remote code execution inside the VPN server process, which gives the attacker whatever access rights the VPN server has on the host. At this point, connecting to sshd is irrelevant.

Worse, since Wireguard runs in kernel space, if there's an RCE 0day in Wireguard, an attacker would be able to execute hostile code within the kernel.

One remote code exploit in a public-facing service is all it takes for an attacker to get a foothold.


I do not run my VPNs on the same systems I am running other services on, so an RCE at most compromises the VPN concentrator and does not inherently give them access to other systems. Access to SSH on production systems is only available through a jumphost which has auditing of all logins sent to another system, and requires 2FA. There are some other services accessible via VPN, but those also require auth and 2FA.

If you are running them all on the same system, then yes, that is a risk.


For a non-expert individual who would like to replace commercial cloud storage with a self-hosted server such as a NAS, do all these steps apply equally?

I am limiting the services to simple storage.

It looks like maintaining a secure self-hosted cloud requires knowledge, effort, and continuous monitoring and vigilance.


Most of those are good practices for a substantial cloud of servers that are already expected to have sophisticated configuration management. They're easy to set up in that situation, and a good idea too because large clouds of servers are an attractive target - they may be expected to have lots of private data that an attacker might want to steal and lots of resources to be exploited.

A single server run by an individual and serving minimal traffic would have different requirements. It's a much less attractive target, and much harder to do most of those things. For example, it's always easy and a good idea to run SSH with root login and password authentication disabled, run services on non-root accounts with minimum required permissions, and not allow things to listen on public interfaces that shouldn't be. Setting up VPNs, jumpboxes, 2FA, etc is kind of pointless on that kind of setup.


>Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.

But how much of a threat is this? Who's going to drop an ssh 0day with a PoC for script kiddies to use? If it's a bad guy, he's going to sell it on the black market for $$$. If it's a good guy, he's going to responsibly disclose.

>You're right that these events could happen. But you have to ask yourself what's actions of yours will have a bigger impact:

>* Changing to a non-standard SSH port, blocking out ~50% of all automated hacking attempts, or port-knocking to get >90% (just a guess!).

But blocking 50% of the hacking attempts doesn't make you 50% more secure, or even 1% more secure. You're blocking 50% of the bottom of the barrel when it comes to effort, so having a reasonably secure password (i.e. not on a wordlist) or using public key authentication would already stop them.


It makes the logs less noisy. And with much less noisy logs it is easier to notice if something undesirable is happening. Also from my experience this 50% is more like 99%.


> Unfortunately that's all too rare in most web-servers I see (<10%), as most just add SSH and secure it and call it a day (even worse when CI/CD scripts just copy files without chowning them).

If you made a list of things like this which annoy you, I would enjoy reading it.


> Imagine a 0day for SSH drops tomorrow. Almost immediately script kiddies all over the world will be trying to take over everything running on port 22.

And with all those compromised servers they could easily scan for sshd on all ports.


Yeah, I'm seeing a lot of nonsense in here. Why is SSH publicly accessible in the first place???

Security through obscurity is just some feel good bullshit.


Why shouldn't SSH be public? It is useful and simple to secure.


Well, there are basically two stances you can reasonably take:

1) SSH is secure enough just by using key based auth to not worry about it.

2) SSH isn't secure enough just by using key based auth so we need to do more stuff.

If you believe #1, then you don't need to do anything else. If you believe #2, then you should be doing the things that provide the most effective security.

Personally, I believe #1 is probably correct, but when it comes to any system that contains data for users other than myself, or for anything related to a company, I should not make that bet and should instead follow #2 and implement proper security for that eventuality.

I'm willing to risk my own shit when it comes to #1, but not other people's.


Fair enough, I've edited the comment to reflect this :)


> The 22 one will have thousands of attempts

The range in the figures is surprising. I leave everything on port 22, except at home where due to NAT one system is on port 21.

On these systems, since 1 September:

  lastb | grep Sep\  | wc -l

  160,000 requests  (academic IP range 1),
  120,000 requests  (academic IP range 2),
    1,500 requests¹ (academic IP range 3),
    1,700 requests² (academic IP range 3),
  180,000 requests³ (academic IP range 3, just the next IP),
   80,000 requests  (home broadband),
   14,000 requests  (home broadband — port 21),
    5,000 requests  (different home broadband, IPv4 port)
        0 requests  (           ,,     ,,      IPv6 port)
¹²³ is odd. All three run webservers, ² also runs a mailserver, yet they have sequential IP addresses.

I don't bother with port knocking or non-standard ports to ensure I have access from everywhere, to avoid additional configuration, and because I don't really see the point when an SSH key is required (password access is disabled).


Good example, but doesn't help his point, which was:

> This has it completely backwards. Security through obscurity's goal is not to signal predators, it's the opposite. The goal is to obscure, to hide. The "signal" is there is nothing here

An attacker scanning the whole IPv4 space won't think "ah, there's no ssh on port 22, there's no ssh to attack". They will think "yep, they did at least the bare minimum to secure their server, let's move on to easier targets".

He proved the point he was trying to disprove.


So I get 10s of attempts a day for my sshd on port 7xxx. If I had an account with say ubuntu:ubuntu, it'd totally have been found by now.


I have 0 in the last 14 days on port 2xxx. Probably depends a lot on your IP range (I'd assume AWS etc is scanned more thoroughly) and whether you've happened to hit a port used by another service. But even in commercial ranges, I've seen hardly any hits on >10k.

But I have only anecdotal evidence as well, so my guess is as good as yours.


So scale the other side up too. Just imagine what it's like on 22.


The article addresses this. He did a poll, and just under 50% of people use the default ports. So just by changing your default port, you eliminate half the break-in attempts.

Now you're absolutely right that this only deters less-skilled/inept hackers, a more competent hacker easily gets past this. But it's worth dwelling on the fact that we still stopped a substantial number of requests. Port knocking is definitely an improvement (i.e. more obscure). I'd guess with port-knocking more than 90% (even 99%) of attempts would completely miss it. The goal here isn't to rely completely on obscurity. It's security in depth. Your SSH server should still be secure and locked down.

The other question with this is what's your threat vector. Most people decry security through obscurity because an APT can easily bypass it. They can, but most people trying to hack you are script kiddies. Imagine an SSH exploit was leaked in the wild – all the script kiddies would be hammering everything on port 22 immediately.


The poll is my biggest issue with an otherwise agreeable article, the sample size and representation on Twitter doesn't make for anything close to reliable percentages.

I understand its use as a demonstrative aid but especially in the context of security, hinging your policies on the outcome of a Twitter poll seems like... well, security through obscurity.


Maybe a bit nitpicky but I think port-knocking is in kind of a grey area. You can think of it as a kind of password where you have to know the correct series of ports. Since the number of ports is quite large, there is also a correspondingly large number of possible port sequences so you can't, in principle, brute force it without a lot of effort.


> Maybe a bit nitpicky but I think port-knocking is in kind of a grey area. You can think of it as a kind of password where you have to know the correct series of ports.

Yes.

But you also have to know that port knocking is enabled at all. That's the obscurity part.


Eh, I think it's as others have expressed in this thread.

Security by obscurity is bad not because it is bad to have well-secured countermeasures, but because it encourages poor thinking with regards to the methods you have in place, and additionally because it usually introduces extra, unintended attack vectors.

You suffer from your own 'obscurity': whether it's because you forgot you had to port knock or use a different port, or because you somehow managed to leave a new exploit in some obscured code due to a bug in the obscured code, or because you managed to open yourself up to an RCE with the port knocking, or some other obscure scenario you did not intend to create from whatever obscurity you created.

I think this is different from Defense in Depth, which just says to have more than one countermeasure in place, and to keep the countermeasures 'separate', but well defined. Port knocking, but on a different box than your VPN box, on a different box than your ssh box.

We aren't told to use 'well-defined' passwords, like 1234, obviously. If the point of 'obscurity' is 'secrets', that's all well and good, but that's not security, that's a password. Have a password, by all means, but tunnel it over TLS and use well defined security paths versus creating unnecessary risks.


I think this implementation avoids that problem.

https://github.com/moxie0/knockknock


Gotcha. Fair point.


Less secure than a password because anyone sitting in the middle can observe the sequence.


> [port-knocking] is very much security by obscurity, because it's trivial for anyone who knows the port sequence to defeat

in what way is this different than a passphrase you don't know? i can trivially defeat any password which i already know, too :D

while discovering a non-standard ssh port is easy, discovering a port-knock sequence out of a possible ~65k per knock is impractically difficult (assuming the server has any kind of minimal rate limiting). a sequence of eight knocks will need 65k^8 attempts - and that's assuming you already know which port will be opened, which of course you won't.

you can even rely on port-knocking alone: three knocks already gets you ~48 bits of entropy (3 × 16 bits), which is about the same strength as a random 8 char alpha-numeric latin-charset password, and eight knocks gets you ~128 bits.

(someone plz check my math)
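
a quick sanity check in python (just the arithmetic, nothing else assumed):

  import math

  PORTS = 65536                  # 2**16 possible ports per knock
  print(math.log2(PORTS ** 8))   # 128.0 bits for eight knocks
  print(math.log2(PORTS ** 3))   # 48.0 bits for three knocks
  print(math.log2(62 ** 8))      # ~47.6 bits for an 8-char alphanumeric password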


I agree with you that the example is not the best, but obscurity has a lot of benefits. We did an experiment with a few students on obscuring a WordPress installation some years ago, to catch people scanning for certain plugins. That gave us the ability to use the regular paths as honeypots. It gives you the ability to detect 0-day attacks as well.


Changing the ssh port would still fall under security through obscurity, whether it's effective or not.


This doesn't consider that you can then monitor connections to port 22, and if you see any, that's suspicious.

Disagree that port knocking is obscurity. That's a secret.

Security through obscurity would be using a nonstandard SSHD service.


I just turned off password authentication on SSH and moved to keys, then moved to IPv6. The automated scans haven't made it to v6 yet. The only better thing I could do is have an external v4 SSH honeypot that moves as slowly as possible to tie up a (tiny) resource.


IPv6 seems to be a good example of security by obscurity, with up to 64 bits of random IP addresses per machine, making scanning impossible in practice?
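
Back-of-the-envelope (a rough sketch, assuming a scanner managing one million probes per second against a single /64):

  seconds = 2**64 / 1e6               # addresses to probe / probes per second
  years = seconds / (3600 * 24 * 365)
  print(f"{years:,.0f} years")        # ~584,942 years to sweep the subnet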


Endlessh and tarbit do this


You can’t compromise what you can’t see.

Calling an administrator account “9834753” obscures its purpose and may reduce the likelihood of a compromise attempt, as opposed to “databasesuperadmin”. But that doesn’t mean that you don’t need a good security token.


I find the SSH example slightly odd, when all you need to do is disable password authentication and root access. Moving away from port 22 just seems a little excessive?


In other words, changing the default SSH port number is similar to using camouflage. It just helps hide that something is there, but it does nothing to improve the defense once spotted. However, if the majority of predators don't see you, then the rest of your defenses are needed at that time.


It's also an indication that there are no default passwords in use. So even if you know what port SSH lives on, there's a lower ROI to attacking it than a default port.


I think it is the opposite -- systems with rigorous security tend to be more open, because the designers are confident they understand their system. In contrast, systems that practice security through obscurity are often owned/managed by people afraid of what will go wrong.

We should distinguish obscurity from intentionally hiding the configuration, which makes attackers undertake discovery, and hence can lead to detection. But your internal red team / security review should have all the details available. If loss of obscurity leads directly to compromise then you don't have security. Cf insider threat.


Your example is advertising, which is the opposite of security through obscurity.

Obscurity is another layer of hiding or indirection: like the owl has camouflage and it has a hole in a tree.

Advertising your fitness (your stotting metaphor) is effective when: you are part of a herd, and the attacker will only attack the weakest in that herd and then be satisfied. Like double locking your bike next to a similar bike that has a weaker lock.

Computer security is different because usually either:

a) everyone in the herd is being attacked at once (scattergun/IP address range scanning), or

b) you are being spear targeted individually (stotting won’t work against a human hunter with a gun, and advertising yourself won’t help against a directed attack).

An example of advertising your security might be Google project zero, or bug bounties.


>An example of advertising your security might be Google project zero, or bug bounties.

That's more akin to a gecko sacrificing its tail, IMO. You're taking a predator that's capable of a successful attack and rewarding them for not doing it, at some cost to yourself. It provides an easy and less risky way of getting paid.


Using obfuscation is often a signal that you are a weak target, because there are a lot of places that use obfuscation but nothing else. A better indicator that you are a hard target is to enable common mitigations like NX, stack cookies, or ASLR.


There is one giant hole in your argument: both stack cookies and ASLR are mitigations that are nothing more than automated security through obscurity in the first place.


I assume you're equating picking a random SSH port with scanning for an ASLR slide or guessing a stack cookie, but they are different situations: processes that die are generally treated quite seriously, and they leave "loud" core dumps and stack traces as to what is going on–usually this gets noticed and quickly blocked. With SSH you can generally port scan people quickly and efficiently in a fairly small space (and to be fair, 16-bit bruteforces are sometimes done for applications as well, when possible)–and the "solution" here where you ban people that seem to hit you too often is literally what you are supposed to be running in the first place.

And in general, the sentiment was "if you are using those things, you are likely to have invested time into other tools as well, such as static analysis or sanitizer use", which are not "security through obscurity" in any sense, whereas the "obscurity" that gets security people riled up is the kind where people say things like "nobody can ever hack us because we changed variable names or used something nonstandard", because it is usually followed with "…and because we had security we didn't hash any passwords".


How so? Stack cookies and ASLR are a form of encryption, where an attacker has to guess a random number to succeed in an attack.

Obscurity really just boils down to a secret that doesn't have mathematical guarantees. It's doing something that you think the attacker won't guess, just like an encryption key, but without the mathematically certified threat model, so you just hope that the attacker is using a favorable probability distribution for their guesses.


The attacker who had already compromised the integrity of the system in question has to guess or probe for a random number with relatively low entropy in order to do something useful and straightforward with that already compromised system.


Yeah, that's what I was trying to get at with my "in the age of automation" comment. If you go to a period in history without automation, then obscurity is going to be a lot more effective. And that's why I think people still want to go back to it. Obscurity is much easier to wrap your mind around than RSA, et al.

However, the psychological warfare video does make me think that there's still a place for obscurity after you've already used actual security measures. If you can find any technique that makes your attacker work harder vs some other target, then it feels like there's an economic value to doing it as long as the cost to you is relatively low.


The only downside I see immediately is that there's a counterweighted risk to obscurity in your security layer: you can confuse your own users (or yourself).

Many security tools I've used are downright user hostile in how little information they provide the end-user (or the admin!) regarding why an auth process failed. It incentivizes people to simplify or bypass the system entirely when they can't understand the system.


Semi-related: any time I have written a protocol with a checksum, I implement a 'magic checksum' that always passes, plus a debug mode that enables it and extra diagnostics. The reason is that usually, if something's wrong with a packet of data, the best thing to do is to ignore it completely. But that makes development insane. So having two modes gives you the best of both worlds.
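
A minimal sketch of the pattern (the flag and magic value here are made up for illustration):

  import zlib

  DEBUG_MODE = False           # hypothetical switch, enabled only during development
  MAGIC_CHECKSUM = 0xDEADBEEF  # hypothetical "always passes" value

  def checksum_ok(payload: bytes, received: int) -> bool:
      # In debug mode the magic value bypasses verification so hand-crafted
      # packets can be tested; in production a bad checksum means drop the packet.
      if DEBUG_MODE and received == MAGIC_CHECKSUM:
          return True
      return zlib.crc32(payload) == received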


When moving through scary streets in my travels, I would shout to my companion in the local language, trying to signal to potential thugs that they should choose to chase someone else. I did it once in Russia while in a dodgy neighborhood to buy vodka, and back in the 1990s when Westerners were under some threat in the Middle East.


The security example of stotting would be the exact opposite:

Remove all obscurity, expose all your techniques and algorithms, and set up bounties for people to break your defences.

See eg https://cloud.google.com/beyondcorp and https://cloud.google.com/security/beyondprod where Google gives up on VPNs.


> In nature, prey animals will sometimes jump when they spot a predator[1]. One of the explanations is that this is the animal communicating to the predator that it is a healthy prey animal that would be hard to catch and therefore the predator should choose to chase someone else.

I think this analogy perfectly explains my hostility to security by obscurity. When I see a system that uses standard ports and demonstrates best practices, I think "oh well, they probably know what they are doing." When I see a system using strange ports and/or extraneous crypto, I think "well, maybe this guy is an idiot" and take a deeper look.


I think the predator/prey real world analog to "security through obscurity" would be camouflage.


I've heard a better analogy - security by obscurity is like camouflage on a tank. A tank has massive armor and a terrifying gun to defend itself with. But even a half-assed camouflage can delay enemy reaction by a few seconds. Sometimes it's all it takes, because it lets you shoot first. In addition, the cost of camouflage paint or a net is laughably low and can be replaced in the field. It's simply an extra layer of protection and a very inexpensive one.


It's also a terrible thing to say to your pointy-haired boss, which is in part where I think the heresy of talking about it comes from. If you say "security by obscurity", "security" is the last thing your boss hears; what registers is "oh yeah, obscurity, great, that's cheap!" and you end up in camouflage in the field, with no tank.

The more I think about it, the less I like the article; it speaks of inexperience and perhaps of not understanding the concept well, and it is important to accurately articulate and distinguish concepts. No one is going to argue that camo isn't an advantage to a soldier, but it is not security in any meaningful sense, any more than camouflage is a bunker, or a trench, or a tank.

And camo comes with a real downside too, just like in the field: if you're camo'ed too well, you're apt to take friendly fire or be missed by artillery lobbing a shell.


https://www.reddit.com/r/netsec/comments/ioxux2/security_by_...

(Security by obscurity) is camouflage, not armor.


An argument against obscurity is that it adds additional pains for your "regular" users (as in developers/3rd party developers/app developers) while being a small deterrent against unauthorised users (as they will be able to circumvent the "obscurity layer" and replicate their method to other bad actors).

edit: In the first sentence "against" is not what I wanted to say: what I wanted to say is that it "downgrades its effectiveness". I agree that obscurity can and sometimes should be a layer of security.


> An argument against obscurity is that it adds additional pains for your "regular" users (as in developers/3rd party developers/app developers)

No one should be applying obscurity to public-facing APIs or anything for which documentation is widely distributed outside the company.

A better example would be Snapchat's intense and always evolving obfuscation strategies: https://hot3eed.github.io/snap_part1_obfuscations.html

Even though someone took the challenge to de-obfuscate most (but not all) of the protections, just look at how much effort is required for anyone else to even follow that work. More importantly, consider how much effort is required relative to other platforms. It's enough of a pain that spammers and abusers are likely to choose other platforms to attack.


When security is totally impossible because there is no way to distinguish a trusted party from an adversary, obscurity is the only hope.


If you cannot distinguish a trusted party from a malicious party everything is then potentially malicious. This is why we have certificates, certificate revocation, and trust authorities.


And that works great until a trust authority gets compromised. It's for this reason that the US DoD has its own root certificate authorities, and thus many military websites actually look like they have invalid https certs. Browsers don't ship with DoD root certs installed as trusted.


Yeah, I am on a DODIN as I write this. In the civilian world, a CA falls back on a decentralized scheme called Web of Trust, which allows CAs to reciprocate certs from other CAs and invalidate other CAs as necessary.

The DOD chose to create their own CA scheme originally for financial reasons, in that over a long enough timeline new infrastructure pays for itself with expanded capabilities while minimizing operational costs dependent upon an outside service provider. This was before CACs were in use.

https://en.wikipedia.org/wiki/Web_of_trust


Thanks for the additional info, I didn't know (but probably should have assumed) that finance was the primary motivator. I just had to implement CAC authentication for a webapp, and they still use their own CAs for client-side certs (aka CACs), so it seems like it was a pretty savvy investment at the time that's not going away anytime soon.


Agreed. The maxim warning against "security from obscurity" is often reduced to an irrational comprehensive avoidance of obscurity. It's similar to the irrational avoidance of all performance optimization because Knuth warned of premature optimization.

Both reductions lose practical utility by omitting nuance.

* Avoid wasting your time doing performance optimization until tuning is necessary. But definitely take obvious and easy measures to ensure your software is fast, such as choosing a high-performance language or framework with which you can be productive.

* Don't exclusively rely on obscurity. But definitely take obvious and easy measures that leverage obscurity to add another layer of defense, such as changing default ports, port-knocking, or whatever.

To use the same art of reduction to counter the common interpretation: A complex password is, in a manner of thinking, security from obscurity. Your highly complex password is very obscure, hence it's better than a common (low obscurity) password from a dictionary.


> But definitely take obvious and easy measures that leverage obscurity to add another layer of defense, such as changing default ports, port-knocking, or whatever.

Except that can lead to operational problems down the road. For example: "oh yes, we're nice and secure, not only do you need a 512-bit private key to get into this device, you also need to connect from a secure network"

Then along comes covid, and you can't get into the building.

"Oh dear, you're not on the secure network, you can't come in"

So you spend 2 hours (while your network isn't working right and you're losing customers) finding and getting in through a back door.


I would call that system secure. It does not just rely on an obscure password but is actually restricted by a list of whitelisted networks.

The failure in that case is only that the admin didn't consider that normal work might be done from home at some point, or that the middle or upper manager thinks he should be able to freely administrate his critical infrastructure from anywhere...


IP whitelists break so often for "unanticipated reasons" that I've lost all sympathy for not anticipating it. Doubly so for using a whitelist to lock yourself out of the whitelist admin.

It's so common the security community should make it a meme to spread awareness: Don't get pwned by DHCP while running from SSH 0-day RCEs.


Of course security can lead to operational problems. Security is a trade-off against convenience.


> Of course security can lead to operational problems.

So can lack of security.


> Except that can lead to operational problems down the road.

In the example you mention, the 'security' is working by design, but the operational parameters changed, which in turn made that security model unsuitable - so it is the parameter change, rather than the 'security', that led to the problems.

The original system could have been just as 'obscure' but also included an appropriately secured mechanism that allowed for this kind of remote access / disaster scenario.


That isn't obscurity. And in your scenario, there was a security hole if the requirement was that you had to be on the intranet, but someone was able to gain access from the outside.


The number of times I've seen people shitting all over port knocking is truly confusing. Since we added it several years ago, we've not had a single case of hackers trying to break into sshd. Before port knocking: hundreds a day, even though it was on a very unusual port.

I try to tell people this, when they poo poo port knocking, but they just don't get it.

EDIT: s/the/they/


But serious question -- what exactly is the benefit? Before, it's not like they were getting in anyways if you were using keys.

So I confess I still don't "get it". Unless you just want cleaner logs or something. I assume you're still getting the same number of initial connection attempts per day, but just not recording them?

Is it something to do with network or CPU consumption related to failed subsequent attempts by the same actor? (Which, the same as port knocking, should be rate limited anyways?)


There have been bugs found in SSH server implementations that allowed limited remote code execution or even authentication bypasses. Missing an update or two isn't bad when nobody can figure out how to connect to your server.

Of course you have to update at some point. However, if someone drops a zero day on your SSH server while you're asleep you're probably glad that you've got a secret sauce to protect your server, letting the vulnerability bots focus on other servers.


If port knocking existed in a vacuum, sure. It'd be great.

The issue is there are other options that are better - like VPN-only access to SSH - that you can use instead of (or in addition to) it.

If everyone advocating for port knocking was also saying set up VPN-only access, sure. It's an additional authorization factor where ports are used as a proxy for a PIN. But I haven't seen a single person in here saying they use it in addition to a VPN - people are saying it's their primary form of protection.

You can set up a wireguard VPN in as much time as it takes to set up port knocking. Now you have all of the benefits port knocking provides, and more. And you could even still set up port knocking in addition to the VPN if you really wanted to, but I would argue there's not much point.


Curious, how does this work? I am not very familiar with VPNs. Is the VPN connection set up for the SSH session only? What if someone needs to have multiple SSH sessions going to different networks altogether?

I'm thinking it could be pretty impractical to go onto a whole other network to open an SSH session.


>Curious, how does this work?

It depends on the implementation. For a client <-> server VPN, it creates an interface on your local machine that corresponds to the network address range for the VPN, and tunnels traffic to the remote end.

For a site to site VPN, two appliances create a tunnel between them, and traffic is routed over that tunnel via the same sort of routing rules you normally use.

> Is the VPN connection setup for the SSH session only?

It can be. It can also be configured for all traffic, or some other combination.

> What if someone needs to have multiple SSH session, going to different networks altogether?

You can have multiple VPN connections to multiple networks. It can get complicated if the VPNs are using overlapping IP space.

> Im thinking it could be pretty impractical to go onto a whole other network to open an SSH session.

I'm not entirely sure why. Millions of people use VPNs every day for a variety of reasons, including SSH. I currently have 8 saved VPN configurations in my wireguard client, and connecting to one is as simple as clicking on the client and picking the one I need in the dropdown. Then I SSH as normal, except it's to the server's private IP and not its public one.


Why aren't you concerned that bugs will be found in your port knocking implementation?

I think the main concern with port knocking is that it's observable. You're effectively sending your password in clear, so if someone can intercept or overhear your traffic then your secret is lost. Cryptographic authentication schemes like SSH itself or VPNs do not have this problem.


Port knocking is a way to decrease the amount of random 0day/brute force scripts finding their way into your server. It will only stop automated scripts and attackers that don't know who you are. It's obviously no protection against incentivized attackers.

A VPN has upsides and downsides. It obviously protects your server a lot better against directed attacks, but when you lose your laptop or when your computer gets ransomware'd, you can't get access to the server anymore.

Furthermore, code execution vulnerabilities have been found against VPN servers because of their immense complexity and OpenVPN can consume quite a lot of resources for a daemon doing nothing. WireGuard has changed the VPN landscape with its simplicity, but if you fear your server may not be updated all too often (because it's partially managed by a customer, because your colleagues might not care to do so after you leave), leaving a simple solution behind can have its upsides.

I'm not advocating that everyone should enable port knocking on their servers to make them secure or anything, but the "port knocking is always bad" crowd is often very loud, despite the fact that there are small ways port knocking can improve security with very little effort or increased attack surface.


But what does getting past port knocking help with? Now they have to find a bug in ssh.


If OP has a false sense of security due to port knocking, their ssh may not have been updated as recently.


We update sshd daily, as we are on CentOS and use the official updates. Nice guess, though.


From what people have told me, the point is to remove automated attempts from the logs, so that when someone actually works out how to connect, it becomes a strong signal that you have a real attack, and you can check the logs to see if they are using real usernames or other info suggesting they know more than random spam attempts. Normally, dedicated attackers blend in with the random noise of the internet.


It is as simple as reducing an attack surface. If attackers can't talk to sshd, they can't try to hack it. In a world where zero days are real, why chance it?

Why is that so hard to grasp? It still boggles my mind.


The same with the "GnuPG is bad" mantra on Hacker News. There is nothing better than GPG currently for all its functionality, and the only answer you get when asking for a substitute is "don't use this function" or "use some obscure application". Yeah, right.


I agree that there is nothing better than GPG for the narrow scope of encrypting email. But I think there are very few cases where encrypted email is the most secure way to communicate, in lieu of other forms of encryption.


Encrypted email is almost a marginal usage scenario for GPG compared to other uses. It does everything. It is everywhere. Yes, it is big; nobody has to use all of it. Just like C++... oh wait, it is unpopular in the Hacker News bubble too, despite being a juggernaut of a language. It will still be relevant long after Hacker News is no more.


I basically use GPG for one thing, at this point: signing git commits.

As far as I know, there isn't another GitHub/GitLab compatible way to do this. So I'll keep using GPG until there is.


Age is demonstrably better: https://github.com/FiloSottile/age

Also, an informed analysis of PGP: https://latacora.micro.blog/2019/07/16/the-pgp-problem.html


"Informed analysis" like decrying the lack of forward secrecy in something made for non-ephemeral communication - for storing and sending files, digital signatures, etc. Or the backwards compatibility that lets you access and verify your backups, archives, etc. from 10 or more years ago.

Show me an ephemeral encryption scheme for something that needs to be readable in the future like that.

This analysis is highly uninformed I would say.


It’s not an informed analysis, or even an analysis at all.

That’s not how analyzing algorithms or programs work. Even a basic threat model is missing.


Just respond "don't knock it 'til you try it... or rather, don't try it 'til you knock it!"


>we've not had a single case of hackers trying to break into sshd.

That you are aware of.


Yes, you have lovely clean logs to audit


Do you use single-port knocking or a sequence of port-knocks?


OP would respond but then that would break the obscurity! :-)


But the examples given won't help and are just bad advice in general.

- Assume for every code obfuscator there's a deobfuscator or at least someone as clever as you out there.

- Randomizing variable names is just a nuisance; it won't stop any competent pen tester or attacker.

- Encrypting the database is an odd one. Your program will also have to decrypt the data to use it. Where do you store the encryption keys? In your code? Don't assume obfuscating your code and/or randomizing variables will protect your encryption keys.


You seem to be thinking in terms of security mechanisms either perfectly blocking attacks or being useless. That's the wrong model. It's about costs. Obfuscating otherwise-open code doesn't mean that nobody can ever figure out what it does, but it raises their costs. Randomizing variables raises costs. Encrypting the DB raises costs on an amortized basis (some cracks may get the key and then it may not raise the cost much, but other cracks may only get the data in which case cost is raised a lot). Things are "secure" not when it's impossible for any actor in the world you don't want to get access to get access, but when the costs to those actors exceed the loss you may experience. (Preferably by a solid margin, for various reasons.)

As to whether this is good or bad advice, that depends on how expensive these things are (e.g., encrypting database fields may be very expensive if you write raw SQL calls as your primary DB interface but may be dirt cheap if you're using an ORM that has it as a built-in feature) and your local threat model (e.g., "dedicated, personalized attackers reading your source" is very different from "does it defeat automated scanners?"). You can't know whether these are good or bad ideas without that additional context.
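
For a sense of scale on the cheap end: application-level field encryption can be a handful of lines (a minimal sketch using Python's cryptography package; key management, the genuinely hard part, is elided here):

  from cryptography.fernet import Fernet

  key = Fernet.generate_key()  # in practice, load from a secrets manager instead
  f = Fernet(key)

  token = f.encrypt(b"alice@example.com")  # store the token in the DB column
  plain = f.decrypt(token)                 # decrypt on read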


> You seem to be thinking in terms of security mechanisms either perfectly blocking attacks or being useless. That's the wrong model. It's about costs.

This is something that bothered me quite a bit in Bruce Schneier's various comments on airline security. He repeatedly wrote that profiling young Arab men as likely terrorists was pointless, because if it became harder for young Arab men to get through security, terrorist organizations would simply start sending Japanese grandmothers.

But of course where it's relatively easy to find young men willing to die for a cause, it's much more difficult to find grandmothers who will do the same. And where it's relatively easy for an Islamic group based in the Middle East to connect to Arabic social networks, it's much harder for that group to connect to Japanese networks.


But it’s easier to find Arab women. Or an Irish girlfriend of an Arab man. Or two old Korean people.

(All real examples)


I'm pretty sure the potential recruitment pool of young Arab men is still many times greater than the pool of Irish girlfriends of Arab men.

It's about improving the odds/reducing the exposure, not achieving some theoretical absolute perfection.


It's bad math. It doesn't matter if you have 1 Arab man and 1 Korean woman, or 1000 Arab men and 1 Korean woman. You only need 1.

If you calculate the probabilities correctly, you get very different results.


No, obviously it's harder to find two old Korean people than one old Japanese person. Everything you've listed is hundreds, thousands, or millions of times more difficult than the young-Arab-man case.

Suppose you take down a plane with a young Arab man, and then you want to take down a second plane. There is a neverending stream of similar men willing to do the job. If your strategy requires you to use elderly Korean couples, you're done after the first plane -- you'll never find a second one.


This is also absent in the analysis of "security theater." I've often felt the "theater" does in fact have a material impact on target selection. One doesn't need a methodology that actually results in better capture of terrorists to deter them to other targets: one just needs a methodology that plausibly increases the risk of failure. The unfalsifiability of "security theater" is actually a feature, not a bug: it means there's always a non-zero weight on its potential risk impact for terrorists considering air travel as a target.

All other things being equal, the opportunity cost will shift towards targets that have less elements akin to "security theater", since it's basically 'money on the table' to de-risk the attack.

So, the real question to ask about "security theater" is not if it has a material impact on human safety with flying, but if its deterrent effect pushes risk to places we'd rather it not go or if the costs of performing it do not outweigh this deterrence benefit. Given the potentially paralyzing effect it would have on the global economy if air travel were covered in a blanket of fear of flying, it's hard to argue that "decentralizing" this risk to other targets is a bad idea.


The problem with heavily focusing on Arabs while paying less attention to other threats is that Arab Islamist terrorists aren't the only problem aviation security needs to deal with.

Focusing most of the security effort on Arabs is a good way to fight the war of 19 years ago, but it leaves the air travel system vulnerable to upstart terrorist movements that see the lack of universal security as an exploitable vulnerability.

For example, there's nothing to say that America's right wing terrorist groups won't decide to switch from shootings and vehicle ramming attacks to attacks on air travel. The TSA ought to be prepared for this, or any other, emerging threat.


> You seem to be thinking in terms of security mechanisms either perfectly blocking attacks or being useless.

Ours is an industry with a lot of people "on the spectrum".

https://thesilentwaveblog.wordpress.com/2017/03/08/aspergers...


You’re only considering one side of the costs. Obfuscation mechanisms also impose a cost on your legitimate users. There are lots of reasons why you want your users to actually buy in to using your security controls, and annoying controls with highly questionable effectiveness are the best way to kill that buy-in. Users will only tolerate so much burden from user-facing controls, so you want to make sure all of the controls you impose upon them are actually useful.

The other thing that’s harmful is relying on something to provide security, when it actually can’t. That’s actually going to have a negative impact on your threat model. People will say (they’re even saying it in this thread) that their port knocking or non-standard port usage has cut out the port scanning noise in their logs. But who cares? A properly secured ssh port isn’t going to be cracked by an automated scanning tool. But a poorly secured hidden one will be easily found and cracked by any motivated attacker. You have to implement the proper control anyway, and the obfuscation one ends up providing no benefit while simply annoying your users.

Security by obscurity is dumb; it doesn’t provide any benefit. Security in depth doesn’t mean multiple layers of controls that don’t work add up to one that does. Obscurity is just a way of spending your scarce resources, and your scarce command of your users’ attention, on controls that don’t work. So in reality, it always comes at the opportunity cost of controls that actually do.


> but it raises their costs.

I would ask by how much.

Having to perform source audits on code with obfuscated variable names added almost no time to the task.

Again, these methods work against not-so-determined attackers. If you as a defender have limited resources, where would you choose to spend it--on defending against unskilled attackers, or attackers that are more likely to cause you damage?

>but when the costs to those actors exceed the loss you may experience.

There are several problems with this logic. First, it kind of presumes that there is a symmetry in the costs for the attacker and defender. Wise defenders will use methods that have high leverage. Also, the attacker doesn't care at all about your costs. They care about what they can get from you--whether it is access to something that you aren't thinking of, or your crown jewels.

Encrypting databases is sometimes required by compliance, but is no defense against a good attack.


It's still bad security.

Sure, it increases costs for a certain subset of attackers. Instead of sending easily found and trained young Arab men, they have to put more effort into recruitment. However, in return for that, they get far reduced scrutiny.

Therein lies the problem. It is the real-world equivalent of dropping all packets from a country instead of properly analyzing the packets. You'll stop the low-cost automated garbage attacks, but you won't stop a dedicated attacker, even if the attacker is in that country.


> Assume for every code obfuscator there's a deobfuscator or at least someone as clever as you out there.

There is still some information lost in the process:

- "let eigenvector_coefficient = 23" => "let x = 23"

A de-obfuscator isn't going to be able to recover the valuable information contained in the original name. Will it stop a determined attacker? Maybe not, but it would surely slow them down as they now need to spend an order of magnitude longer trying to understand what the code is doing.
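
As a concrete (if toy) illustration, here's roughly what that renaming pass looks like, sketched with Python's ast module. The sample source and the "v0, v1, ..." naming scheme are just my choices; real obfuscators also have to handle scoping, imports, and attributes:

    import ast

    # Toy identifier-renaming obfuscator (assumes Python 3.9+ for ast.unparse).
    class Renamer(ast.NodeTransformer):
        def __init__(self):
            self.mapping = {}

        def visit_Name(self, node):
            # Map every identifier to a meaningless one, consistently.
            new_id = self.mapping.setdefault(node.id, "v%d" % len(self.mapping))
            return ast.copy_location(ast.Name(id=new_id, ctx=node.ctx), node)

    source = "eigenvector_coefficient = 23\nscaled = eigenvector_coefficient * 2"
    print(ast.unparse(Renamer().visit(ast.parse(source))))
    # v0 = 23
    # v1 = v0 * 2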

> Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.

Believe it or not, nuisances are enough to stop some people. A lot of would-be attackers are just cruising for low-hanging fruit.

Remember, the goal is to "reduce risk" and not "stop any highly skilled targeted/tailored attack". Because let's face it, even if you are the greatest crypto wizard in the world, you will fall victim to a highly sophisticated attack tailored specifically to you.


>Believe it or not, nuisances are enough to stop some people. A lot of would-be attackers are just cruising for low-hanging fruit.

It is not "some people" that I worry about. I worry about attackers with a certain level of skill.

As I noted elsewhere in the thread, I have audited obfuscated code, and the obfuscation is only a speed bump. I can only presume that attackers are smarter than I am, so obfuscation is effectively a non-issue. And it is not an order of magnitude. This is another example of developers thinking that this form of obscurity has real value. Reviewing the code will tell you whether eigenvector_coefficient is really what it claims to be, or whether it morphed into something the developer didn't originally intend.

Also keep in mind that code reviews approach code from a totally different angle than a developer would, either while developing or during a code walkthrough.


It might make sense in some contexts, but code obfuscation is a great example of where software engineers think it provides security where it provides none.

Developers often have some idealized notion that an attacker is going to need to piece their program logic back together and try to decode the purpose of each obfuscated variable in order to find a hardcoded password/value.

In reality an attacker is just going to dump strings and try them all, or simply set a breakpoint just before the important syscall and let your program do the work. Code obfuscation provides little to no value against these common methods, yet we cannot resist the urge to list it as a bullet point in security meetings, leading to a false sense of security.
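
The "dump strings" step really is about ten lines; here's a rough Python equivalent of the Unix strings(1) tool (the six-byte minimum is an arbitrary choice):

    import re
    import sys

    # Print every run of 6+ printable ASCII bytes in a binary. Hardcoded
    # passwords and URLs tend to survive obfuscation and show up here.
    def dump_strings(path, min_len=6):
        data = open(path, "rb").read()
        for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
            print(m.group().decode("ascii"))

    if __name__ == "__main__":
        dump_strings(sys.argv[1])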


Exactly. If you're running crypto and think getting rid of variable names is going to stop people: it's not. Any off-the-shelf algorithm is usually easy to recognize for an accomplished reverse engineer with a basic idea of what they're looking for.


Honestly, the modern JavaScript toolchain is better at giving reverse engineers a headache than 80% of binary obfuscators.


As someone who is not very good at JavaScript reverse engineering, I would tend to agree that minifiers are pretty annoying.


So the first thing that I do with one of those is to parse it and convert it to s-expressions. Problem solved.


>simply set a breakpoint

I knew nothing about this topic in general, but elsewhere in this thread there was a link to a blog post about obfuscation methods used in a piece of commercial software. One item was a function that detects a breakpoint, obfuscates its boolean return value so you can't tell if it did, and makes the program hang when it does. Pretty neat.

I think your (and my) ignorance of such methods is evidence that they probably are reasonably effective, even though when explained, they're not quantum physics.


Let me give you an example. At a previous job as a devops engineer, one of my predecessors frequently used these techniques, minus the encrypted database, but I'm sure he would have done that too if he'd known how. There was some buggy internal app they needed new features added to, and the person who wrote it thought he was clever and obfuscated the code. It took me a whopping 30 minutes to churn through his 'clever' obfuscation scheme, and the randomized variables were just a nuisance. Honestly, his best obfuscation technique was his horrible code that made no sense.

Even OP's advice about running services on non-standard ports isn't sound. Who doesn't run a service scan? Even sites like Shodan do service discovery for you. I'm going to find whatever port you're running ssh on if you're running it.


> I'm going to find whatever port you're running ssh on if you're running it.

I still think it's a good idea. With SSH on port 22, ten thousand bots plus an attacker try to hammer it (so says fail2ban). With SSH on port 9278, zero bots plus an attacker try to hammer it. By throwing away the 99.99% of the chaff, you can see the remaining wheat you care about.

Changing SSH ports isn't about saying "yep, we fixed it!" and calling it a day. It's about decreasing the amount of stuff you have to deal with, which is quite useful. It's something you can do in addition to everything else that gives a decent bang for its buck. No, it doesn't keep you out, but it does keep out those thousands of bots crawling around looking for an open 22 to pester.


But new people in the industry shouldn't think that the things recommended in the article should be used as a primary defense or are accepted industry practices. Moving SSH to a new port to reduce false security alerts is one thing; having people read that article and walk away thinking this is how we do things is another. We don't.


I didn't take that away from the article at all. It said:

> So let’s talk about security by obscurity. It’s a bad idea to use it as a single layer of defense. If the attacker passes it, there is nothing else to protect you. But it actually would be good to use it as an “additional” layer of defense. Because it has a low implementation cost and it usually works well.

I think it's good to do those things in addition to the other stuff. Obscurity isn't sufficient by itself, but is another layer of defense.


In addition to the stuff you should really be doing? That stuff is hard enough for beginners without confusing them with speculation like this that goes against best practices and common sense, especially without clearly explaining the pitfalls and real dangers of each of these hypothetical scenarios. Besides, if you're already using industry-accepted solutions to security problems and someone manages to gain unauthorized access anyway, don't expect any of this amateur crap to offer any real protection at that point.


I don’t feel as though you came away with the real intent of the article, which didn’t make the arguments you’re shouting down.


> I'm going to find whatever port you're running ssh on if you're running it.

Not if you have to port knock before the ssh port is open to new connections.


Why not? I'll run my automated port knocker.


Huh? How would that work? You have no idea what my port knocking scheme is.

For all you know, you have to knock ports 22, 46, 1776, and 8998 to the timing of "shave and a haircut", switching between UDP and ICMP along the way... Good luck; the entropy you have to overcome is astronomical.
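
Even the simplest plain-TCP case takes deliberate, per-target tooling. A sketch of a knock client, with an invented sequence, before you even mix in UDP, ICMP, and rhythm:

    import socket
    import time

    SEQUENCE = [22, 46, 1776, 8998]  # hypothetical secret knock

    def knock(host):
        for port in SEQUENCE:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            try:
                s.connect((host, port))  # the SYN is the knock; failure is fine
            except OSError:
                pass
            finally:
                s.close()
            time.sleep(0.2)  # daemons like knockd allow some timing slack

    knock("203.0.113.7")  # afterwards the firewall briefly opens the real port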


I would think that blocking the IPs of incessant knockers would be easy to implement.


They use proxies! So many proxies.


>Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.

Sure it will. Imagine that your old, unpatched WordPress admin is at /random-gobbledygook instead of /wp-admin. An attacker would have to hit random alphanumeric directories of your webserver over and over again, hoping to stumble across a specific thing they can attack. This is completely impractical, unless they're somehow clued in that the URL exists.

It's really about making life difficult for an attacker, so much so that they will simply give up, or find an easier target. That can be achieved by throwing up a series of difficult/obscure barriers, each of which makes it less likely you'll be trivially penetrated.


I ran a world-writable off-the-shelf wiki for years. I trivially tweaked the edit URL, visible on every page. But that was enough to break automated spam tooling defaults, so the spamming human might get to see a note pointing out that robots.txt was blocking indexing, so there was really no reason to waste both our time. The dominant threat wasn't the spammer, but their dumb automation.


One of the first widespread security vulnerabilities I had to deal with was this one:

https://www.giac.org/paper/gcih/115/iis-unicode-exploit/1011...

You could basically encode DOS commands in the URL bar of a site running IIS and they would run remotely.

The automated attack basically replaced the index.html pages. But if you didn’t use the default pages, it didn’t have any effect.


> Assume for every code obfuscator there's a deobfuscator or at least someone as clever as you out there.

Then it filters out people who are not using a deobfuscator or are less clever than I.

> Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.

Then it will stop incompetent pen testers.

I don't see how your comment refutes the point made. The point is not that it makes your likelihood of attack zero; it just reduces the likelihood by adding more roadblocks.


> competent pen tester

So it does eliminate incompetent ones? That's kind of the point of the article.


If you have any bit of real security, incompetent attackers won't pose a risk anyway.

So, what's the gain?


> - Randomizing variable names is just a nuisance, it won't stop any competent pen tester or attacker.

But it will stop incompetent attackers - of which there are many. In fact, they are the vast majority.

None of those 'obscurity' techniques will stop a targeted attack. That's not their function. But each of them raises the bar. The more hoops, the better.


Isn't there some model where the keys used to decrypt the database act something like one-time codes, and you have to use particular credentials to access the key server? The attacker would then need to be able to stay in the network to actually access the data -- they couldn't just download the entire database and crack it offline. I don't know how that's actually implemented, but I'm curious.

I also wonder how many people put obvious attacker trip-mines in their systems -- like a fake button labeled "copy image of database to disk" that actual internal employees are told to never click, or even a Confluence page about "how to download the database" that is actually a fake entry meant to trip up an attacker who gets access to your Confluence pages as well as your database. They click that button and the admins get alerted...
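
The trip-mine half is easy to sketch, at least; something like this, where Flask and every name below are illustrative choices:

    from flask import Flask, request

    app = Flask(__name__)

    def alert_admins(message):
        print("ALERT:", message)  # stand-in for real mail/pager integration

    # Canary endpoint: linked from a fake internal doc, never used by staff.
    @app.route("/admin/download-database-image")
    def canary():
        alert_admins("canary tripped from %s" % request.remote_addr)
        return "Export queued.", 202  # look plausible, reveal nothing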


Overloading names is a good code obfuscation strategy, but it is tricky and best done with code, for obvious reasons (unless you like your regular code to present the challenges of BrainF). For instance, depending on your language, you may be able to have a variable, a function, an object, a pointer, a data structure, an index variable, etc. all called just "a".

Making sense out of code obfuscated this way is really hard for humans, but it will compile or interpret just fine so long as your obfuscator obeys the rules of your language. (We started on this at one of my early startups nearly 20 years ago, but didn't get funded soon enough for protecting the IP in our unique JS to matter. It was unique enough that we actually applied for a patent on part of it: drawing a 16-trace live strip chart of data from network sources at better than 4-10 Hz per channel was really hard with the browsers and computers of 2002!)
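
In Python, for instance, something like this runs fine (a deliberately contrived sketch):

    # The name "a" is simultaneously a class, a method, that method's
    # self argument, and a loop variable. Legal, and miserable to read.
    class a:
        def a(a, a_=3):
            return [a_ for a in range(a_)][-1]

    print(a().a())  # prints 3, but you have to trace the scopes to see why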


The database key should be generated per encrypted database and then stored using something like the OSX keychain. The OS enforces that only a given application can retrieve that key (via application code signing).
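
A sketch of that flow using the cross-platform keyring package, which backs onto the macOS Keychain among others; the service and key names here are made up:

    import keyring

    # At install time: generate and store the per-database key once.
    keyring.set_password("myapp-db", "encryption-key", "hex-key-material")

    # At startup: fetch it back. On macOS the Keychain can gate access
    # on the signed application identity, as described above.
    key = keyring.get_password("myapp-db", "encryption-key")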


Internally, we phrase it as "Make the system objectively hard, then don't tell all the details". Wasting an attacker's time is a fine goal.


It's a lot like bike locks.

Yes, most people can grab some bolt cutters, snip, and bike off. Yet so many bikes remain unstolen with extremely weak locks.

The vast majority of attacks are crimes of opportunity. Hackers generally aren't trying to target a single company or computer for a botnet; they are looking to get as many as possible. Almost any amount of effort above and beyond the typical will cause them to jump past you as a target.

Back to the bike lock analogy. Again, most locks can be bypassed, but getting one that requires an angle grinder will almost certainly ensure that your bike won't be stolen (why steal that bike when there are 20 with simple wire locks?). Add 2 locks and you've got a bike that will almost never be nicked.

https://www.youtube.com/watch?v=oPDHPpnXPv8

This video can teach you a LOT about software security.


> Wasting an attacker's time is a fine goal.

This. Putting a tarpit on port 22 isn't going to stop an attacker, but it will slow the ssh scans down for everyone.

https://github.com/skeeto/endlessh
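
The core of the trick fits in a few lines; a single-connection Python sketch (endlessh is the hardened, multiplexing version):

    import random
    import socket
    import time

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))  # 22 in production; 2222 avoids needing root
    srv.listen(16)

    while True:
        conn, addr = srv.accept()
        try:
            # RFC 4253 allows arbitrary banner lines before the version
            # string, so an endless banner is protocol-legal. Scanners
            # wait politely, forever. (One client at a time, for brevity.)
            while True:
                conn.send(b"%x\r\n" % random.getrandbits(32))
                time.sleep(10)
        except OSError:
            conn.close()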


Honeypots are fun, but be VERY careful how you deploy them. Ideally they are on a completely separate network on the WAN side of a second firewall. The last thing you want is for someone to find an exploit in your honeypot and use that to gain access to your network.


Security by obscurity is bad. Obscurity alone does not provide much security, especially in a cryptographic setting. It cannot be relied on as your sole protection.

Security and obscurity: if you make something secure, and then obscure information about that system from an attacker, that can increase the security. However, obscurity is often organizationally expensive and very fragile. A key can be rotated, but how something functions is very hard to rotate.


Maybe the test should be: "is my system considered to be secure even without any obscurity?" If the answer is yes, then add obscurity.

For instance, the port 22 example. Suppose you have a bastion host. SSHD running on port 22, root password disabled, passwords disabled (only SSH keys), no other services running, all other ports filtered/closed. It should be fairly secure, even if exposed to the internet, right?

Now you can change the port. Change the SSH banner and hide the version. Add some port knocking. And so on. None of these measures would work by itself, but they will discourage non-targeted attackers.


They are also a non-trivial amount of work, both to build and to maintain.

Someday, someone will have problems connecting and waste half a day debugging it before they realize what is up.


For a very specific example, look at the classified ciphers used by the US Gov't TLAs. Why are they classified? Because if they are harder to get info about -- literally obscured -- then it's an additional layer of defense.

Or troop movements during war... Sure, the locations can be figured out, but not broadcasting them makes more work for the enemy and is thus a bit more secure.

Obscurity is absolutely a key piece of security, because it adds the complexity of discovery.


This is true, but I think it's not really a binary classification; there is a spectrum from useless and trivial obscurity (base64 encoding some "secret") to actually useful obscurity. After all, you can call password authentication "security through obscurity", since you only need to know the correct sequence of characters and your security relies on that sequence remaining obscure.


> Many serious real-world scenarios do use obscurity as an additional layer

It works for the military, for spy agencies, and governments.

If obscurity didn't have any benefit, then the military's latest weapons wouldn't be tested in the Nevada desert, or some remote island; they'd be tested in Illinois, or off the coast of Long Island.


Most programming and IT sayings are grossly misinterpreted. My personal favorite is "premature optimization is the root of all evil," which originally came with a ton of context but today is often misinterpreted as "never worry about performance" resulting in a lot of slow bloated software.


Changing a port adds at least one bit of entropy. Not being forced to use "admin" as a username adds a whole bunch more, but at least one bit. Not being forced to use https://url/admin also adds another bunch, but at least one bit.

Of course, if any of these things are known the entropy drops to zero... Just like a private ssh key that gets pwnd.

All too often I see tickets on open source projects asking for changes to allow better obfuscation, which are then denied using the mantra "obscurity is not security".

They all add bits of entropy to a security and/or threat model that maintainers ignore.
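
Rough numbers, treating each measure as log2 of the space an attacker has to cover; the pool sizes are my own guesses:

    import math

    print(math.log2(65535))     # non-default SSH port: ~16 bits to sweep
    print(math.log2(10_000))    # username from ~10k plausible choices: ~13 bits
    print(math.log2(62 ** 12))  # random 12-char admin path: ~71 bits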


All encryption is "security through obscurity". The parameter space is very large. The key is somewhere in it. You have access to the whole space, but no clue as to where the key is. Good luck finding the key.

> Instead it was originally meant as "if your only security is obscurity, it's bad".

Since all security is essentially "through obscurity" somehow, I would simply reframe that into the onion model. Good security is like an onion, it has many layers. When you only have one layer, that's bad security.


I agree with the principle, but I disagree with the article's example of changing the SSH port as an example of obscurity. Lots of people set up SSH servers on multiple ports, especially in the case of relay servers that provide access to multiple machines through one IPv4 address.

A better example of security by obscurity would be to, for example:

* Flip all the SSH bits, or XOR the stream with some long key (sketched below).

* Encapsulate SSH inside another protocol, such as websockets over HTTP port 80, or embedded inside what look to an outsider as cat pictures being sent over HTTP.

* SSH over TCP over Skype video.

Incidentally, any of these methods work well for confusing China's firewall and keeping the SSH connection alive, and would probably confuse hackers as well for a little while. They could all be implemented in a router box that doesn't affect your actual deployment.
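
The first of those is almost trivial to build as a router-box shim. A sketch with an invented key (the client side needs the mirror image of this):

    import socket
    import threading

    KEY = b"not-a-real-secret"  # any byte string; shared by both endpoints

    def xor_pipe(src, dst):
        # Repeating-key XOR over the stream; offset keeps the keystream
        # aligned across successive recv() calls. Zero cryptographic value,
        # but signature-based scanners no longer see SSH.
        offset = 0
        while True:
            data = src.recv(4096)
            if not data:
                dst.close()
                return
            dst.sendall(bytes(b ^ KEY[(offset + i) % len(KEY)]
                              for i, b in enumerate(data)))
            offset += len(data)

    def serve(listen_port=2222, target=("127.0.0.1", 22)):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", listen_port))
        srv.listen(5)
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection(target)
            for pair in ((client, upstream), (upstream, client)):
                threading.Thread(target=xor_pipe, args=pair, daemon=True).start()

    serve()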


Hell yes, couldn't agree more.

This last year, I found out about knockd, and if that isn't some awesome shit, I dunno what is. Yet there are plenty of articles saying, incorrectly, how it's awful. It is simply another layer of security on top of everything else you have. Like you said, security by obscurity is more about making it fucking slow, irritating, tedious, and without any sense of reward. "Aha! After only a week, I've figured out you're port knocking! Oh shit... wait, you still totally have the server properly locked down. FML." Because after each "obscure" layer there is a "real" layer of security, and hopefully all those real layers buy you the time to detect and prevent the threat.


Also don't forget that relative effort matters too. Consider "The Club" protection for cars - in a lot, the one with The Club is chosen last to break into just due to its relative difficulty. (Weighted against the potential upside, obviously.)

The port knocking itself may actually be the strongest link in the chain, despite being one of obscurity, if the population of targets in your "value pool" is large enough that you always sit below a sufficient number of others without knocking enabled, since attackers will bounce to those easier targets once they realize yours requires knocking.


> Instead it was originally meant as "if your only security is obscurity, it's bad".

no, not really. what it means is: every important system has attackers trying to exploit it. finding an exploit is a series of hunches while probing the system as a black box, and you need just one; meanwhile a defender has to be methodical enough to find them all.

given the differences, obscurity removes the defender's ability to systematically analyze the system, while on the other hand for an attacker it remains as much of a black box as it was before.


It is obviously a misinterpretation of the original idea behind "security by obscurity is bad". Same goes for "goto considered harmful", which is not always true.

Although, Kerckhoffs's principle is a good way of describing how a secure cryptosystem should behave. This is what people should have in mind.

Obscuring will just add some delay, as you state, but that delay might be irrelevant in many situations.


A simple example would be separating usernames and passwords, having an outer and inner password (think TrueCrypt/VeraCrypt), or even personal quirks. Again, it depends on how much the attacker knows, but even today you can still do the classic "hash my master key with the site name" for a password that you wouldn't store anywhere.
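
That classic is nearly a one-liner with hashlib; the separator, encoding, and truncation below are arbitrary choices, and a real scheme would prefer a slow KDF like scrypt:

    import base64
    import hashlib

    def site_password(master_key, site, length=20):
        # Derive a per-site password; nothing is stored anywhere.
        digest = hashlib.sha256(("%s:%s" % (master_key, site)).encode()).digest()
        return base64.urlsafe_b64encode(digest).decode()[:length]

    print(site_password("correct horse battery staple", "news.ycombinator.com"))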


That's not the only problem with obscurity. It not only obscures flaws from attackers, it also obscures them from you and makes a system hard to maintain. In any complex system, ultimately there will develop chinks in your armor that owe their existence to obscurity hacks that were thought clever at the time.


I know someone who would rather store passwords/API keys in the database encoded in a way that is not clear text, but not encrypted or hashed either, arguing that it's overkill to encrypt.

Obscurity instead of Security is bad too.
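
For anyone wondering what "encoded but not encrypted" looks like in practice (the key below is invented):

    import base64

    secret = "hunter2-api-key"                           # hypothetical key
    stored = base64.b64encode(secret.encode()).decode()  # what lands in the DB
    print(stored)                                        # looks "scrambled"
    print(base64.b64decode(stored).decode())             # recovered instantly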


Sure, like giving a login page an unexpected URL to foil bots (e.g. hiding the WordPress admin).

If that was the only security it’d be terrible.

But not having 1000s of bots pounding on the door saves a lot of headaches.


A lock that keeps an experienced lock-picker out for a few minutes will keep the layperson out indefinitely... Until they grab the bolt cutters. Everything is relative to context.


I agree.

The only thing I would add is that it also needs to be maintainable - the obscurity should not impede the maintainer's understanding of the implementation.


Sure, security by obscurity slows down bad actors, but in reality not by a significant amount. Often the obscurity you add isn't even where they're looking. You have to go through a certain level of effort to add the obscurity, and that effort is not enough to warrant the insignificant slowdown of the bad actor. You're better off using that effort to improve your real security in other areas. In addition, you're adding complexity that you have to maintain.


It's fine as an additional layer only when the primary layers do not rely on obscurity.

I've seen too many instances where obscurity is used to justify weak primary layers (i.e. "it's fine we're using this single-word shared password since we have all these other layers"). It can provide a false sense of security, since it looks like a security layer when in reality it often turns out to be a minor inconvenience to an experienced attacker.


Also, related: the kind of traffic needed to probe in a reasonable amount of time can easily be spotted.


Well put. Using it as an additional layer isn't bad.


Agree, it's a good/cheap first step!



