This article is all HTTP-daemon related, which is a very small subset of server config...
I don't want to devolve into all my own tips and then have arguments about fail2ban, but I have one tweak to the TRACE/TRACK note: there are silly things people can do with any request method other than POST/GET (there are silly things people can do with those too, but that's primarily app security).
I disallow OPTIONS, HEAD, TRACE... everything, with this sort of Apache config:
<LimitExcept POST GET>
Require valid-user
</LimitExcept>
The biggest place this can cause issues is with load balancers that use a HEAD request to check whether a server is up.
This is probably over-broad. HEAD is also used by browsers to determine whether or not to re-request cached content; disabling it will still allow pages to load properly, but will waste bandwidth and slow down page loads as browsers re-request unchanged content.
I've found that some of the URL security people set up only applies to GET/POST; they're unaware of these other methods (TRACK? WTF?). As such, it's nice to have them off by default and turn them on when needed. I saw a good note some time ago about people essentially stat'ing files they shouldn't have been able to see at all via OPTIONS or HEAD, because the files were secured against GET and POST only.
There is a nice discussion going on below about whether you need these for AJAX, conditional GETs, etc. If you need this enabled for AJAX, consider enabling it only for the URL or directory where your AJAX interactions occur. I run a large WordPress site and haven't had issues with any of this.
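If you go that route, it's the same block scoped to a Location; a rough sketch (the /ajax path and the extra methods are just placeholders for whatever your client-side code actually uses):

    <Location "/ajax">
        # allow the extra methods this path needs; everything else still requires auth
        <LimitExcept GET POST PUT DELETE>
            Require valid-user
        </LimitExcept>
    </Location>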
Why would a browser use HEAD instead of conditional GET to re-request cached content? It would require one more round-trip if a refresh is actually required.
HEAD just returns the headers, not the content. So, it is a faster way to see if content has been updated. If it has, then the client can request the content with a GET.
Right, the question was why not use a conditional GET, which either returns the content if the thing has been modified, or a 304 Not Modified and no content if it hasn't; this accomplishes the same thing as HEAD with only one request. Above, though, I wasn't sure why one or the other wasn't always used...
I looked it up, though. It looks like you have to have an ETag to do a conditional GET, so browsers use HEAD if the original response (the cached one) didn't include one.
Either Last-Modified or ETag is sufficient. Most servers will add both, though.
Edit: The difference is that Last-Modified allows clients to use a heuristic to determine if the response should be cached for a certain duration (unless explicit Cache-Control or Expires headers are used). The heuristic isn't specified, but a 10% fraction of Date - Last-Modified is suggested, and in fact, that is what e.g. Internet Explorer does.
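For reference, a conditional GET exchange looks roughly like this (the header values here are made up):

    GET /style.css HTTP/1.1
    Host: example.com
    If-Modified-Since: Tue, 05 Mar 2013 10:00:00 GMT
    If-None-Match: "abc123"

    HTTP/1.1 304 Not Modified

If the resource has changed, the server instead answers 200 with the full body, so there is never a second round trip.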
I don't think browsers do use HEAD to determine whether or not to re-request cached content; they make a conditional GET, i.e. a GET with an If-Modified-Since header (or one of its relatives).
Are there some situations where browsers use HEAD instead of conditional GET? Or is my understanding wrong?
Still, HEAD seems like it should generally be fairly safe, so not something I'd leap to block.
> Another super simple, but often overlooked adjustment to make is to prevent the server from broadcasting too much information about itself. Whilst attackers may be able to source the information in other ways, the harder we make it the more likely potential attackers are to give up and move on to a softer target. It is similar to introducing yourself to someone and giving them specific details about yourself such as "I rarely lock the back window when I pop into town".
Newsflash: exploits will hit the vulnerability anyway, without asking for your name and version.
Could somebody explain to me why these security-by-obscurity measures are still popular, especially in an age when running bots against every public-facing piece of equipment is so cheap?
...gives me a lot of clues for adjusting my exploit payload.
In practice it may be an obscure attack vector - who knows. But more information always helps the attacker.
If there is a database of versions somewhere (and there is), I don't want my specific versions in it. Who knows what kind of exploit will be on the HN front page tomorrow.
"Security by Obscurity" is a concept used by bad cryptographic ciphers. See Kerckhoff's Principle. I don't see how this applies to server information disclosure.
I suspect there's some small measure of protection still - if there's a brand new exploit available (say, the current WordPress W3 Total Cache and WP Super Cache problems), I'd rather not have my sites rise to the top of the list of addresses for the botnets to start probing - because I'd left the version information publicly available (or made those sites trivially findable via a googledork).
It won't protect me, but it might buy me a little extra time to respond...
I did not want to say that disabling version information is a security measure. It is not. Sorry if I was not clear on that.
I just wanted to say it may give you some time. That's debatable. You and others disagree. I remember an exploit for ProFTPd that required different payloads for different builds (OS, Version, Architecture).
Having a database of version data and using it would maximize the attacker's chances of owning the machine.
Before scanning the whole internet, why not first try all the machines where the exploit is most likely to work, and only then try all the other machines?
Another issue I remember was PLESK¹. A few versions ago they added a "Powered-By Plesklin" header. There were a few PLESK exploits in the wild, and an attacker with a database of versions and headers for services could attack those machines very quickly.
That is a reason for me to disable versions. It is totally unrelated to other security measures. I just don't want to give the attacker any additional clue. The more time he needs to spend probing my server, the more time I have to notice him.
I'm not sure how easy it is to order a botnet to execute an exploit against the whole internet. Other comments suggest it is no problem anymore. If that is indeed the case, it probably doesn't matter much whether you disable versions or not. However, I think I've outlined my reasoning.
I suppose even a hypothetical expensive exploit (requiring ten minutes per attempt, for example) would not deter someone with a large botnet from trying it against every server. So while hiding version info might save you from some attackers in some scenarios, it doesn't provide real protection.
There are some benefits to removing things like banner headers, especially where there is limited effort required to do so.
If the version headers are removed, a manual attacker has to try harder (make more requests) to identify whether the server software is a vulnerable version or not. This increases the opportunities for detective software controls (e.g. IDS) to spot the attacker and potentially allow for defensive actions (e.g. IP blocking).
Also, if a server version banner is present and a vulnerability is discovered in that version, it's much more efficient for attackers to hit only known-vulnerable versions, and those can be mined from things like Shodan or the Internet Census 2012 data.
A smart attacker would probably want to hit only known-vulnerable targets to maximise the time before their attack is noticed and analysed by defensive organisations. If you hit all servers, that'll include all the honeypots out there, making it more likely that your attack gets noticed and new signatures are pushed to alert on or block it.
While that's true, it only takes 5 seconds to change the config (there's a sketch of it below). Plus, some security certifications (e.g. PCI) check for those signatures during their automated scans.
My problem is when people think hiding their signatures makes their systems more secure, rather than equally secure but with less garbage broadcast.
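For reference, the whole change in Apache is a couple of directives; a minimal sketch (mod_headers is only needed for the last line):

    # report just "Apache" in the Server header, with no version or module list
    ServerTokens Prod
    # don't append version details to error pages and directory listings
    ServerSignature Off
    # optionally strip X-Powered-By headers added by app servers (requires mod_headers)
    Header unset X-Powered-By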
The title is a bit misleading: it mostly applies to web daemon configuration rather than to the servers themselves, and while these are nice additions to a default configuration, they are extremely narrow and not nearly enough when it comes to securing a web (or any other) server. A novice reader could get the impression that this is all there is to it.
I think I do a reasonable job of making that clear with the introductory paragraph, but yes, it is something that cannot be overstated. These are just three little things that do not constitute a complete security policy.
If you're using SSH keys exclusively, what does fail2ban really buy you? The HTTP monitoring sounds like it might be useful, but also might be an easy way to reject the Googlebot and de-list your site.
Fail2ban can be used to rate-limit nearly all services that have the potential for abuse. I have it set up to track connection and message frequency, bans on message content (not following protocol, overly large, malformed, etc) and so forth.
Having a system like F2B is nice because it compartmentalizes abuse handling and you can set up rules in one place for all your services, both user-facing and not. Since the rules/actions are user defined, anything is possible -- I've had actions that send alerts to Twitter, a system that distributes bans to hundreds of servers, and centralized logging that gives very good insight into how users are poking around.
Fail2ban monitors more than just SSH. I use it against HTTP auth, suspicious HTTP bots, and all sorts of things (I even have fail2ban watching IRC connections on one box).
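To give a flavour, extra jails in jail.local are only a few lines each; a sketch (jail and filter names vary between fail2ban versions, so check the ones bundled with yours):

    [sshd]
    enabled  = true
    maxretry = 5
    bantime  = 3600

    [apache-auth]
    enabled  = true
    logpath  = /var/log/apache2/error.log
    maxretry = 5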
Keys are additional credentials, so they don't add any security by themselves. You have to remove the password from the account (set an unusable password).
However, there are rare cases where you need to access the server from some remote location, when you don't have your SSH private key at hand, and the only credentials you can use are the ones you keep in your head.
Obviously, the most important requirement is a strong password, but protecting against brute-force won't hurt.
> Keys are additional credentials, so they don't add any security by themselves. You have to remove the password from the account (set an unusable password).
Keys add security if you turn off password-based logins (this is done in sshd_config; you don't need to mess about with the user's password).
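Roughly, in /etc/ssh/sshd_config (a sketch; reload sshd afterwards, and keep an existing session open while you test):

    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PubkeyAuthentication yes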
> However, there are rare cases where you need to access the server from some remote location, when you don't have your SSH private key at hand, and the only credentials you can use are the ones you keep in your head.
> Obviously, the most important requirement is a strong password, but protecting against brute-force won't hurt.
Your point about not having private keys to hand is a very valid one, and it's why I opt for fail2ban SSH rules alongside password logins on my own personal servers. But the strength of keys compared to passwords does make key-based authentication a good measure against brute-force attacks (purely in terms of the time it would take to crack a key).
Regarding the "don't have the keys" issue, I solve this with an encrypted TrueCrypt volume in Dropbox. Dropbox has 2FA set up on it, so getting into my servers requires 1) my dropbox password, 2) my phone, 3) the volume passphrase, and finally 4) the key passphrase.
As long as I have my phone on me, I can get into my servers, but am reasonably confident that a Dropbox compromise or phone loss would not result in my server credentials being compromised.
An automated network installer (e.g. Cobbler) installs the OS, which installs a configuration management system (e.g. Puppet, Chef, or Ansible), which sets up the server appropriately.
Done correctly, someone logging into a non-development server should be an alertable "red flag".
Even for a development server you should use veewee, Vagrant, BoxGrinder, etc. to produce something consistent and repeatable.
"Editing a file in /etc directly 'by hand' should be an obscure art done to teach internals or to scare children on halloween." -@yesthattom
Automating infrastructure and treating it like code is a similar shift in mindset to embracing test driven development for the first time.
It appears daunting, but once you get over the hump you can't imagine how you ever survived without it.
If you have a mythical quiet Friday afternoon, install Vagrant and try to replicate your manual setup steps for a new server, then share it with your development team (there's a minimal sketch of the Vagrant side below).
Even just having the steps required to set up a development environment represented in re-usable versioned code is worthwhile.
The next time a new hire starts, that afternoon repays itself when they have a fully working dev environment ready in less than an hour.
Going from that to doing this stuff in production is a lot of work, but you get similar payoffs at every step, as long as you're willing to invest a little time.
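To show how small the starting point is, a minimal Vagrantfile sketch (the box name and setup.sh script are placeholders for whatever you standardise on):

    # Vagrantfile
    Vagrant.configure("2") do |config|
      # any base box you standardise on
      config.vm.box = "precise64"
      # your manual setup steps, captured as a shell script
      # (or swap in the puppet/chef provisioners later)
      config.vm.provision "shell", path: "setup.sh"
    end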
You don't have to convince me it is good. In my entire career I have never seen a company that manages even 50% of their servers this way. It has always been a situation of engineering 'being too busy cutting wood to make better saws'.
The company where I work manages >90% of its servers that way. This company is blessed with some extraordinarily bloody-minded sysadmins who made the time to sharpen the saw in the face of mounting piles of wood to cut.
3. Generate checksums of important files:
# rkhunter --propupd
*NOTE: when normal system software updates are installed, some of the files watched by rkhunter may change and generate false warnings, so it needs to be run again after updates to refresh the checksums.
What stops an attacker from running rkhunter --propupd after he/she has installed backdoors in a few of your binaries? I realise what rkhunter does (searches for common backdoors), but I can't see what advantage the --propupd argument adds.
How does an article like this actually get points on HN? It's called "Things I set on new servers", but it should really be called "Things That I Configure in Apache", and the suggestions aren't even anything all that interesting or useful. Do people really even use Apache any more?
Two reasons, I think: one (as a sysadmin once explained it to me) is that there's a certain degree of public good/advertising in supporting an open-source project by announcing in your headers that you use it. Services that aggregate web server market share (Netcraft, etc.) use the Server header to build their stats.
It's also not that hard to fingerprint webservers (though not necessarily their specific versions) without making use of the Server line by testing for other subtle differences in behavior (see, for example, http://82.157.70.109/mirrorbooks/apachesecurity/0596007248/a... ). So on balance, hiding the version makes it hard to single you out for vulnerabilities in specific versions, but hiding the server name altogether doesn't really add much.
That has no impact on security. If an attacker can read your cookies, then it doesn't matter whether your sessions are called PHPSESSID or WETTROUT; they're still readable.