Things I set on new servers (simonholywell.com)
141 points by Treffynnon on April 23, 2013 | 69 comments



This article is all HTTP-daemon related, which is a very small subset of server config...

I don't want to devolve into all my own tips and then have arguments about fail2ban, but I have one tweak to the TRACE/TRACK note: really, there are some silly things people can do with any request method other than POST/GET (there are silly things people can do with those too, but that's primarily app security).

I disallow OPTIONS, HEAD, TRACE... everything, with this sort of an Apache config:

    <LimitExcept POST GET>
        Require valid-user
    </LimitExcept>

The biggest place this can cause issues is with load balancers using a HEAD request to check whether a server is up.


This is probably over-broad. HEAD is also used by browsers to determine whether or not to re-request cached content; disabling it will still allow pages to load properly, but will waste bandwidth and slow down page loads as browsers re-request unchanged content.

OPTIONS is necessary if you want to offer APIs that support CORS: http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
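For illustration, a CORS preflight is just an OPTIONS round trip along the lines of this sketch (hostnames are placeholders); if the preflight fails, the browser never sends the real request:

    OPTIONS /api/things HTTP/1.1
    Host: api.example.com
    Origin: http://app.example.com
    Access-Control-Request-Method: POST

    HTTP/1.1 200 OK
    Access-Control-Allow-Origin: http://app.example.com
    Access-Control-Allow-Methods: GET, POST, OPTIONS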


It is indeed broad; as I said, it is what I do.

I've found that some of the URL security people set up only applies to GET/POST; they're unaware of these other methods (TRACK? WTF?). As such, it's nice to have them off by default and turn them on when needed. I saw a good write-up some time ago of people essentially stat'ing files they shouldn't have been able to see at all via OPTIONS or HEAD, because those files were secured against GET and POST only.

There is a nice discussion going below on whether you need these for AJAX, conditional GETs, etc. If you need this enabled for AJAX, consider enabling it for the URL or directory where your AJAX interactions occur. I run a large WordPress site and haven't had issues with any of this.


Why would a browser use HEAD instead of conditional GET to re-request cached content? It would require one more round-trip if a refresh is actually required.


The best approach is to use GET with an If-Modified-Since header: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14...

This acts like HEAD if the resource has not been modified and like GET if it has.

In practice this seems to be the go-to technique for browsers as well.
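For illustration, a conditional GET can be made with curl like this (the URL and date are placeholders); an unchanged resource comes back as a bodiless 304, and a changed one comes back in full:

    $ curl -si http://example.com/style.css \
        -H 'If-Modified-Since: Tue, 16 Apr 2013 10:00:00 GMT'
    HTTP/1.1 304 Not Modified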


I've definitely observed both behaviors in Firefox, and to be honest, I'm not entirely sure why. I'll see what I can dig up...


HEAD just returns the headers, not the content. So, it is a faster way to see if content has been updated. If it has, then the client can request the content with a GET.


Right, the question was why not use a conditional GET, which either returns content if the thing has been modified, or a 304 Not Modified and no content if it hasn't -- this accomplishes the same thing as HEAD with only one request. Above, though, I wasn't sure why one or the other wasn't always used...

I looked it up, though. It looks like you have to have an ETag to do a conditional GET, so browsers use HEAD if the original response (the cached one) didn't include one.


> I looked it up, though. It looks like you have to have an ETag to do a conditional GET

Nope, not the case. May I ask where you read that? It differs from my reading of RFC2616 and my experience.

If the server gave you an ETag though you're supposed to include it in future cache-conditional requests. Section 13.3.4 (http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13...) describes the interaction between ETags and modification dates.


A blog post that's probably not worth linking to, since apparently it was wrong.

Am I correctly inferring that you need at least a Last-Modified in the response to do a cache-conditional request, though?


Either Last-Modified or ETag is sufficient. Most servers will add both, though.

Edit: The difference is that Last-Modified allows clients to use a heuristic to determine if the response should be cached for a certain duration (unless explicit Cache-Control or Expires headers are used). The heuristic isn't specified, but a 10% fraction of Date - Last-Modified is suggested, and in fact, that is what e.g. Internet Explorer does.
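For example, under that heuristic a response with these (made-up) headers would be treated as fresh for roughly one day:

    Date:          Tue, 23 Apr 2013 12:00:00 GMT
    Last-Modified: Sat, 13 Apr 2013 12:00:00 GMT

    (last changed 10 days ago -> the 10% heuristic gives ~1 day of freshness)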


I don't think browsers do use HEAD to determine whether or not to re-request cached content; they make a conditional GET, ie a GET with an If-Modified-Since header (or one of its relatives).

Are there some situations where browsers use HEAD instead of conditional GET? Or is my understanding wrong?

Still, HEAD seems like it should generally be fairly safe, so not something I'd leap to block.


Why are you disabling OPTIONS and HEAD?


Seriously. I've used AJAX libraries that won't POST data if the OPTIONS call fails.


Won't that cause issues when going through a typical paranoid corporate web proxy?


Why disable HEAD? HEAD has many great uses, including figuring out what the web server is picking as a Content-Type.


> Hide your versions

> Another super simple, but often overlooked adjustment to make is to prevent the server from broadcasting too much information about itself. Whilst attackers may be able to source the information in other ways, the harder we make it, the more likely potential attackers are to give up and move on to a softer target. It is similar to introducing yourself to someone and giving them specific details about yourself such as "I rarely lock the back window when I pop into town".

newsflash: exploits will hit the vulnerability anyway, without asking about your name and version.

could somebody explain to me why these security by obscurity measures are still popular? especially in an age when running bots hitting every public facing piece of equipment is so cheap?


> especially in an age when running bots hitting every public facing piece of equipment is so cheap?

For me, exactly that is a reason to disable versions. If I were the attacker, I would probably do a

    SELECT hostname FROM scanresults WHERE webserver == vulnerable-version
Besides that, headers like:

    Server: Apache/2.2.16 (Debian) DAV/2 SVN/1.6.12 PHP/5.3.19 mod_ssl/2.2.16 OpenSSL/0.9.8o
    X-Powered-By: PHP/5.3.19

...gives me a lot of clues for adjusting my exploit payload.
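(For what it's worth, trimming those headers down is usually just a couple of directives. These are the standard Apache and PHP settings; exact availability depends on your build:)

    # Apache: send only "Server: Apache", no module/OS details
    ServerTokens Prod
    ServerSignature Off

    ; php.ini: stop emitting the X-Powered-By header
    expose_php = Off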

In practice it may be an obscure attack vector, who knows. But more information always helps the attacker.

If there is a database somewhere (and there is), I don't want my specific versions in it. Who knows what kind of exploit will be on the HN front page tomorrow.

"Security by Obscurity" is a concept used by bad cryptographic ciphers. See Kerckhoffs's principle. I don't see how this applies to server information disclosure.


Exploits that would 'need' to know the version are scripted. They spew the attempt anyway. If it works it works, if it doesn't the script moves on.

You don't really get anything by disabling the server from reporting its version. On the other hand, it doesn't take any time to do it.

But don't think you gained anything by disabling it.


I suspect there's some small measure of protection still - if there's a brand new exploit available (say, the current WordPress W3 Total Cache and WP Super Cache problems), I'd rather not have my sites rise to the top of the list of addresses for the botnets to start probing - because I'd left the version information publicly available (or made those sites trivially findable via a googledork).

It won't protect me, but it might buy me a little extra time to respond...


I did not mean to say that disabling version information is a security measure. It is not. Sorry if I was not clear on that.

I just wanted to say it may give you some time. That's debatable. You and others disagree. I remember an exploit for ProFTPd that required different payloads for different builds (OS, Version, Architecture).

Having a database with version data, and using that database, would maximize the attacker's chance to own the machine.

Before checking the whole internet, why not first try all the machines where the exploit is more likely to work? After that, try all the other machines.

Another issue I remember was PLESK¹. A few versions ago they added a "Powered-By Plesklin" header. There were a few PLESK exploits in the wild where an attacker with a database of versions and headers for services could quickly attack them.

That is a reason for me to disable versions. It is totally unrelated to other security measures. I just don't want to give the attacker any additional clue. The more time he needs to spend probing my server the more time I have to notice him.

I'm not sure how easy it is to order a botnet to execute an exploit against the whole internet. Other comments suggest it is no problem anymore. If that is indeed the case, it probably doesn't matter whether you disable versions or not. However, I think I've outlined my reasoning.

1: I know, don't use it. This was not up to me.


I suppose even a hypothetical expensive exploit (requiring ten minutes per attempt for example) would not deter someone with a large botnet from trying it against every server. So while hiding version info might save from some attackers in some scenarios it doesn't provide real protection.


> If I were the attacker, I would probably do a SELECT hostname FROM scanresults WHERE webserver == vulnerable-version

you would suck.

1. write exploit

2. send exploit

3. a) it worked - pwned b) didn't work - safe against this exploit

btw have you heard of security patches?

> I don't see how this applies to server information disclosure.

http://en.wikipedia.org/wiki/Secure_by_design


There are some benefits to removing things like banner headers, especially where there is limited effort required to do so.

If the version headers are removed, a manual attacker has to try harder (make more requests) to identify whether the server software is a vulnerable version or not. This increases the opportunities for detective controls (e.g. an IDS) to spot the attacker and potentially allows for defensive actions (e.g. IP blocking).

Also, if a server version banner is present and a vulnerability is discovered in that version, it's much more efficient for attackers to only hit known vulnerable versions, and those can be mined from things like Shodan or the Internet Census 2012 data.

A smart attacker would probably want to hit only known vulnerable targets, to maximise the time before the attack is noticed and analysed by defensive organisations. If you hit all servers, that will include all the honeypots out there, making it more likely that your attack gets noticed and new signatures are pushed to alert on or block it.


While that's true, it only takes 5 seconds to change the config. Plus, some security certifications (e.g. PCI) check those signatures during their automated scans.

My problem is when people think hiding their signatures makes their systems more secure, rather than equally secure but with less garbage broadcast.


Sure, PCI DSS.


for audits?


Yes, or rather, most automated tools complained about the server reporting various version numbers.


I see I got rage-downvoted by somebody in this thread, nice. Gotta love HN butthurts.


The title is a bit misleading: it mostly applies to web daemon configuration rather than to the servers themselves, and while it is a nice addition to the default configuration, it is extremely narrow and not nearly enough when it comes to securing a (web or any other) server. A novice reader could come away with the impression that this is all there is to it.


I think I do a reasonable job of making that clear with the introductory paragraph, but yes, it is something that cannot be overstated. They are just three little things that do not constitute a complete security policy.


My #1 install on any server is fail2ban; then it's server-specific stuff.


If you're using SSH keys exclusively, what does fail2ban really buy you? The HTTP monitoring sounds like it might be useful, but also might be an easy way to reject the Googlebot and de-list your site.


Fail2ban can be used to rate-limit nearly all services that have the potential for abuse. I have it set up to track connection and message frequency, bans on message content (not following protocol, overly large, malformed, etc) and so forth.

Having a system like F2B is nice because it compartmentalizes abuse handling and you can set up rules in one place for all your services, both user-facing and not. Since the rules/actions are user defined, anything is possible -- I've had actions that send alerts to Twitter, a system that distributes bans to hundreds of servers, and centralized logging that gives very good insight into how users are poking around.
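To give a rough idea of the shape of it, a jail is just a filter plus thresholds and an action. A minimal sketch for SSH follows; section names, log paths and defaults vary across distros and fail2ban versions:

    # /etc/fail2ban/jail.local
    [ssh]
    enabled  = true
    filter   = sshd
    port     = ssh
    logpath  = /var/log/auth.log
    maxretry = 5
    bantime  = 3600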


fail2ban should be the first thing to install, the first thing to manage via Puppet/Chef, the first thing to have centrally logged, etc. :p


Unless you restrict SSH access to a small set of known-good IP addresses, of course.


Fail2ban monitors more than just SSH. I use it against HTTP auth, suspicious HTTP bots, and all sorts of things (I even have fail2ban watching IRC connections on one box).


>>> (I even have fail2ban watching IRC connections on one box)

Now you got me curious - watching what?


I've always wanted to do this, but then I thought "What if I suddenly lose my IP address?"


I'm always super paranoid about this too, but usually you can get back into a machine via console (if it's a virtual machine) or via KVM if dedicated.

But it still scares me too much...


Use ssh keys instead then :-)


Keys are additional credentials, so they don't add any security by themselves. You have to remove the password from the account (set an unusable password).

However, there are rare cases where you need to access the server from some remote location, when you don't have your SSH private key at hand, and the only credentials you can use are the ones you keep in your head.

Obviously, the most important requirement is a strong password, but protecting against brute-force won't hurt.


> Keys are additional credentials, so they don't add any security by themselves. You have to remove the password from the account (set an unusable password).

Keys add security if you turn off password-based logins (this is done in sshd_config; you don't need to mess about with the user's passwd entry).
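For reference, it's a couple of lines in /etc/ssh/sshd_config (restart sshd afterwards, and keep an existing session open while you test):

    PasswordAuthentication no
    ChallengeResponseAuthentication no
    # optionally, while you're in there:
    PermitRootLogin no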

> However, there are rare cases where you need to access the server from some remote location, when you don't have your SSH private key at hand, and the only credentials you can use are the ones you keep in your head.

> Obviously, the most important requirement is a strong password, but protecting against brute-force won't hurt.

Your point about not having private keys to hand is a very valid one, and why I opt for fail2ban SSH rules against password logins on my own personal servers. But the strength of keys compared to passwords does make key-based authentication a good measure against brute-force attacks (purely in terms of the time it would take to crack a key).


Regarding the "don't have the keys" issue, I solve this with an encrypted TrueCrypt volume in Dropbox. Dropbox has 2FA set up on it, so getting into my servers requires 1) my dropbox password, 2) my phone, 3) the volume passphrase, and finally 4) the key passphrase.

As long as I have my phone on me, I can get into my servers, but am reasonably confident that a Dropbox compromise or phone loss would not result in my server credentials being compromised.


Yup.

Fail2Ban, ModSecurity (for whatever web server, including Nginx), and the OWASP rules for ModSecurity.
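Roughly speaking, once mod_security is loaded, wiring in the OWASP Core Rule Set is just a matter of including its rule files; the path below is illustrative and differs per distro/install:

    # Apache config, with mod_security2 already loaded
    SecRuleEngine On
    Include /usr/share/modsecurity-crs/base_rules/*.conf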


Nothing

An automated network installer (e.g. Cobbler) installs the OS, which installs a configuration management system (e.g. Puppet, Chef, or Ansible), which sets up the server appropriately.

Done correctly, someone logging into a non-development server should be an alertable "red flag".

Even for a development server you should use Veewee, Vagrant, BoxGrinder, etc. to produce something consistent and repeatable.

"Editing a file in /etc directly 'by hand' should be an obscure art done to teach internals or to scare children on halloween." -@yesthattom


Ideal world, meet actual world.


Automating infrastructure and treating it like code is a similar shift in mindset to embracing test driven development for the first time.

It appears daunting, but once you get over the hump you can't imagine how you ever survived without it.

If you have a mythical quiet Friday afternoon, install Vagrant, try to replicate your manual setup steps for a new server, and share the result with your development team.

Even just having the steps required to set up a development environment represented in re-usable versioned code is worthwhile.

The next time a new hire starts, that afternoon repays itself when they have a fully working dev environment ready in less than an hour.

Going from that to doing this stuff in production is a lot of work, but you get similar payoffs at every step, as long as you're willing to invest a little time.


You don't have to convince me it is good. In my entire career I have never seen a company that manages even 50% of their servers this way. It has always been a situation of engineering 'being too busy cutting wood to make better saws'.


The company where I work manages >90% of its servers that way. This company is blessed with some extraordinarily bloody-minded sysadmins who made the time to sharpen the saw in the face of mounting piles of wood to cut.


Your sysadmins manage themselves where you work?


1. Install rkhunter

2. Update it: # rkhunter --update

3. Generate checksums of important files: # rkhunter --propupd

*NOTE: when normal system software updates are installed, some of the files watched by rkhunter may change and generate false warnings; --propupd needs to be run again after updates to refresh the checksums.
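On a Debian-ish box the whole routine boils down to something like this (the package name and the flags for the periodic run are assumptions worth checking against your version):

    apt-get install rkhunter         # Debian/Ubuntu package name assumed
    rkhunter --update
    rkhunter --propupd
    rkhunter --check --cronjob --report-warnings-only   # periodic scan, e.g. from cron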


What stops an attacker running rkhunter --propupd after he/she has installed backdoors in a few of your binaries? I realise what rkhunter does (searches for common backdoors), but I can't see what advantage the --propupd argument adds.


It is useful if accounts other than root have been compromised, like the web server's.


How does an article like this actually get points on HN? It's called "Things I set on new servers", but it should really be called "Things That I Configure in Apache", and the suggestions aren't even anything all that interesting or useful. Do people really even use Apache any more?

What?


When silencing Nginx's version number, what is the value in continuing to supply the "Nginx" header to indicate which product it is?


Two reasons, I think: one (as a sysadmin once explained it to me) is that there's a certain degree of public good/advertising that comes from publicly supporting an open-source project by advertising that you use it in your headers. Services that aggregate web server market share (Netcraft, etc.) use the Server header to build stats.

It's also not that hard to fingerprint webservers (though not necessarily their specific versions) without making use of the Server line by testing for other subtle differences in behavior (see, for example, http://82.157.70.109/mirrorbooks/apachesecurity/0596007248/a... ). So on balance, hiding the version makes it hard to single you out for vulnerabilities in specific versions, but hiding the server name altogether doesn't really add much.
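For what it's worth, nginx itself only exposes a switch for the version number; as far as I know, dropping the product name entirely needs a source patch or a third-party module such as headers-more:

    # nginx.conf: send "Server: nginx" instead of "Server: nginx/1.2.x"
    server_tokens off;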


Also check for this dangerous configuration bug if you are using FastCGI and PHP (I'm not sure if this also applies to other FastCGI applications):

https://nealpoole.com/blog/2011/04/setting-up-php-fastcgi-an...
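If I remember the issue correctly, it's the one where a request like /uploads/avatar.jpg/fake.php can end up executed as PHP. The mitigations usually suggested are along these lines (assuming nginx + PHP-FPM):

    ; php.ini
    cgi.fix_pathinfo = 0

    # or, inside the nginx "location ~ \.php$" block:
    try_files $uri =404;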


Does anyone have a link to an article explaining TRACE attacks? I had never heard of it (after 8 years of web development!).

Edit: Oops, should have Googled before commenting: https://www.owasp.org/index.php/Cross_Site_Tracing
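If you'd rather just switch it off than read up on it, Apache (2.0.55 and later) has a one-line directive for it; nginx, as far as I know, refuses TRACE out of the box:

    TraceEnable off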


For Rails folks, you can accomplish a lot of this with the excellent Twitter gem "secureheaders".

https://github.com/twitter/secureheaders


Can anyone explain how to do this with nginx? I'm not using Rails, but I would really like to have all of those. I'm not sure what's best, though.


Is there something similar for Python?


What about a great default content security policy? :)


In the examples you provided, I believe you meant to say `top != this` instead of `top != self` as you had it. minor edit


Gotta wonder why these aren't defaults.


PHP also uses a cookie named PHPSESSID to store session IDs. Use session.name to specify a different one.
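For reference, it can be set either in php.ini or at runtime; the name here is just an example:

    ; php.ini
    session.name = APPSESSID

    // or in code, before session_start():
    session_name('APPSESSID');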


That has no impact on security. If an attacker can read your cookies, then it doesn't matter if your sessions are called PHPSESSID or WETTROUT; they're still readable.


It only matters if you're trying to disguise the fact that you're using PHP, as the article suggests.


Which again, doesn't add any additional security.



