Ask HN: How to deal with SYN FLOOD and extortion
40 points by xal on March 24, 2009 | 50 comments
We made a big mistake in our server setup. We bought beefy app and db servers but went for ultra-reliable Sun UltraSPARCs and OpenBSD for the firewalls. This was fine until the DDoS started. After 3 days of many different attacks we now have much better firewall hardware installed, thanks to our fantastic colocation provider, which went the extra mile and helped us when we needed it most, but we are now surviving on total brute force.

Our firewalls are faster than the SYN floods that hit us. This is an arms race that we cannot win in the long run. Yes, we can buy more hardware, but it's much easier to infect more machines with bots over time.

How do people protect themselves against extortion and malicious DDoS attacks? What software / hardware protects the bigger sites on the net?




Any large ISP should have a process and gear in-house to mitigate DDoS attacks; you should escalate, first to your hosting provider and then to their upstream provider.

Every ISP does something a little bit different for large-scale attacks. Some of them off-ramp traffic to scrubbers, some of them have inline devices that can characterize and block SYN floods. My previous employer, Arbor Networks, now has gear deployed at something like 90% of the worldwide tier 1 and tier 2 ISPs that will detect any major flooding event and generate a report which might be helpful to you.

What I'd watch out for is the dozen odd fly-by-night operations that are promising you that, for a monthly fee or a one-time purchase of some $100k box, you can block these attacks on your own network.


I'd extend that warning... watch out for any company, even if it's large and reputable and the salesmen have nice suits, that promises to solve your DoS problem by selling you an expensive firewall. If your traffic is below 100Mbps, there's nothing the fancy box can do that your OpenBSD box can't. (this changes as your traffic climbs significantly above 100Mbps, but most of us little guys are below that line.)


I'd go a bit further and say that even over 100Mbps you're not going to get much more protection than an OpenBSD box or a Sun system running a decent firewall gives you.

The key thing is to try to block it upstream. Your firewall (and indeed anyone's firewall) is only really designed to restrict IP-layer access to specific source and destination hosts, ports and datagram types. Unless it was built by Cisco, in which case it was probably designed to do a whole load of other stuff, none of which will help you right now.

There are a limited number of things you can do to the IP stacks of most OSes these days to help discard unsolicited packets and reduce resource consumption on the local boxes (if that's high), but an upstream solution is likely to work better in most cases.
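
For what it's worth, a concrete example of that kind of local tuning on the OpenBSD/pf side is tightening the state limits and TCP timeouts so half-open connections get expired quickly. A rough sketch only; the numbers are purely illustrative, not recommendations:

  set limit states 500000                        # raise the state-table ceiling
  set optimization aggressive                    # expire idle/half-open states sooner
  set timeout { tcp.first 20, tcp.opening 10 }   # give embryonic connections less time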


The big boys seem to be a little reluctant to publish information on their DDoS strategies. (If anyone has information that says otherwise, it's much appreciated).

I spent some time googling around and in case anyone's interested, these links seem to cover most of the mitigation patterns.

A slideshow covering common mitigation techniques, http://www.slideshare.net/intruguard/10-ddos-mitigation-tech...

A Cisco whitepaper (which, as usual, has an emphasis on Cisco kit and a long URL), http://www.cisco.com/en/US/prod/collateral/vpndevc/ps5879/ps...

A fairly comprehensive PDF document - A Survey of Active and Passive Defence Mechanisms against DDoS Attacks

http://www.fbi.cqu.edu.au/FCWViewer/getFile.do?id=17921


I don't know how secret this stuff is; I think if you poke around NANOG, you'll find a lot of material. If there's something that makes DDoS mitigation mysterious, it's how ad hoc all this stuff is.

What I've had firsthand experience with is:

* Tier-1's profile all their traffic, either directly or with flow export, and will get alerts if they see spikes to certain netblocks, or spikes to specific /32s with specific characteristics.

* Inside a Tier-1, the third-tier support people usually have additional monitoring they can enable for customers if an incident has been escalated to them. They get hundreds of these calls a week.

* A lot of Tier-1's can quickly reroute traffic to a specific /32 to run through special-purpose filtering setups, which might be a DDoS box like a Cisco/Riverhead, or an IPS like TippingPoint, or even just a Cat with lots of TCAM space set aside for filtering.

* A lot of Tier-1's --- maybe all at this point --- have some mechanism set up to share signatures of attacks so they can push filtering further upstream. When I watched that stuff happening, the sense I got of it was that this was really reserved for things like global botnet C&C.

What I'd add to the discussion on products is, most of what's built to combat DDoS is really only useful if you sell transit. If you're a Fortune 500, I know there are ISPs where you can get Cisco or Arbor gear deployed on the head end specially for you. But that's a Fortune 500, not 37 Signals.


The big boys seem to be a little reluctant to publish information on their DDoS strategies.

Makes sense; the bad guys are just as likely (if not more so) to read up on them.


I've had a few DDoS attacks in the past, first when hosting with Slicehost, and then when at Linode. Running IRC-related stuff, these attacks are pretty much expected from time to time.

Both dealt with them fantastically, keeping me updated on progress etc.

As far as I remember, there was only one instance when they (Slicehost) decided to take my server 'offline' and just wait for it to pass / for the upstream providers to deal with it, etc.

The other times they were quickly able to escalate to upstream providers with minimal impact. I've had a few at Linode, and whilst it does use up some bandwidth allowance, it hasn't been a big deal at all.

Obviously if you're able to, get more IPs, use a few data centers, etc. distribute yourself so that an attack on any one part doesn't have a big impact.


Move yourself into the cloud and let someone else deal with it. Seriously.

I'm a big fan of outsourcing anything you aren't 'great' at. In this case, it's running firewalls. Unless of course your app is something related to server maintenance or firewalls or security, etc.

Most software companies really shouldn't try to deal with this stuff -- I had to learn the hard way. You should only spend your time on your applications, not anything else that doesn't directly benefit you. You can get lost in the maintenance.


Just curious, but how would "the cloud" help with such a problem? Sure, supposedly you could summon infinite computing power to handle the DDoS requests. But the computing power would not come for free, so the DDoS would still do serious financial damage.


You are paying either way. That's just the way it works.

However, under one option, you pay and Amazon deals with the actual load, downtime, server stress, hardware improvements, etc.

Under the other, you get to deal with all of that AND get to be pissed off AND accomplish nothing for your business at the same time.


I'm curious, does anyone have experience dealing with DDoS attacks on EC2? It does seem like a good option...


If you're using an instance as a load balancer it's limited to 1 Gbps.


There are several companies that create DDoS mitigation hardware/software. I have good knowledge that Amazon (et al) use several of these companies for solutions.


If they are never connecting, then why not set up a SYN "honeypot"? As soon as a valid user connects, redirect them to a different site or IP address that you do not advertise, that actually hosts your app and db servers.

Thus you could rent or colo a small system on a different network, maybe even in a different part of the country and then redirect valid users to an entirely different location with a cookie in the URL. No cookie no service.


The attacker would simply move the syn-flood to where the candy is.


If they have already attempted to extort money, you should keep the mitigation device in place. There are a few devices that do packet inspection and can handle multi-gigabit attacks. But, like many have mentioned, contact the upstream provider.


Your provider (or your network guy, possibly you) needs to blackhole the source of these attacks (preferably notifying the ISPs in question). Most of the better co-location providers will give you a way to blackhole traffic from certain IPs. Really, you should be able to do that from your OpenBSD firewalls without help from anyone else, assuming the problem is too many SYN packets rather than too much traffic. Dropping all packets from a particular source IP is not expensive, even when you multiply that by a large number of source IPs.

<please ignore this comment, I'm completely wrong. see the next comment.>


Blacklisting the source IPs is NOT going to help. The attacker can just spoof any IP; he doesn't care about actually connecting, he just wants to tie up your server.

http://en.wikipedia.org/wiki/SYN_flood


Depends on how smart your attacker is. The source IPs must not respond or the attack won't work, so potentially you could block enough bogons. Also, the first SYN flood attacks came from a fixed source IP.

You have OpenBSD firewalls... have you checked out OpenBSD SynProxy? I haven't used it, but it was designed for just your case.

http://www.openbsd.org/faq/pf/filter.html#synproxy
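
A minimal pf.conf sketch of the idea (the macros are placeholders, adapt them to your own ruleset): with synproxy state, pf completes the TCP handshake itself and only opens a connection to the server once the client has proven it's real.

  ext_if  = "em0"           # external interface (assumption)
  web_srv = "192.0.2.10"    # your web server (placeholder address)
  pass in on $ext_if proto tcp from any to $web_srv port 80 \
      flags S/SA synproxy state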


Makes you wonder why ISPs are not discarding packets with spoofed addresses, instead of letting them escape into the greater internet. Is it too expensive to do? My knowledge is a bit rusty, but isn't this a common router function?


In a likely worst case, it doesn't matter, because the traffic is coming from a botnet and could be entirely legitimate from an IPv4 perspective.


Can you please expand? Not sure I understand. Also read my post a bit lower down where I attempted to clarify my question. Would appreciate your perspective.


I.e., if you've got a botnet, you can still SYN flood a host, and simply not respond with an ACK.

In such an event all of the bots on the botnet have valid IPs and are representing themselves correctly, it's just that they're not behaving as they should with regard to the TCP handshake cycle.

So I gather that your suggestion would cut down on IP-spoofing SYN floods (but maybe not, I don't really know the full story :) ), but that wouldn't stop botnets.


Ah, I see. But if we ever manage to get everyone behaving properly at the IPv4 level (i.e. no spoofing), it opens up a whole new realm of possibilities for managing attacks, especially DDoS floods: at the very least, ISPs would be able to notify each other of attacks originating within their networks, possibly even with some automation over a secure protocol, to automatically block packets at the source. But I could be missing something, seeing as it's 3 in the morning... and this needs to be modeled to see if it's actually viable.


ISPs can already do that. I think you're underestimating how frequent these attacks are; hundreds per week. Nobody can staff against that.


If an attacker cannot change its IP, it is easy for the person under attack to automate identifying the attacker. In the case of a SYN flood (that is, if the attacker is sending an amount of traffic that overloads your box or application rather than your network) it's simple enough to automatically block attacking IPs. The reason why this doesn't work now is that the attackers are usually spoofing the source.
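
For the non-spoofed case, pf can automate that blocking with an overload table; a rough sketch (the $ext_if/$web_srv macros are assumed to be defined elsewhere in the ruleset, and the thresholds are made up for illustration):

  table <abusive_hosts> persist
  block in quick from <abusive_hosts>
  pass in on $ext_if proto tcp to $web_srv port 80 flags S/SA keep state \
      (max-src-conn 100, max-src-conn-rate 15/5, overload <abusive_hosts> flush)

Any source that opens too many connections, or opens them too fast, gets dumped into the table and blocked from then on. Against spoofed SYNs this does nothing, for exactly the reason above.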


It's easy to track down an attacker if they do not spoof their IP, at least to track it to the network it came from. It's something the victim can do.

Tracking down the network a spoofed packet came from can be quite a bit more difficult; if you have access to all your routers, you can track it to the peer or upstream the traffic is coming from, but it might be a peer of a peer of a peer (or a customer of a customer of an upstream of an upstream), requiring quite a lot of cooperation to track down.


Don't forget the first "D" in "DDoS." The Storm botnet, at its peak, had estimates between 1 and 10 million computers. Many, many more botnets have between 100,000 and 1,000,000. It would be far easier to get ISPs to cooperate with a backscatter trace on one or a few computers spoofing random IPs than to track and shut down hundreds of thousands of computers.


That's why ISPs automate this process, with homebrew tools and by instrumenting their backbones (with things like 1:1000 netflow).


There's no way to know the address is spoofed without replying to it. It's the same idea as "spoofing" the return address on a letter.


Sorry, that's not what I meant, and I may have forgotten some terminology, so let me attempt to make myself clearer: The first edge router that a packet encounters, coming from the source, knows what the valid addresses are. The router simply needs to ask itself: "If I were to receive a return packet destined for the claimed return address, would I know what to do with it?"

If invalid ones were dropped, then you can at most spoof another address in the same network. But either way, it can be traced back to the correct edge router.

I'm looking for someone experienced in these matters to explain why this theory isn't / can't be put into practice.


It's called BCP38, and decent ISPs and competent network admins do it. I do it on the port level for all my IPv4 addresses (that is, all packets leaving a Xen VPS of mine are dropped unless they come from the address I assigned it), and I will do it for IPv6 when I build out the routing infrastructure for my new location.

It's really pretty easy to do... especially just edge level like you are talking about. The problem is that it costs me some time/money (even if it's only a little) to do it, and it mostly benefits other people. So some ISPs still don't do it.

http://www.faqs.org/rfcs/bcp/bcp38.html

But it's really easy. On my router I look at all outgoing packets, and I ask "Does this packet have a source address that is reasonable? (one of my IPs, or an IP of someone for whom I am carrying traffic)". If my router sees an outgoing packet with a source address not associated with me, it's obviously spoofed, so it drops the packet. If everyone did this, spoofing would only be possible within your own network.

Every time an ISP does this, the world becomes a little better for all of us. But spoofing will still be a problem until all ISPs do this, and that probably won't happen for a while.

(You also do this to incoming packets: if a packet coming in from the internet has a source of one of my IP addresses, something has obviously gone horribly wrong, so drop the packet. This only protects you from the most obvious spoofs, but it is very important if you do IP-based security.)
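
In pf terms, a rough sketch of both directions might look like this (the interface and address space are placeholders for whatever you actually announce):

  ext_if = "em0"                      # upstream-facing interface (assumption)
  table <my_nets> { 192.0.2.0/24 }    # replace with your own address space
  # egress (BCP38): only packets sourced from our own space may leave
  block out on $ext_if
  pass out on $ext_if from <my_nets>
  # ingress: a packet arriving from outside claiming one of our addresses is bogus
  block in quick on $ext_if from <my_nets>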


Thanks, exactly what I was looking for. Now, if only we could start a movement to only use BCP38 compliant ISPs. And then convince our ISPs to reject all packets from other ISPs that have been shown to be non-compliant, in case any survived.

EDIT: The way I put it may be a bit extreme, but the idea is there.


I'm not sure it's too extreme. Take it to NANOG. We'd need the support of big players, much more than one-and-a-half rack operations like me. (Though BCP38 compliance is much more common amongst the big players. Heck, the provider I'm moving away from doesn't do it, something I didn't know before I signed a contract. For that matter, I only do it on outgoing IPv4. I don't do it on incoming packets or IPv6. This will be corrected in the network upgrade that is in progress now, but still.)

The big problem with your proposal is verification. It seems... difficult to verify that another network properly implemented BCP38 without actually putting a probe on that network.


this guy is suggesting that the mailman check the return address when he picks up the mail. It's actually a good idea (and fairly common amongst competent ISPs.)

The Wikipedia article explains it well, I think:

http://en.wikipedia.org/wiki/Ingress_filtering


I am by no means an expert, but shouldn't it be possible for the source's ISP to tell whether the source address is spoofed?

edit: Looks like the OP managed a better explanation than I could since I loaded the page. Please disregard.


As far as I'm aware (correct me if I'm wrong, anyone in the know), the process for any spoofed-source-address flood would be:

  * Determine which upstream provider it's coming from
  * Contact that provider, and ask them to investigate
  * They do the same thing
  * If a provider finds out the source is local, then they block it / contact the person(s) responsible etc
Of course it gets harder if it's a botnet or some other system that is distributed over a large number of sources.


I think that, if we're talking about the level of traffic that I think we're talking about here, it would almost HAVE to be a botnet.

Either that, or somebody at one of the bandwidth providers got really *really* pissed off at you... if the latter is the case, the guy (or girl... they can be vindictive as hell) has probably already been fired since, you know, they sort of monitor these sorts of things.


Yeah, it's a botnet and the traffic is spoofed. We never get the same IP twice. Unfortunately it's so much traffic that the firewalls just collapse. Even if you blackhole all traffic, they still jump into the blender as soon as they acquire the main IP.


If it's all spoofed, you (meaning AT&T or Verizon) can stop it with a SYN proxy.


That's not strictly true. On the very high end, in products sold pretty much only to ISPs, you can get source address filtering for hundreds of thousands of sources for established connections, and you can get SYN proxying to have the head-end complete the three-way handshake before your downstream connection ever sees those SYNs.


From my experience at ISPs, they tend to be very cost-conscious and far more likely to use a NIX box to do this sort of thing. ISPs are generally competing in a comparatively low-margin market. Usually it's the large corporations with less NIX knowledge (and someone else's money to spend) where I see the really high-end firewall/proxy/load balancer gear.


Your experience conflicts sharply with mine; maybe you're thinking about a different tier of ISP.


OpenBSD has synproxy

http://www.openbsd.org/faq/pf/filter.html#synproxy

and uRPF

http://www.openbsd.org/faq/pf/filter.html#urpf

which sounds a lot like the features you describe.
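
The uRPF check in particular is a one-liner in pf, something like (the label is optional):

  block in quick from urpf-failed label uRPF

It drops packets whose source address doesn't route back out the interface they arrived on, which is the same reverse-path idea discussed above.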


Yes, now do that at several million packets per second.


ah. sorry. I was under the impression that this was a sub 100Mbps attack.


This works fine when there are tens or even a few hundred source IPs. However, malicious DDoS attacks are often orchestrated using zombie machines (maybe your parent's unpatched Windows box, for example). This way the perpetrators remain out of reach and you end up swamped with thousands upon thousands of source IPs, none (or few) of which share a net block.

This is really not a problem you can solve down at the firewall-before-webserver level.


The response to my post said I was wrong (and is right in saying so) because the attacker is spoofing the source address. This makes it a harder problem to solve than zombied PCs (the case of zombied PCs sending unwanted traffic can be blocked by a firewall or other means; you just need an automated method of identifying the boxes you want to block).


Firewalls aren't DDoS mitigation devices, they're stateful policy-enforcement devices. DDoS attacks are attacks against capacity and/or state; firewalls must be protected from DDoS just like hosts (even more so, in fact).

Implement iACLs, uRPF, and S/RTBH at your edges, and work with your SP on a response plan.

And take your server out from behind the firewall. Stateful inspection makes no sense at all on a front-end server, where every connection is by definition unsolicited. Harden the OS, harden the apps/services, run a chrooted jail, use tcpwrappers and mod_security and mod_evasive, and use stateless ACLs in an ASIC-based router to enforce access policies.

By placing the server behind the firewall, you increase its vulnerability due to the potential for exhaustion of the connection table by an attacker. You can use firewalls between the tiers of a multi-tier setup, where you can control the number and types of inbound connections on a bidirectional basis, but no one who operates high-volume publicly accessible servers puts the front-end behind a firewall, because it does nothing to increase the security posture and can actually be harmful.


<meta> now, this is an interesting part of allowing edits. I could have changed my post entirely to make it look like I was right all along and the guy below was just ranting. It seems like the 'edit' button needs to have a wikipedia like 'view changes' mode so people can see how your position changed, or maybe the 'edit' mode should be disabled after someone responds to your comment, to preserve the flow of conversation. </meta>



