
I dispute the notion that Slowloris's behavior should be considered an "attack". If your server is hammered by tens of thousands of visitors who are on slow links - say modems or 2G wireless links - then you will have the same problem. The real problem if you ask me is the limited amount of I/O concurrency that multi-threaded and multi-process servers can have. Pretty much the only practical solution for this right now is by using evented I/O.
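
To make the evented approach concrete, here is a minimal sketch using Python's asyncio (the port and handler are illustrative, not from any real deployment). A Slowloris-style client that dribbles its headers just leaves a coroutine suspended; no OS thread is tied up waiting:

    import asyncio

    async def handle(reader, writer):
        # A slow client parks this coroutine at the await; the event loop
        # moves on and serves everyone else in the meantime.
        while True:
            line = await reader.readline()
            if line in (b"\r\n", b""):  # blank line (or EOF) ends the headers
                break
        writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        await writer.drain()
        writer.close()

    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())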

The problem can also be solved by putting the web server behind an evented, buffering reverse proxy that fully buffers both the request and the response. This shields Apache from slow clients.

Maybe some time in the future operating systems will be able to handle millions of operating system threads easily, but we're not there yet.




The default thread count for Apache is 256, is it not? I know we can't handle millions (and I can't speak for how Apache itself behaves with thousands of threads), but servers running thousands of threads are not only possible, they exist and work well. Whichever way you do it (threads or events), at some point you will hit a limit.

I see Slowloris as an "attack amplifier" - it's a way to do a lot more damage with N connections than a straightforward DDoS. In some situations, it's possible to deny access to a server with just one.


> The real problem if you ask me is the limited amount of I/O concurrency that multi-threaded and multi-process servers can have. Pretty much the only practical solution for this right now is by using evented I/O.

Why do you think that? What constraints do you believe limit the number of threads you can use?

- An idle thread is essentially free.

- The memory used by an ongoing request should not be inherently different between an evented server and a threaded server.

- The number of sockets an evented server can use is the same as the number of sockets available to a threaded server.

If evented servers are handling this better, all it shows in my opinion is that the threaded servers have been written incorrectly or else configured incorrectly. But that is not an inherent benefit of evented servers. Please correct me if there is a constraint I am missing.
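
For what it's worth, here is a quick Python sketch to measure what a pile of idle threads actually costs on your own machine (the thread count and stack size are arbitrary picks, not tuned numbers):

    import threading

    threading.stack_size(256 * 1024)  # shrink the 8 MB default before spawning
    park = threading.Event()

    threads = [threading.Thread(target=park.wait) for _ in range(10_000)]
    for t in threads:
        t.start()
    input("10,000 threads idle; check this process's RSS, then press Enter...")
    park.set()
    for t in threads:
        t.join()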


> Why do you think that? What constraints do you believe limit the number of threads you can use?

Virtual memory address space and context-switching overhead. On 32-bit platforms, if each thread has an 8 MB stack then after creating a few hundred threads you run out of address space even if you don't actually use that much memory: with roughly 3 GB of usable userspace address space, 3 GB / 8 MB gives you at most ~384 threads. Most OS schedulers also don't like dealing with tens of thousands of threads. Furthermore, each kernel thread takes a small amount of kernel memory, and kernel memory typically isn't swappable, unlike userspace memory.


By context-switching overhead, I assume you mean the cost of faulting data back into the cache for a thread that had been sleeping. Evented systems have this overhead too when switching between events that belong to different requests.

Stack size I've discussed in another response. It can and should be decreased if you plan on working with a lot of threads.

I can't comment on the Mac OS X scheduler. The Linux scheduler handles it just fine, and my understanding is that the Windows one does OK with it too.

The small amount of kernel memory that isn't swappable likely isn't your system's overall throughput constraint, but if it is, I stand corrected.


One constraint I believe you may have missed is that an idle thread does actually consume a significant amount of memory. While most resources a thread consumes are fairly small, the stack is quite large and is reserved up front. The usual default ulimit stack size on Linux is 8192 KiB, so every thread created reserves 8 MB of address space whether or not it uses it. Under Linux's default heuristic overcommit those reservations succeed, and once real memory runs out the OOM killer starts picking victims; with strict accounting (vm.overcommit_memory=2) you're capped at swap plus vm.overcommit_ratio (default 50%) of RAM. You can adjust the ulimit stack size and the overcommit settings, of course, but either way you're playing with fire.
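
You can check the reservation on your own box; a small sketch using Python's resource module (Linux-centric, since that's where the 8192 KiB default applies):

    import resource

    def fmt(v):
        return "unlimited" if v == resource.RLIM_INFINITY else f"{v // 1024} KiB"

    soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
    print("stack rlimit:", fmt(soft), "soft /", fmt(hard), "hard")
    # A stock Linux install typically reports 8192 KiB soft, which is where
    # the 8 MB per-thread stack reservation comes from.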


Two problems with your assertion:

- 8 MB of virtual memory is allocated, not resident memory. In practice, only a page or two is faulted in to hold the base of the stack, plus whatever memory the request's data actually requires (see the sketch after this list).

- You can decrease the default stack size, and would if you were trying to work with a lot of threads.
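
Here is a Linux-only sketch of that first point (reading VmRSS from /proc is the non-portable part): reserving a big region barely moves resident memory until pages are actually touched.

    import mmap, os

    def rss_kib():
        # Resident-set size, read from /proc (Linux-specific).
        with open(f"/proc/{os.getpid()}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])

    buf = mmap.mmap(-1, 1 << 30)        # reserve 1 GiB of anonymous memory
    print("after mmap:", rss_kib(), "KiB resident")
    buf[0:1] = b"\x01"                  # fault in a single page
    print("after touching one page:", rss_kib(), "KiB resident")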


(Disclaimer: my assertions apply only to Linux; I'm not familiar with the VM subsystem in other systems.) I think you may be misunderstanding how it works. When you allocate memory by calling mmap(), sbrk(), etc., that amount is counted as committed whether or not you actually touch it. Because many processes allocate more than they actually use - excess stack space, for example - Linux allows overcommit: under strict accounting (vm.overcommit_memory=2) the commit limit is swap plus a configurable percentage of RAM, defaulting to 50%. If memory is genuinely exhausted, Linux's Out Of Memory killer starts killing processes.
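
If you want to see which mode your box is in, the knobs live under /proc (paths are Linux-specific); a minimal peek in Python:

    # vm.overcommit_memory: 0 = heuristic (the default), 1 = always, 2 = strict
    # vm.overcommit_ratio: percentage of RAM counted toward the strict limit
    for knob in ("overcommit_memory", "overcommit_ratio"):
        with open(f"/proc/sys/vm/{knob}") as f:
            print(knob, "=", f.read().strip())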


Say many Slowloris clients attack at once. There comes a point where threads are much less free than events. Newer concurrency systems will handle thousands or millions of lightweight processes without firing up large numbers of native threads, and there's a reason for that design decision.
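
A sketch of the lightweight end of that spectrum, using Python's asyncio tasks as a stand-in for green processes (the count is arbitrary; scale it to taste). Each parked task is a small heap object, not an 8 MB stack reservation:

    import asyncio

    async def parked():
        await asyncio.sleep(3600)  # idle, like a stalled connection

    async def main():
        tasks = [asyncio.create_task(parked()) for _ in range(100_000)]
        await asyncio.sleep(1)     # let them all start and suspend
        print(len(tasks), "tasks parked on a single OS thread")
        for t in tasks:
            t.cancel()

    asyncio.run(main())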


> There comes a point where threads are much less free than events.

Why?

> Newer concurrency systems will handle thousands or millions of lightweight processes without firing up large numbers of native threads, and there's a reason for that design decision.

I would say that is based on the historically accurate but currently false notion that OS threads are expensive to start up or keep idle.
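
One quick way to put a number on the startup cost yourself (a crude, single-machine Python sketch; the iteration count is arbitrary):

    import threading, time

    t0 = time.perf_counter()
    for _ in range(1000):
        th = threading.Thread(target=lambda: None)
        th.start()
        th.join()
    elapsed = time.perf_counter() - t0
    print(f"{elapsed / 1000 * 1e6:.0f} microseconds per thread start+join")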


It's still true -- I recommend looking at some comparisons. Worker-thread-based HTTP servers consistently perform worse at handling a large number of connections, and the error rate for servers that (simply) use threads is also higher at large numbers of connections.

For example, see this benchmark of Python web servers: http://nichol.as/benchmark-of-python-web-servers



