Nice. I was really hoping it protected me from a very different kind of "abusive client" though. I guess there are some things that even in Ruby you can't do easily.
iptables can limit the number of connections per IP in a "cheap" (fast/early) way. In fact it's my #1 use of iptables, since blocking ports where there are no services doesn't do much.
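A minimal sketch of that kind of rule, using the connlimit match (the port and threshold here are just illustrative):

    # Drop new connections to port 80 from any single IP that
    # already has more than 20 connections open.
    iptables -A INPUT -p tcp --syn --dport 80 \
      -m connlimit --connlimit-above 20 -j DROP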
A much more common use of iptables for me is to limit outside traffic to ports 21, 80, and 443, while letting whitelisted internal hosts use every other port for services used internally. We can then run those services without authentication, and have much less exposure in case of an attack (the only things whose security we must trust are iptables, ssh, and our HTTP web server).
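Roughly like this (the internal subnet is just an example):

    # Keep established traffic and the public services open.
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp -m multiport --dports 21,80,443 -j ACCEPT
    # Whitelisted internal hosts may reach every other port.
    iptables -A INPUT -s 10.0.0.0/8 -j ACCEPT
    # Everyone else is dropped.
    iptables -A INPUT -j DROP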
I also like nginx's limit_zone module. You can apply the limit in just the proxy stanza, so you throttle dynamic requests without throttling access to fast static files.
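Roughly, with the old limit_zone syntax (newer nginx renames this pair limit_conn_zone / limit_conn; the numbers are illustrative):

    http {
      # Shared-memory zone keyed by client IP.
      limit_zone dynamic $binary_remote_addr 10m;

      server {
        location / {
          # Static files are served directly, unthrottled.
          root /var/www/site;
        }
        location /app/ {
          # Only proxied (dynamic) requests are limited.
          limit_conn dynamic 10;
          proxy_pass http://127.0.0.1:3000;
        }
      }
    }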
We often use Rack::Attack to throttle particular HTTP paths differently. Say, the homepage isn't throttled, but the login action is. That layer 7 knowledge is Rack::Attack's main advantage.
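For example, something along these lines with Rack::Attack's throttle API (the names and limits are illustrative):

    # config/initializers/rack_attack.rb
    # 5 login attempts per minute per IP; the homepage and
    # everything else stay unthrottled.
    Rack::Attack.throttle("logins/ip", limit: 5, period: 60) do |req|
      req.ip if req.path == "/login" && req.post?
    end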
As I say in the README, Rack::Attack is complementary to iptables and the limit_zone module.
| blocking ports where there are no services doesn't
| do much
True, but it can be a useful "just in case" against things listening on ports that you were unaware of. It's obviously bad not to know about services that are listening on your box, but you could view it as a safety net.
Maybe I'm missing something, but this seems like something that would only be useful when you don't have access to anything "closer" to the network requests (router, firewall, web server) that you could tweak to handle these things.
It allows whitelisting... based on arbitrary properties of the request.
So if your user authentication code was also a Rack middleware, and you inserted Rack::Attack after it in the middleware stack, you could rate limit based on user account as well as IP address. That would be harder to do at the firewall or web server level.
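A sketch of what that could look like, assuming the auth middleware stashes the user id in the Rack env under a key like "myapp.user_id" (hypothetical):

    # Throttle by authenticated user where we know who they are,
    # falling back to IP for anonymous requests.
    Rack::Attack.throttle("requests/user", limit: 300, period: 300) do |req|
      req.env["myapp.user_id"] || req.ip
    end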
This isn't for preventing DoS attacks (for which you'd want to avoid hitting application code entirely); it's just for preventing unauthorised or excessive usage.
Think of it as defense in depth. Rack::Attack gives you higher-level, more sophisticated rules, while your lower layers provide simpler but lower-overhead filtering. Hopefully abusive requests never reach this layer thanks to your router / firewall / web server rules, but if they do, this will help keep things in check.
Most of the time your firewall and router aren't doing layer 7 (application-level) inspection and actioning. If Rack::Attack can handle it efficiently, it's the easy way to go.
I used fail2ban to block abusive IPs (based on string matching of specific errors in our logs). This seems like an interesting alternative, though, for keeping things under one roof.
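For reference, the fail2ban side was along these lines (the log pattern and paths are made up):

    # /etc/fail2ban/filter.d/myapp.conf
    [Definition]
    failregex = ^.* Suspicious request from <HOST> .*$

    # /etc/fail2ban/jail.local
    [myapp]
    enabled  = true
    filter   = myapp
    logpath  = /var/log/myapp/production.log
    maxretry = 5
    bantime  = 3600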
If you're behind a reverse proxy (or load balancer, etc.), you should normally have [firewall] rules to ensure that only those proxy hosts can even connect to your httpd.
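e.g. something like this, with made-up proxy addresses:

    # Only the load balancers may reach the app server's port 80.
    iptables -A INPUT -p tcp --dport 80 -s 192.0.2.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -s 192.0.2.11 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j DROP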
You can also configure your edge proxies to ignore X-Forwarded-For, or at least move it to another untrusted header if you want to preserve its contents.
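In nginx terms that might look like this (the untrusted header name is hypothetical):

    # At the edge: stash the client-supplied value in an explicitly
    # untrusted header, then overwrite X-Forwarded-For with the
    # address we actually saw.
    proxy_set_header X-Untrusted-Forwarded-For $http_x_forwarded_for;
    proxy_set_header X-Forwarded-For $remote_addr;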
There's an nginx module (it has to be compiled in) that lets you whitelist hosts which can send X-Forwarded-For, and turns that into the actual remote address provided to your upstreams.
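That sounds like the realip module; the usual config is roughly (the trusted subnet is illustrative):

    # ngx_http_realip_module: trust X-Forwarded-For only from these
    # proxies, and rewrite the remote address seen by upstreams.
    set_real_ip_from  10.0.0.0/8;
    real_ip_header    X-Forwarded-For;
    real_ip_recursive on;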