Hacker News

This can be done either way, and is really infrastructure-dependent. There are pros and cons on both sides. We opted to bake it into the web stack instead of in front of it so that we get all the benefits of existing infrastructure: log aggregation, exception tracing, HTTP error response formatting and tracking, user-specific gating, auto service scaling, etc.



Without entirely dismissing the opinion, you can take any advice that such logic must live in a reverse proxy in front of the app with a grain of salt. It's one thing to place certain anti-DoS/DDoS logic in front of the app, but basic rate limiting that depends on inspecting each request (including cache or database lookups for per-user limit increases) is very commonly found at the application middleware layer: after user authentication and route parsing, but before the controller.

Rate limiting is not meant to be a hard defence against denial-of-service attacks - that is an entirely separate engineering problem, and even a reverse proxy is not enough protection when you're facing a network-level flood. The primary purpose of rate limiting isn't to prevent requests from hitting the application at all - it's to prevent them from reaching the heavy logic that starts when the controller is invoked. As long as your application's bootstrapping layer is lightweight, there is no problem with leaving rate limiting in the application.
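To make the placement concrete, here is a minimal sketch (all names hypothetical, not from any comment above) of a per-user token bucket enforced at the middleware layer: the check runs after the user is identified but before the controller, so over-limit requests are rejected cheaply without touching the heavy logic. A real deployment would keep bucket state in a shared cache rather than in-process memory.

```python
import time

class TokenBucket:
    """Per-user bucket: refills continuously, spends one token per request."""
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

class RateLimitMiddleware:
    """Sits after authentication/routing, before the controller."""
    def __init__(self, controller, default_limit=5, refill_per_sec=1.0):
        self.controller = controller
        self.default_limit = default_limit
        self.refill_per_sec = refill_per_sec
        # In production this would be a cache/DB lookup (e.g. for
        # per-user limit increases), not an in-process dict.
        self.buckets = {}

    def handle(self, user_id, request):
        bucket = self.buckets.setdefault(
            user_id, TokenBucket(self.default_limit, self.refill_per_sec))
        if not bucket.allow():
            return (429, "rate limit exceeded")  # cheap rejection path
        return self.controller(request)          # heavy logic starts here
```

Because the only work done on a rejected request is a dict/cache lookup and a couple of arithmetic operations, the bootstrapping cost stays negligible, which is the point made above.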




