Hacker News
Scaling your API with rate limiters (stripe.com)
351 points by edwinwee on March 30, 2017 | 82 comments



Hi! I'm the author of the post and am happy to answer any questions you have.

There is also a corresponding set of code examples at the bottom that might be of interest to you: https://gist.github.com/ptarjan/e38f45f2dfe601419ca3af937fff...


The code examples are written in Ruby and Lua - I take it that in your live environment you use the Lua script compiled into nginx?

Does this mean every rate limited request is hitting redis?

I had this problem years ago, and I used varnish to offload the rate limited traffic, which scaled very well then. Here is a blog post I wrote about it: http://blog.dansingerman.com/post/4604532761/how-to-block-ra...

(it was written directly in response to a 'One of your users has a misbehaving script which is accidentally sending you a lot of requests' issue)


We actually use Lua scripts on Redis to guarantee the atomicity that the token bucket algorithm needs. The Ruby middleware then handles the HTTP 429 response logic. The code samples I provided are pretty close to what we use in production.

So yes, our Redis traffic is strictly higher than our whole API requests per second. This is actually pretty easy since Redis scales pretty well horizontally (the keys shard well) in addition to being able to handle many orders of magnitude more traffic than a normal web stack per machine.
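
A rough sketch of how that middleware half can look (illustrative only, not the production code; the script path, bucket parameters and key scheme here are made up):

    # Redis runs the Lua token bucket atomically; the Ruby side only turns a
    # rejection into an HTTP 429. Everything named here is for illustration.
    require "redis"

    class RateLimitMiddleware
      TOKEN_BUCKET_LUA = File.read("token_bucket.lua") # hypothetical script file

      def initialize(app, redis: Redis.new, rate: 100, capacity: 200)
        @app, @redis, @rate, @capacity = app, redis, rate, capacity
      end

      def call(env)
        key = "rate_limit:#{env["HTTP_AUTHORIZATION"]}" # one bucket per API key
        allowed = @redis.eval(TOKEN_BUCKET_LUA,
                              keys: [key],
                              argv: [@rate, @capacity, Time.now.to_i, 1])
        if allowed == 1
          @app.call(env)
        else
          [429, { "Content-Type" => "application/json" },
           ['{"error": "rate_limit_exceeded"}']]
        end
      end
    end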

Thanks for linking to your blog post. I like the simplicity of not having to enter the web stack at all. I'm not very familiar with Varnish - how do you end up doing the actual rate limiting algorithm? I see you making a hash but not dripping tokens into a bucket.


The rate limiting was done by the application server, but was completely separate from the varnish layer. (I think it was something fairly naive to do with memcached, but that's not terribly important). What was important was that the application server could serve the 400* with an expiry, and varnish would deal with that load until the expiry.

*This was 2011. The 429 response code wasn't defined until 2012.
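
A minimal sketch of the application half of that pattern (illustrative, not the 2011 code; it assumes the caching proxy in front is configured to cache this status for the returned TTL):

    # Answer a blocked client once with a 4xx plus a short TTL; a caching proxy
    # in front (Varnish in the post above) serves the cached response until it
    # expires, so repeat offenders never reach the app during the block window.
    BLOCK_SECONDS = 60 # illustrative block window

    def blocked_response
      [429, # would have been another 400-series code back in 2011, pre-429
       { "Cache-Control" => "public, max-age=#{BLOCK_SECONDS}" },
       ["Too many requests - try again later\n"]]
    end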


That's an incredibly simple and elegant solution, and it's obvious how simple it is to scale it. Thanks for sharing!


A few questions. Have you looked at doing batch token assignment or "leases" from the centralized source to the workers? I.e. request 10 tokens from the central "bucket" when the local "bucket" reaches zero. You can dramatically reduce the number of remote operations, reducing synchronization latency and cost. Scaling the batch size based on past request rate is a nice further optimization.
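
For concreteness, the lease idea is roughly this (an illustrative sketch; the class and the central_take callback are hypothetical):

    # Batched token leases: one remote call per batch of N tokens instead of
    # one per request. `central_take` stands in for whatever atomic decrement
    # runs on the shared store (e.g. a Lua script on Redis).
    class LeasedBucket
      def initialize(batch_size: 10, &central_take)
        @batch_size   = batch_size
        @local_tokens = 0
        @central_take = central_take # returns 0..batch_size tokens granted
      end

      def allow?
        @local_tokens = @central_take.call(@batch_size) if @local_tokens.zero?
        return false if @local_tokens.zero?
        @local_tokens -= 1
        true
      end
    end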

Similarly, I'm not sure about your "atomicity requirements." Is perceived accuracy driving the centralized store and single-token semantics? Time skew alone is going to drive 0.1-1% inaccuracy in your rate limiters unless you're in crazy town doing 1PPS or PTP everywhere.

Some food for thought is consolidating "rate" and "concurrent" limitations. You'll frequently see the same dichotomy in pps- and bps-based limits. If you can derive a consistent unit of work you can use a single bucket. I.e. 10 requests that take 1 CPU second each have the same "cost" as 1 request at 10 CPU seconds. In the macro view, queueing theory treats them as having the same impact on the system.

You "fleet usage" case, and a few other comments, reads like a simulacrum of weighted queues. Expanding it to weighted priority queues is a really powerful abstraction. Imagine requests which are over their base rate limit are not discarded, but actually marked as the lowest priority. That allows you to do generalized strategies like rejecting requests (load shedding) based on queue depth (latency) and FIFO/FILO.

PS: everyone reading _please_ respond with a 400 series if the request fails due to client request properties like rate. It really really really sucks to try and guess if 503 means "I" was rate limited and should back off, or "you" failed and the request should be immediately retried.

PPS: A tactic to consider is returning the estimated recharge time when a request is throttled. A good client can use that to intelligently back off and achieve homeostasis with your server resources.
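
In code that's basically one extra header on the throttled response (a sketch; the recharge estimate depends on your bucket parameters):

    # When throttling, tell the client when a token should be available again.
    # With a bucket refilling at `rate` tokens/second, the next token arrives
    # roughly 1/rate seconds after the bucket empties.
    def throttled_response(rate_per_second)
      retry_after = (1.0 / rate_per_second).ceil
      [429,
       { "Retry-After" => retry_after.to_s },
       ["Rate limited - retry after #{retry_after}s\n"]]
    end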


Too late to edit, but thought I should clarify. Nice introductory post and demo code for the concept. I wrote the parent in a hurry after work. I meant it as a mix of earnest questions and comments that came to me as I read through. Apologies if it came across as unnecessarily challenging.


Thanks for including real numbers, e.g. "Only 100 requests were rejected this month from this rate limiter". Always interesting to see how much a particular trapdoor actually had to be opened.


I find it also helpful to see the number is above zero and the code paths are actually exercised regularly, not just in rare failure scenarios.


This is an incredibly well written, helpful article. Thank you! I love that you included the Gist as well to show some real code, and that you mentioned how often each type of rate limiting is triggered. This is like the best example ever of how to write a tech blog post!


Thanks so much. I'm really glad you liked it.


I'm curious what your thoughts are on implementing this via a reverse proxy in front of your services. In terms of implementation it appears much simpler to me. But it obviously comes at the cost of maintaining the proxy if you are, say, already using ELBs.


Before I read this article (having just seen the title) I assumed this was going to be covering a reverse proxy system, and started pondering how I would implement something like this myself. A very basic approach probably would be simpler than middleware, but the question I got stuck on was whether or not a reverse proxy would need to be "smart" about the content it was limiting. That is, would it need to be programmed to parse requests and understand what the intention and purpose was of each request?

For example, limiting traffic by IP address via reverse proxy is simple, but it seems like it would be more difficult to limit by request priority.

I was surprised when the article revealed it was middleware, and suddenly that made a lot more sense and seemed easier because it no longer requires duplicating application logic to understand the content of requests. Middleware definitely seems like the better approach to me after these considerations.

What kinds of methods would one use to solve the problem of needing to parse and understand incoming requests using the reverse proxy method?


From the bit I know, I think there are at least some options available. For example, in nginx I believe you can use the "map" module to mix and match different components of the request into your limiting.

From what I saw, HAProxy appears to be even more powerful. Its ACL concepts are completely able to create rules based on headers, IPs in the request, etc. and you can compose them into larger ACLs.

With the example of request priority, if you can determine the priority by its URL or a header, let's say, you can achieve this with nginx. But if you need to look the user up in a DB and see how much they're paying you, you obviously have to do it in the application.


This can be done either way, and is really infrastructure dependent. There are pros and cons on both sides. We opted to bake it into the web stack instead of in front so that we get all the benefits of existing infrastructure like log aggregation, exception tracing, HTTP error response formatting and tracking, user-specific gating, auto service scaling, etc.


Without entirely ruling out the opinion, you can take any advice that such logic must take place in a reverse proxy in front of the app with a grain of salt. It's one thing to place certain anti-DOS/DDOS logic in front of the app, but basic rate limiting that depends on inspection of each request (including cache/database lookups for per-user limit increases) is certainly something that is very commonly found at the application middleware layer (after user authentication and route parsing, but before the controller).

Rate limiting is not meant to be a hard defence against denial of service attacks - that is an entirely separate engineering problem, where even a reverse proxy is not enough protection when you're facing a network-level flood. The primary purpose of rate limiting isn't to prevent hitting the application at all - it's to prevent hitting the heavy logic that starts when the controller is invoked. As long as your application's bootstrapping layer is lightweight, there is no problem with leaving rate limiting as part of the application.


imo, this sort of middleware (api management) should always be handled by a reverse proxy


Could anyone give me a quick overview of how that would work? Or what it would simplify? (Just curious!)


I have been working on this recently, using nginx as the reverse proxy. Nginx makes it pretty trivial to limit requests based on various factors, like request rate, number of connections, etc. A post like https://lincolnloop.com/blog/rate-limiting-nginx/ has snippets that show how it works.

But the simplification is that you do not need to write any application code. I can very easily have certain routes or domains limited differently by updating the nginx config. Especially in a microservices world I am trying to avoid having to update N services to get all of them to be rate limited.


Here is [1] the source code to a rate-limiting middleware written for the Caddy webserver if you're into reading Go code. Should be a good sample.

The whole thing is 27 unique files, 1068 lines of Go, according to cloc.

[1]: https://github.com/xuqingfeng/caddy-rate-limit/blob/master/c...


is token bucket the best algorithm for this? why not ring buffered timestamps?

ex: https://github.com/blendlabs/go-util/blob/master/collections...


Both should work. Token buckets only have to store two things (count and timestamp) whereas a timestamp set has to store all N timestamps. I like the simpler approach since I don't actually need access to all the timestamps.

If you look at the concurrent request limiter I do indeed keep all the timestamps there in a Redis set. That one was more error prone to write in practice as I often would accidentally hit Redis storage limits.


~previous statement was incorrect~


In a token bucket algorithm you don't actually have a separate replenish step, it is baked into the next check you do. Similar to how in the example you linked the removal of the old entries is baked into the check step.

Check out my example: https://gist.github.com/ptarjan/e38f45f2dfe601419ca3af937fff... on line 23 with

    local filled_tokens = math.min(capacity, last_tokens+(delta*rate))
that adds in any tokens that should drip into the bucket since the last check.


Sorry, was thinking of something else.

Yeah, the algo is pretty standard. What we found is that some edge cases get super weird - namely, if you check on regular intervals you'll get false positives vs. bursty calls.

https://play.golang.org/p/Ujp7yeFQ3L


There is no replenishing process; it's just a conceptual thing. In code you just compute the time difference and multiply it by the fill rate to determine how many tokens to add before deducting any. Token buckets are popular because they work well, require fixed-size storage and are extremely simple to implement and test.
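
For example, a minimal single-process version of that math (an illustrative sketch; the Redis/Lua versions linked above do the same thing atomically):

    # Fixed-size state (token count + last-checked time), with the "replenish"
    # folded into each check - tokens accrued since the last call are added
    # before we try to deduct one.
    class TokenBucket
      def initialize(rate:, capacity:)
        @rate, @capacity = rate, capacity
        @tokens, @last = capacity.to_f, Time.now
      end

      def allow?(cost = 1)
        now = Time.now
        @tokens = [@capacity, @tokens + (now - @last) * @rate].min
        @last = now
        return false if @tokens < cost
        @tokens -= cost
        true
      end
    end

    # bucket = TokenBucket.new(rate: 100, capacity: 200)
    # bucket.allow?  # => true until requests outpace the refill rate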


I've put fair queuing into a web API. Requests queue by IP address; if you have a request in the mill, any further requests are processed behind those from other IP addresses. This handles single-source overloads very well. It doesn't require tuning; there are no fixed time constants or rate limits. For about a month, someone was pounding on the system with thousands of requests per minute. This caused no problems for other users. I didn't notice for several days.

One would think that this would be a standard feature in web servers.
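
A sketch of the round-robin idea (not the poster's implementation; names are illustrative):

    # One FIFO per client IP, served in rotation, so a flooding IP only ever
    # has one request "next in line" and everyone else keeps normal latency.
    class FairQueue
      def initialize
        @queues = Hash.new { |h, ip| h[ip] = [] } # ip => pending requests
        @order  = []                              # rotation of IPs with work
      end

      def push(ip, request)
        @order << ip if @queues[ip].empty? # newly active IP joins the rotation
        @queues[ip] << request
      end

      def pop
        return nil if @order.empty?
        ip = @order.shift
        request = @queues[ip].shift
        @order << ip unless @queues[ip].empty? # back of the line if more pending
        request
      end
    end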


This is an interesting approach to request rate limiting - the simplicity and lack of tuning is definitely appealing.

Did you do anything to mitigate the scenario where multiple users are behind the same IP address? With this approach I would worry about locking out all users in a NAT when a single user misbehaves.

It's also scary to me to process something "last" - under high enough load you might never get back to the (potentially briefly) abusive user? Did you attempt to guarantee some minimum passthrough rate even for misbehaving users?


As with fair queuing in routers [1], you have to avoid infinite overtaking. If someone has a request in progress, their other requests are stuck behind that one until the request in progress is serviced. Then their next request is eligible to be processed.

[1] https://tools.ietf.org/html/rfc970


This is a great idea. And unlike per-user rate limits, you should get better utilization of the web server. A drawback is that request latencies and queue sizes could get out of control without a feedback mechanism to the client. I would probably want some active queue management that returns a 429 if something stays in the queue too long. You could also put a single rate limiter in front of the web server to prevent server overload.


I return HTTP error 429 (Too Many Requests) when a single IP address has 100 requests queued. Anybody who gets into that situation is using the API incorrectly.


Where in your application stack did you implement this?

What were the hard kinks you had to figure out when coding this system?

I'm a fellow developer, would be interested in hearing you describe your system in more detail.


The queue is in a MySQL in-memory table, of all things. The requests are slow (they take tens of seconds, because they go out and scan a web site), so this is feasible. For smaller requests, you'd need a more direct mechanism.


So you have to hash by IP on your load balancer? You can't just do round robin. That seems to be the only downside I can think of.


We do the same, but instead of using Redis we built our own service for it, currently on top of LevelDB.

This allows us to define the parameters of the token bucket on the service instead of in the application, like:

  'requests':
    per_second: 100
    size: 200
Then in the app it's like:

  conformant = limitd.take('request', ip, 1)
I found the token bucket algorithm to be useful in a lot of scenarios, not only for rate limiting but also for any kind of event debouncing. A common example: let's say you want to email a user every time they trigger some condition on the system, but you don't want to send the same mail more than once a day.
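
For that mail example it's just a bucket of size 1 refilled once per day - roughly like this (illustrative, mirroring the take call above; the bucket name, key scheme and mailer are made up):

    # A bucket of size 1 refilled once per day: the first trigger sends the
    # mail, every further trigger that day is non-conformant and dropped.
    #
    #   'condition_email':
    #     per_day: 1
    #     size: 1
    #
    def maybe_notify(limitd, user, condition)
      conformant = limitd.take("condition_email", "#{user.id}:#{condition}", 1)
      ConditionMailer.alert(user, condition).deliver_later if conformant
    end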

Our project is open source; we're still working on it:

https://github.com/auth0/limitd


Many engineers discover the need for rate limiters the hard way :-).

I just wonder, do you use rate limiters just for external API or also for internal API of your microservices?


We do have rate limits on some of our more crucial services (we don't use microservices per se, more like macroservices).

One difference for internal rate limits is we set alerts whenever they fire so that we can track down the client and see what they were doing. It is much easier to do that when you own both sides of the pipe.


I think since it's internal services, you control both the client and server side. Applying back-pressure would be a better alternative than just using a rate limiter on the server end and letting your clients flood your internal service interfaces.


A big percentage of our internal services are near-real-time asynchronous event consumers. Writing a consumer is not all that different than writing a service. For events and consumers we have mechanisms that serve similar functions as API rate limiting, but they have a different shape.

So the answer is probably a yes?


Another way to add fast, distributed rate-limiting on top of your APIs is by using an API Gateway [1] like the open-source Kong [2]. This pattern is becoming a standard since it saves you from re-implementing and duplicating the same rate limiting logic in each backend microservice.

[1] http://microservices.io/patterns/apigateway.html

[2] https://getkong.org/plugins/rate-limiting/


Yeah, worth mentioning https://tyk.io too.

Also, do not forget about quotas, which usually come along with rate limits. Modern API gateways can handle so much stuff for you and help with API scaling.


FYI: Redis now has a module that does rate limiting for you: https://github.com/brandur/redis-cell


Can't help but put a link to https://www.ratelim.it in here. A rock-solid distributed rate limiter was something I desperately missed when moving to a smaller company.

Primarily I use this for idempotency and throttling things like usage events. But you can also use it for locking and concurrency control.


Reminds me of a website I built once (more like a side project) where I anticipated heavy loads and all sorts of nefarious users / bad situations so I spent a good chunk of time implementing and testing rate limiters into most of the critical requests.

I ended up getting maybe... 3 or 4 well-behaved visitors / day for the first two years.


Great article that I expect will assist many teams that are facing scaling issues.

I thought it appropriate to plug my Java-based rate limiting library that implements the token bucket algorithm mentioned in the article:

https://github.com/mokies/ratelimitj


Very interesting, we have throttling for our customer facing API built with PHP and found this library very useful: https://github.com/sunspikes/php-ratelimiter#throttler-types


Ignorant guy here:

Wondering how the two words "scale" and "limit" go together. My only experience right now with someone's API and rate limiting is Cloudinary. They give you 500 requests/hr (free). Which can be a lot or nothing at all.

No, sorry, it does make sense - don't allow one person to exhaust/consume your resources.

What about using Go? I hear crazy stuff like going from 2000 servers to 2, and doing 25,000 requests per second. Or is this a bandwidth concern?


How would you handle the problem of one API user exhausting all your resources without your servers melting? Other than spending an ever-increasing amount of money expanding capacity, you would need rate limiting. What if the API call is computationally expensive? What if someone is trying to maliciously use your API?


Yeah, I'm not disagreeing with the author here. It didn't occur to me immediately what was being done. Scaling and limiting seem like antonyms, but I get the point. Especially if it's free (the API access). Even premium/paid would need limits.

For the case here, where the client insisted on keeping the 500/hr limit, I cached the query results in a database since the values don't really change. But if they did change in the future that could be a problem, so you'd have to update the "cache".

Regarding handling overflow, that's something I haven't done yet myself, still stuck in the LAMP days, and I have not done something like benchmarking my server to see how many concurrent connections/requests it could handle.

note: LAMP isn't an excuse, I'm just making a note that I'm behind in using other technology that could be better. But I've heard of modular Apache configs/routing... stuff beyond me at this point.


Man I have yet to experience what people are talking about here. By that I mean actually have the traffic to worry about requests/sec. I'd be lucky to get 100 visitors per month haha, I don't really have anything. Good stuff to look forward to, glad to be able to learn from other people.


Well written. I liked the "Building rate limiters in practice" section and your advice on launching dark seems like a great idea.


Great post. However, does anyone know how rate limiting specific users is typically implemented? For instance, if you have a SaaS API with multiple subscription plans you generally rate limit users based on tier, e.g. free tier: up to X requests, paid tier: unlimited requests, etc. I am assuming this is typically handled in the API itself.

Thanks in advance.


Thanks! The buckets are already done per-user so it is very simple to make the constant factors be user-dependent.

For my example in https://gist.github.com/ptarjan/e38f45f2dfe601419ca3af937fff... you would just set REPLENISH_RATE to be a different value for different users.


Thanks!


Bookmarking this so I can go back to it when we have similar issues which are a function of scale, I assume.


This article kind of makes me glad we decided to build our SaaS API on top of AWS API Gateway. At least I don't have to worry about the fine details of implementing rate limiting - just outsource that hard stuff to AWS and tuck our actual API endpoint behind their gateway.


Does Stripe really allow 1,000 reqs/sec, or was that just an example limit? Is that normal i.e. expected for an API service like Stripe to offer? I usually see rate limits of 5-25 reqs/sec, but maybe I haven't been looking at the right APIs?


Well Stripe is handling payments for other companies so any unserved/blocked request could cost their clients literal money/customers. Also their customers could suddenly get an influx of orders/customers following things like Superbowl commercials so their limits would need to be pretty high.

I'm just guessing here, but Stripe would probably only enforce these limits when they have some infrastructure issue or emergency, and in most cases would allow all non-abusive uses of their API through.

Twitter and other such services can use much lower rate limits because it's OK if a user is unable to post a new tweet for a few seconds.


Good point. That's what I figured--for example, a background job running end-of-month subscription charges for a large co. could potentially slam the API for a few hours.


Mousewheel doesn't work for me on this website


Could you send me an email with what browser and OS version you're using? edwin.wee@stripe.com


Works fine for me (Win10/Chrome)


I'm a bit torn here. I love Stripe, but I feel like this is awful advice for API-first companies.

If you are building a developer product where your API is your business, using rate-limiters is akin to preventing customers from giving you money. If your product is an API, you should encourage usage. The more usage you get, the more success both you and your customer will have. It's a win-win.

Because of this, I believe implementing rate-limiting strategies results in not only poor DX for the product, but also a loss of trust (if these limits are in place to prevent your backend from failing, what other things do I need to worry about while using your product?), AND most importantly, loss of business for both the API company and the customer.

IMO, if you're an API company, and you can't handle bursts of traffic from your customers, you should work on improving your backend and stop wasting time messing around with implementing patterns like this. It's a lose-lose situation for you and your customers.

I was really inspired by this recently after spending some time @ Twilio. Jeff (and the rest of the Twilio team) are hardcore about their API first product / thinking. They have a motto which is something along the lines of this: if you and your customer are both more successful and both make more money when the API usage goes up, do whatever you can to get out of the way and let the API be used as much as possible. I thought that was an awesome approach to take.


Sorry, but this is naive and bad advice. Rate limiting is a critically missing component in many service APIs, for more reasons than I can even comprehensively enumerate off the top of my head.

Scaling an API of any real value is NOT trivial, and struggling to scale an API to meet user demand does NOT necessarily mean that the backend was poorly designed. This is a naive generalization that is hazardous to the industry. Please don't spread it.

Here are some reasons why a lack of rate limiting / user auth is practically negligence. There are more, to be sure. I have experience operating a customer facing API for Bazaarvoice, so I think I know what I'm talking about. (We do thousands of requests per second and power reviews for the likes of Walmart, Best Buy, and 4,000 other retailers and brands worldwide.)

* Multi-tenancy
  * client A overextends and causes client B to be unable to use the API
  * client A needs scale independent of clients B-F
* Monitoring
  * suddenly a client is making fewer than usual calls - why?
  * suddenly a client is making more than usual calls - why?
* Billing
  * want more requests/second? Upgrade your contract
  * it's easier to measure how much I should charge customers per request or type of request when I can see the rates of those requests and what it costs me
* Security
  * DDOS attack? Start by setting the limit to nothing, or rejecting the requests
  * leaking API auth info is less dangerous, if it happens

I think some other sibling comments mentioned other great reasons. The takeaway is that a valuable API will most likely be difficult, expensive or both difficult and expensive to scale, and rate limiting is extremely important.


Rate limiting - specifically leaky bucket algorithms that space out requests evenly rather than servicing them as they happen to come in - was shown to improve overall performance in a few systems I worked with.

Using a leaky bucket algorithm and a per-customer bucket, I think it's possible to build "fair" systems that also improve performance.

That is, you can run the system at a higher total transactions per second just by queuing "simultaneous" requests for a few milliseconds, as they will complete quicker.

The reason is probably that it's reducing contention and levels out the resource usage.

I thought it would be a feature in almost all web servers, since it's been known "since forever" in the telecom world, but I have not seen it. (I have not looked specifically either, so maybe there is good support for this everywhere and I missed it...)
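
A sketch of the "space them out" variant, per customer (illustrative only, not the systems mentioned above):

    # Each customer has an earliest-allowed-time cursor; instead of rejecting a
    # burst we delay each request to its slot, so the backend sees an even
    # per-customer trickle.
    class LeakySpacer
      def initialize(rate_per_second)
        @interval  = 1.0 / rate_per_second
        @next_slot = Hash.new { |h, k| h[k] = Time.now }
      end

      # Returns how long to hold this request before servicing it.
      def delay_for(customer)
        now  = Time.now
        slot = [@next_slot[customer], now].max
        @next_slot[customer] = slot + @interval
        slot - now
      end
    end

    # spacer = LeakySpacer.new(50)        # 50 req/s per customer
    # sleep spacer.delay_for(customer_id) # queue a few ms instead of rejecting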


Rate limiters are fantastic for companies and developers starting to integrate with a new API. There's great comfort in knowing - no matter how badly I screw up I won't end up with a $10k+ bill at the end of the month.

The way most smart companies do it is totally sufficient. Low but reasonable entry-level rate limit with a simple support contact to get it lifted. Maybe you're not aware but that's the exact model that Twilio uses[0].

[0] - https://support.twilio.com/hc/en-us/articles/223183648-Sendi...


I was not aware of this. Thank you.

I'm not against allowing the customer to set rate limits, etc. What annoys me in general are services I want to use, and sometimes need to burst traffic to for a variety of reasons, but can't for an artificial reason :(


The reason is not artificial, though, that's the whole point.


> IMO, if you're an API company, and you can't handle bursts of traffic from your customers, you should work on improving your backend and stop wasting time messing around with implementing patterns like this. It's a lose-lose situation for you and your customers.

Improving performance can be significantly more expensive and time-consuming than implementing simple rate limits. A realistic scenario is something along the lines of:

- 99% of time your request rate is N requests/min or lower

- 1% of time your request rate exceeds N requests/min, which could cause service degradation

You can deploy infrastructure to handle the 99% case, slap rate limits in front of the service, and sleep well at night. It's often not worth it to pay for additional infrastructure, spend time optimizing, etc for that 1% case.

As an API user, the way to think about rate limits is as a form of protection from other (misbehaving) users. Everyone is going to be upset when one user's script has a tight-loop firing at 100000 requests/sec and then the API becomes lethargic, throws errors, or goes down all together.


Request rates are measured in requests/second.

If you are counting requests/minute, you're seriously averaging out your peak load and you're about to epically fail in production.


You're being downvoted but there is some truth to this.

For example, I inherited a relatively poorly performing service with a per-user limit set to 60 req/min. As the service operator, I have the choice of setting either a rate limit of 60 req/min or 1 req/sec. The former (60/min) leaves you open to per-user spikes of up to 60 req/sec. This is a real risk: 10 users together _could_ produce a spike of 600 req/sec.

Still, we went with the per-minute rate limit. Why? Those multi-user bursts are fairly improbable (most of our users were well-behaving) and queueing let us manage these isolated spikes gracefully. The per-minute rate limit is a bit more flexible from a user perspective (reward the good users) and it still combats the problem of sustained heavy load on the service, which is the true danger (stop/limit the "bad" users).


I was not talking about the rate limit but about the performance of the application.

Performance must never be measured in requests/min. Only requests/sec matter, because your performance is defined by the peak load you can take (which is many times the minute average).

Limits should cover more than a few seconds to allow short peaks, yet throttle quickly if they persist.


I think you might be missing the assumptions here. No service I've ever seen is 100% available. If you assume that things can (and will!) fail then you need to design for failure.

Load shedding helps mitigate failures caused by insufficient capacity, and it helps when you need to prioritize some things over others.


> IMO, if you're an API company, and you can't handle bursts of traffic from your customers, you should work on improving your backend and stop wasting time messing around with implementing patterns like this. It's a lose-lose situation for you and your customers.

Counter-argument: that's not a reason not to have these things; it's why you should think about implementing patterns like this before an unforeseen situation happens where they are useful. So they can limit the pain while you fix whatever is causing the current load problem.

I agree when it comes to strict limits like "a single customer can only do X requests per hour", but that's not all rate limiting is good for.


I don't think the idea is to reduce API usage overall.

And, you don't make more money because someone's buggy script is hammering you with 1000 GET requests a second (for example).


It depends. If your product is your API, and you're getting paid from usage -- then usually yes -- the more API usage you get, the more $ you make, and the more success the customer has (hopefully!)


But they are not getting paid per API call; they are getting paid for successful transactions going through their API. And if I can hammer the API with useless listing calls while a big customer cannot initiate transactions, they are losing money.


> IMO, if you're an API company, and you can't handle bursts of traffic from your customers, you should work on improving your backend and stop wasting time messing around with implementing patterns like this. It's a lose-lose situation for you and your customers.

I'm surprised this isn't more obvious, especially to a company like Stripe that is built on one of the slowest programming languages/platforms in existence. It isn't hard to come up with a list of languages that would easily provide an order of magnitude more capability per server (yes, maybe compute is not their bottleneck, but I've never seen an RoR setup that was bottlenecked by anything other than RoR). And that not only makes it cheaper for the baseline, but it also makes it cheaper to overprovision for safely handling spikes, and both cheaper and faster to scale when needed.

And like you said, not responding to demand is just preventing your customers from giving you money...but it is worse. If you are a growing company and one of your badly needed services is your bottleneck, that service is gonna be the first one to go. API limiting should be considered an attrition risk.


Why not go out and build a better stripe with a better language and show them how it is really done? Clearly you know more about building web scale payment apis than they do.


Done! That's called Adyen :D

https://www.adyen.com/

It's a European company that's 10 times the size and has lower fees.

Stripe is having a harder time on that side of the Atlantic ;)


Rate limiting is a way of failing gracefully. Ideally your request handlers never fail, but if they have to, having a rate limiter filter out some requests (based on rules you control) is usually better than the default failure behavior of your server framework.


I disagree with the parent comment, but why is everyone downvoting this? It was a politely presented opinion, and the following discussion is entirely civil.



