
We've hit 3 problems with nginx:

1. Exactly this: we had mystery double trades from our clients, and it took us a long time to realise it was nginx assuming we had timed out and routing the request to the next server (there's a config sketch after this list).

2. It doesn't do real (out-of-band) health checks. When a server goes down, it will still send 1 out of every 8 real requests to the down server to see if it has recovered. Having disabled resubmitting of requests to avoid the double-trade issue above, this means that when one of our servers is down, 1 out of every 8 requests gets an nginx proxy error, which is significant when you have multiple API calls on a single page.

3. This isn't something I've personally hit, so I can't explain the nitty gritty, but it's something one of my coworkers dealt with: Outlook webmail does something weird where it opens a connection with a 1 GB Content-Length, then sends data continually through that connection, sort of like a push-notification hack. Nginx, instead of passing traffic straight through, will buffer the response data until it reaches the length given in the header (or until the connection is closed). I don't know if nginx is to blame for this one or not, but I do feel that when I send data through the proxy, it should go right through to the client, not be held at the proxy until more data is sent.
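For points 1 and 2, the relevant knobs look roughly like this - a minimal sketch, with the upstream name, addresses and timeouts made up. proxy_next_upstream off stops a timed-out request being replayed against another server, and max_fails/fail_timeout are the parameters behind the passive checks nginx does with real traffic:

    upstream trading_backend {
        # passive checks: mark a server down after 3 failed requests
        # and keep it out of rotation for 30s before trying it again
        server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
        server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
    }

    server {
        location /api/ {
            proxy_pass http://trading_backend;
            # never replay a request on another server; return the
            # error to the client instead of risking a double trade
            proxy_next_upstream off;
            proxy_read_timeout 30s;
        }
    }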

HAProxy also solved our issues and is now my go-to proxy. Data goes straight through, it has separate health checks, and it adheres better to HTTP standards. It can also be used for other network protocols, which is a bonus.
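For comparison, the "separate health checks" bit looks roughly like this in HAProxy - a minimal sketch, with the backend name, addresses and check URL made up. The probes go out of band on their own schedule rather than riding on real client requests:

    backend trading_backend
        mode http
        # dedicated health checks: poll /healthz every 2s; a server is
        # marked down after 3 failed checks and up again after 2 good ones
        option httpchk GET /healthz
        server app1 10.0.0.1:8080 check inter 2s fall 3 rise 2
        server app2 10.0.0.2:8080 check inter 2s fall 3 rise 2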




Whilst Nginx doesn't do health checks, they are available in Nginx Plus. I do appreciate that it is a paid-for product, but it has a number of strong features over and above the OSS version, and of course support (who are very responsive indeed).
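To illustrate - a sketch only, since this needs an Nginx Plus licence, and the upstream name and probe URI are made up - the commercial version adds an active health_check directive (which requires a shared memory zone on the upstream):

    upstream backend {
        zone backend 64k;        # shared memory zone needed for health_check
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    server {
        location / {
            proxy_pass http://backend;
            # Nginx Plus only: actively probe /healthz every 5 seconds
            health_check uri=/healthz interval=5;
        }
    }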


3. is the reason why NGinX is the recommended proxy in front of webapps with scarce parallelism (for example Ruby with Unicorn; see http://unicorn.bogomips.org/PHILOSOPHY.html for an explanation) when "slow clients" are to be expected. NGinX protects the webapp's workers from being blocked by slow clients, and Outlook Webmail seems to behave just like one. I don't know by heart how to tune this behaviour if one wants to avoid it, but this property is the main reason we use NGinX.
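Roughly what that recommended setup looks like - a sketch, with the socket path assumed and the buffering defaults written out explicitly. The proxy absorbs the slow client on both sides, so a single-threaded Unicorn worker is only occupied while actually generating the response:

    upstream unicorn {
        # Unicorn workers listening on a local UNIX socket (path assumed)
        server unix:/var/run/unicorn.sock fail_timeout=0;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://unicorn;
            # defaults, shown explicitly: nginx buffers the request and the
            # response so slow clients never tie up a Unicorn worker
            proxy_buffering on;
            proxy_request_buffering on;
        }
    }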


That's a… unique - and wrong - spelling of the name. (Pet peeve of mine; people spell my app's name in all sorts of bizarre ways too.)


This sounds like something else. In the Outlook case, their servers seem to use the connection as a stream (which is actually valid, although not really supported by browsers outside of the event-stream class), where the server only writes little chunks of data at a time. But the server there is not hindered from writing by a slow client - it simply has no more data to write at that point in time.


Regarding 3, buffering behavior is highly configurable in nginx (e.g. proxy_request_buffering and proxy_buffering can be turned on/off).
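For example - a sketch, with the location and backend names made up - a streaming endpoint can opt out of buffering in both directions:

    location /stream/ {
        proxy_pass http://backend;
        # pass response bytes to the client as they arrive instead of
        # accumulating them at the proxy
        proxy_buffering off;
        # forward upload bodies to the upstream immediately
        # (only available in newer nginx; see the 1.8 comment below)
        proxy_request_buffering off;
    }

The upstream can also switch response buffering off per request by sending an "X-Accel-Buffering: no" header.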


Wrote a post where I ran into (and fixed) this problem with streaming uploads through nginx: http://killtheradio.net/technology/nginx-returns-error-on-fi...


It's only as of 1.8 that you can disable buffering of incoming requests, though. That was just a few months ago, iirc.


Nginx can also be used for other protocols; see the stream block.
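A minimal sketch of that (ports and backend address made up; needs nginx built with the stream module) - the block sits at the top level of nginx.conf, alongside http {}, and does plain TCP proxying:

    stream {
        upstream mysql_backend {
            server 10.0.0.5:3306;
        }

        server {
            listen 3306;
            proxy_pass mysql_backend;
        }
    }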


I didn't know that. Thanks for the correction.



