
Typo on the front page:

We run our servers on Amazon Web Services and use their elastic load balancer (which is were we terminate SSL).

which is _where_ (?)




We've been bitten by EC2 instances having issues accepting incoming connections (multiple HAProxy boxes in TCP mode to an STunnel cluster), and we've never had that issue in testing with ELB. ELB also beats our failover time when we lose an EC2 machine.

But ELB is in no way a permanent part of our infrastructure (nothing is permanent), especially as we move toward supporting technologies such as SPDY on spire.io or, for the right customer requirement, SSL throughout the network stack. We're also fond of Stud running on our internal servers. I do think ELB is the right tool for our cloud today.
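For context, a minimal sketch of the HAProxy-in-TCP-mode edge feeding STunnel that we moved away from (names and addresses here are illustrative, not our actual config):

    # haproxy.cfg (sketch; hypothetical addresses)
    frontend tls_in
        mode tcp
        bind *:443
        default_backend stunnel_nodes

    backend stunnel_nodes
        mode tcp
        balance roundrobin
        server stunnel-a 10.0.1.10:443 check
        server stunnel-b 10.0.1.11:443 check

HAProxy never decrypts anything in this setup; STunnel terminates SSL and hands plaintext to the app tier, which is the layer ELB handles for us now.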


Haven't you run into performance issues terminating SSL on ELB? For me the performance is so-so, as I'm using a 2048-bit key and it seems I hit the maximum requests-per-second limit pretty fast. There are a pair of threads regarding this issue on the AWS forums, where a user did a really exhaustive test of ELB and even got an Amazon engineer to look into it:

https://forums.aws.amazon.com/thread.jspa?messageID=327283
https://forums.aws.amazon.com/thread.jspa?messageID=327715
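If you want to see the handshake ceiling yourself, a quick sketch of the kind of test that exposes it (the hostname is a placeholder):

    # full TLS handshakes per second against the ELB; -new forces a fresh
    # handshake per connection, which is where the 2048-bit key cost shows up
    openssl s_time -connect my-elb-123456.us-east-1.elb.amazonaws.com:443 -new -time 30

The connections-per-second figure it reports is dominated by the RSA operation, so a 2048-bit key caps out much sooner than a 1024-bit one.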


We're looking pretty good in AWS West 1a. The second thread you linked shows great performance from markdcorner's second load balancer -- the https://forums.aws.amazon.com/servlet/JiveServlet/download/3... image -- versus the original ELB, which SpencerD@AWS describes as a custom ELB with some customer-requested secret sauce (perhaps a "slow start" toward some backend servers?). Indeed, we have reached out to AWS a few times in the past and had some magic (at the time, ciphers and removing SSL v2) done to our ELBs.

We have seen performance issues with ELB, which is why we originally went with TCP-mode HAProxy at the edge of our stack feeding a cluster of STunnel servers, but again, reliability was an issue there, and our ELB performance at up to 10K rps looks great in benchmarks. Past 10K we are considering separate dispatchers behind a separate ELB, but at that point I am also tempted to, frankly, switch to our own metal.

Curious: are you comparing ELB performance vs High I/O EC2 instances (say m1.xlarge) open to the world?


Well, I didn't know you could fine-tune your ELBs. In fact, reading the Developer Guide (http://awsdocs.s3.amazonaws.com/ElasticLoadBalancing/latest/..., page 46) it seems that it's now possible to choose SSL protocols and ciphers via the web interface (and I suppose also via the API).
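For anyone else looking for the API side, it appears to be an SSL negotiation policy attached to the HTTPS listener; roughly like this with the AWS CLI (load balancer and policy names are placeholders, and the attribute names should be double-checked against that guide):

    # create a policy pinning SSL protocols/ciphers, then attach it to port 443
    aws elb create-load-balancer-policy \
        --load-balancer-name my-elb \
        --policy-name my-ssl-policy \
        --policy-type-name SSLNegotiationPolicyType \
        --policy-attributes AttributeName=Protocol-TLSv1,AttributeValue=true \
                            AttributeName=Protocol-SSLv2,AttributeValue=false

    aws elb set-load-balancer-policies-of-listener \
        --load-balancer-name my-elb \
        --load-balancer-port 443 \
        --policy-names my-ssl-policy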

Regarding the comparison you suggest, we are too short-handed right now not only to do that kind of comparison but even to think about managing our own load balancers ;) Thanks for the suggestion anyway.


Thanks, we're fixing it now.



