mm, strange math.

179 means that while the app is holding and communicating with 1 million persistent connections, it is still able to process 100+ standard requests per second. Pretty enough to accept new clients to your online game, audio/video chat, podcast, etc.

This graph shows how many requests per second the app can process depending on the number of established persistent connections:

https://raw.github.com/slivu/1mc2/master/results/requests-pe...
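For reference, here is a minimal sketch of how a measurement like that might be taken. This is an assumption about the methodology, not the actual benchmark code from slivu/1mc2: hold N idle persistent connections open, then time plain HTTP requests against the same server.

    import socket, time

    HOST, PORT = "127.0.0.1", 8080   # hypothetical server under test
    N_PERSISTENT = 10_000            # scaled down from 1 million for illustration
    N_REQUESTS = 1_000

    # Hold N idle TCP connections open to simulate persistent clients.
    # May require raising the open-file limit first (ulimit -n).
    idle = [socket.create_connection((HOST, PORT)) for _ in range(N_PERSISTENT)]

    # Measure how fast plain HTTP requests are served alongside them.
    start = time.time()
    for _ in range(N_REQUESTS):
        c = socket.create_connection((HOST, PORT))
        c.sendall(b"GET / HTTP/1.0\r\nHost: test\r\n\r\n")
        while c.recv(4096):          # drain the response until the server closes
            pass
        c.close()
    elapsed = time.time() - start

    print(f"{N_REQUESTS / elapsed:.0f} requests/second "
          f"with {N_PERSISTENT} idle connections held")

    for s in idle:
        s.close()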




I don't think your original post is very clear at all. I get the same number as the post you are criticizing: 1,000,000 / 179 / 60 / 60 ≈ 1.55 hours.
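Spelled out, assuming requests are served strictly one at a time at that rate:

    rate = 179                   # requests per second
    clients = 1_000_000
    seconds = clients / rate     # ≈ 5586.6 s to touch every client once
    hours = seconds / 3600       # ≈ 1.55 hours
    print(f"{hours:.2f} hours")  # 1.55 hours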

Are there other messages being processed too? Which direction are the messages going? How big are they, what do they contain, and how is the data processed? Also, if you have 1 million clients, how do you handle all of them arriving at around the same time? How long did it take your system to get up to 1 million connections? And how sure are you that each micro instance is sending and receiving messages at exactly the expected rate and is not getting overloaded? What server type are you using for your central server?


"pretty enough to accept new clients to your online game, audio/video chat, podcast etc."

If 'pretty enough' means only allowing 179 new connections to your server per second, that's great. When you have a large event, everyone gets on your site at the same time, not to mention spikes like the morning commute, lunch break, and the evening rush. Ever hear of the Slashdot effect? That's more than 200 requests per second.

Ignoring the poor connection time, you could only have 179 users actively using your app every second. Out of a million. That's 0.0179% of your user base. Talk about really shitty user engagement.
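The arithmetic, for the record:

    active = 179
    total = 1_000_000
    fraction = active / total            # 0.000179
    print(f"{fraction:.6f} = {fraction * 100:.4f}% of users served per second")
    # 0.000179 = 0.0179% of users served per second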

I'm not even going to get into what an incredibly bad idea it is to host a million connections on a single server. The idea that "most websites" only do "about 100 requests per second" is laughable. Sure, the average may be 100 over a month, but that's nothing compared to peak times. Try tens of thousands per second. A high-traffic site might do something on the order of thousands of database writes per second, which, when all those connections come in, will kill your database servers, which backs up your frontends, which is why you have to have a fast front-facing pre-loaded cache. But I digress.
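A back-of-envelope illustration of why the monthly average is misleading (the 10x peak-to-average ratio here is an assumption picked for illustration; real spikes can be far higher):

    avg_rps = 100
    seconds_per_month = 30 * 24 * 3600          # 2,592,000
    monthly_requests = avg_rps * seconds_per_month
    peak_to_avg = 10                            # assumed ratio
    print(f"{monthly_requests:,} requests/month, "
          f"peaks around {avg_rps * peak_to_avg} rps")
    # 259,200,000 requests/month, peaks around 1000 rps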

Focus on scaling your application to actually handle traffic before you obsess over concurrent connections.



