
First of all, it's pointless to monitor how much of your system is in use by 'number of users'. That's not a metric. You look at your IOPS metrics to figure out how loaded it is. Once you've gathered trending data, you can come up with an average IOPS load for a given number of users.
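
As a rough illustration of what that derivation looks like, here's a minimal sketch; the sample numbers and the 40k-IOPS ceiling are invented, and in practice you'd pull the per-interval IOPS and concurrent-user counts from whatever monitoring system you already trend with:

    # Minimal sketch: derive average IOPS per concurrent user from trend samples.
    # The samples are hypothetical (interval timestamp, measured IOPS, concurrent
    # users); in practice they'd come from your monitoring system's export.

    samples = [
        ("2011-01-10T12:00", 8400, 2100),
        ("2011-01-10T13:00", 9100, 2300),
        ("2011-01-10T14:00", 8800, 2250),
    ]

    iops_per_user = sum(iops / users for _, iops, users in samples) / len(samples)
    print(f"average IOPS per concurrent user: {iops_per_user:.2f}")

    # Rough projection: how many concurrent users a known IOPS ceiling supports.
    iops_ceiling = 40000  # assumed per-cluster ceiling, found by stress testing
    print(f"estimated capacity: {int(iops_ceiling / iops_per_user)} concurrent users")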

Secondly, you should know what your capacity is. Stress testing exists for a reason.




First of all, no, it isn't. At a large enough scale, any metric that goes up with usage is a perfectly fine proxy for many conversations. Every day for a decade we had the same 3% of our user base online at peak doing the same transactions as yesterday. Over longer periods, the number crept up and the transaction mix changed, but "number of simultaneous users" got me accurate, if not precise, capacity planning from 27 to 1.5 million users.

Stress-testing, shared-nothing and dollar-scalable are platonic ideals, and they're not always achievable. If Dropbox had three infrastructure engineers, they probably weren't able to build proper capacity planning models, and probably couldn't afford to build a full production work-alike for stress testing anyway. (And at some scales, that's literally impossible. Our vendors couldn't physically manufacture enough servers to build a full test environment, cost aside.) I'm sure they did some simulated tests as well, but those won't tell you the whole story.

You're focused on IOPS, but you have no idea if that's what Dropbox's bottlenecks were. (Not to mention: What does IOPS mean on an EBS and S3 infrastructure?) Complex systems fall over in complex ways. You can predict the next bottleneck, but not the one after that; by the time you get there, your fix for the first bottleneck will have changed the dynamics.

It sounds like they did do stress testing, using real-world loads, on a system that was identical to their production system. They ran continuous just-in-time stress tests in the Big Lab.


Maybe your application/environment was stable enough that site performance didn't change drastically over a decade. I've seen performance shift dramatically with the same number of users for all sorts of reasons (high-post flattened comment threads being slammed when Michael Jackson died, software upgrades that mysteriously load up servers on some requests, and of course feature adds). When a site issue occurs, I don't reach for "how many users are logged in?"; I reach for my sorted list of machine- and service-specific metrics and look for irregularities.

That being said, trends in user visits are of course great numbers for capacity planning, because they give you an idea of how much growth to expect in the near future. But that's only a vague multiplier: you still need to know how beefy a box to get (by stress testing to determine its capacity) and then multiply by the growth factor. And in practice it's usually more complicated than that.
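
To make the multiplier concrete, here's a back-of-the-envelope sketch of that arithmetic; the per-box figure, growth factor, and headroom are all made up for illustration:

    import math

    # Back-of-the-envelope sketch of "stress-test a box, multiply by growth".
    # Every figure below is invented for illustration.
    users_per_box = 5000          # from stress testing a single box
    current_peak_users = 60000    # today's peak concurrent users
    expected_growth = 1.8         # growth expected over the planning window
    headroom = 1.25               # safety margin, because it's always messier

    boxes_needed = math.ceil(current_peak_users * expected_growth * headroom / users_per_box)
    print(f"plan for {boxes_needed} boxes")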

Stress testing doesn't have to be a formal process in all environments. You might just have a developer with a new chat server who wants a benchmark of how many users can join and chat before CPU peaks. An hour or two of coding should provide a workable test on comparable hardware, which can then be generalized with tests of other software to give an idea of capacity when a certain number of users are logged in and performing the same operations. The point isn't to know with 100% certainty when you will fall over, but to have at least an idea of when, so you don't have to actually fall over to figure out when and where to scale.
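
Something like the following is the level of effort I mean: a throwaway script that ramps up fake chat clients and watches CPU. The localhost:9000 line-based server, the ramp sizes, and the use of psutil are all assumptions, not anyone's actual setup; run it on the same box as the test server (or sample the server's CPU separately) for the numbers to mean anything.

    # Quick-and-dirty load test for a hypothetical line-based TCP chat server on
    # localhost:9000. Ramps up simulated clients and reports CPU so you can see
    # roughly where the box starts to peak. Not a rigorous benchmark.
    import socket
    import threading
    import time

    import psutil  # third-party; used only to sample CPU between ramp steps

    HOST, PORT = "127.0.0.1", 9000  # assumed test server, not a real service
    stop = threading.Event()

    def chat_client():
        try:
            with socket.create_connection((HOST, PORT), timeout=5) as sock:
                while not stop.is_set():
                    sock.sendall(b"hello from a fake user\n")
                    time.sleep(1)  # one message per second per simulated user
        except OSError:
            pass  # connection refused/reset just means we found a limit

    threads = []
    for step in range(10):                    # ramp: 50 more clients per step
        for _ in range(50):
            t = threading.Thread(target=chat_client, daemon=True)
            t.start()
            threads.append(t)
        time.sleep(10)                        # let the load settle
        cpu = psutil.cpu_percent(interval=1)  # sample CPU on this box
        print(f"{len(threads)} clients -> {cpu}% CPU")

    stop.set()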

I have no problems with very-short-term big lab stress testing. We had the same issue at my last place, and with lots of caution, it worked fine. But jesus christ, if I told my bosses "I think we should run all the servers with extra load until they fall over, then re-evaluate", they'd look at me like I had antlers growing out of my head.



