In practice they will have to drastically undersell their hosts in order to guarantee the burst-capacity.
If you need to reserve 8G for my 256M instance because I could burst to 8G at any time - then why not just sell me the 8G instance directly?
Perhaps they have some really interesting use for this "volatile" spare-capacity (something like the EC2 spot-market?), but that seems like an awfully complex endeavor.
That's not really answering the question, is it? Increasing density just means you get to buy fewer servers. It doesn't deal with the problem of everyone on a maxed-out server asking for an increase at the same time.
That's the difference between guaranteed capacity and most services. It's very common to use the properties of normal usage patterns and pool the excess capacity. The phone system has done that for ages. Ever try to make a call during an emergency?
Just because their service doesn't cover certain extreme situations does not mean it's worthless. It would be worth knowing how much excess capacity they are pooling together, but vendors generally won't release those types of details.
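To make the pooling point concrete, here's a back-of-the-envelope sketch in Python (all numbers made up by me; the vendor hasn't published anything like this): if each instance bursts independently with some small probability, the chance that enough of them burst at the same time to exhaust a host drops off very quickly.

    # Back-of-the-envelope overcommit model. Numbers are illustrative only.
    # Assume n instances on a host, each independently bursting to its full
    # reservation with probability p at any given moment, and a host that can
    # absorb up to k simultaneous bursts before it runs out of memory.
    from math import comb

    def p_overload(n, p, k):
        """Probability that more than k of n instances burst at once (binomial tail)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

    # e.g. 100 instances each bursting 5% of the time, host able to absorb 20 bursts
    print(f"{p_overload(100, 0.05, 20):.2e}")  # vanishingly small under these assumptions

That's the statistical-multiplexing bet every oversubscribed service makes; the open question is what p and k actually look like for their workloads.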
I suppose that's the point I was working towards: they must be overcommitted, so it's a funny thing to be cagey about. The exact numbers don't matter as much.
What happens when I have, say, 50x 512M instances and decide to resize them all at once to 8G? Or do you generally limit the burst-capacity to twice the base-capacity?
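Just to put rough numbers on that scenario (my figures, purely illustrative):

    # Rough arithmetic for the resize-everything-at-once scenario (illustrative only).
    instances = 50
    base_mb, resized_mb = 512, 8 * 1024

    base_total_gb = instances * base_mb / 1024        # 25 GB committed at the base size
    resized_total_gb = instances * resized_mb / 1024  # 400 GB if every instance goes to 8G
    print(base_total_gb, resized_total_gb, resized_total_gb / base_total_gb)  # 25.0 400.0 16.0

A 16x jump in committed memory is not something a single host is going to absorb, which is why I'm curious about the limits.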
How we do it would be giving away the kitchen sink.
There is no soft limit on how much you can scale. We attempt to distribute all your apps as much as possible to ensure there are enough resources for you to grow into. If there is no more capacity on the host server for your instance(s), you are able to transparently migrate to one that does. This happens within 2 seconds.
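The vendor isn't saying how they do it, but the behaviour described (resize in place if the host has room, otherwise migrate to a host that does) would look something like this sketch; every name and number below is my own guess, not their code:

    # Hypothetical placement check for a resize request: a sketch of the
    # behaviour described above, not the vendor's actual implementation.
    def place_resize(instance, new_size_mb, hosts):
        current = hosts[instance["host"]]
        extra_needed = new_size_mb - instance["size_mb"]
        if current["free_mb"] >= extra_needed:
            return ("resize_in_place", instance["host"])
        # otherwise find another host with room for the full new size and migrate there
        for name, host in hosts.items():
            if name != instance["host"] and host["free_mb"] >= new_size_mb:
                return ("migrate_then_resize", name)
        return ("defer", None)  # no host currently has room

    hosts = {"h1": {"free_mb": 1024}, "h2": {"free_mb": 16384}}
    print(place_resize({"host": "h1", "size_mb": 512}, 8192, hosts))
    # -> ('migrate_then_resize', 'h2')

The interesting part is the "defer" branch: what happens when nobody has room is exactly the overcommit question above.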
It's a neat technology and I'm very interested in using it. It's almost a perfect fit for my business, which deals with serving traffic during email spikes.
I think the questions are just around the economics to make sure you guys stick around and can continue to offer it. My initial thought was that being able to scale up and down so quickly would require you to have a lot of idle resources sitting around at any given time, both on individual hosts and across hosts.
Thanks. Fair point about whether we'll stick around: you'd expect me to say we intend to, but the proof is in the pudding. We have a lot to do and a long-term vision. As for the economics, a key point is that the resizing allows us and our users to much better match the allocation of resources to what is actually needed.
Actually... you do not need that fast a network. You can converge state incrementally rather than transfer it in one go. Xen and VMware do this. Surely the cost of a fast SAN and interconnect would quickly outpace the (imho minor) savings from smaller memory allocations.
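For anyone unfamiliar, the incremental convergence being referred to is the pre-copy approach Xen and VMware use for live migration: keep the guest running while you copy memory, re-copy only the pages it dirtied, and repeat until the leftover set is small enough to move during one brief pause. A toy model (my own simplification, not any hypervisor's code) of why that converges without an especially fast network:

    # Toy model of pre-copy live migration. Each round re-sends the pages dirtied
    # during the previous round; a smaller round takes less time, so the guest has
    # less time to dirty new pages and the set shrinks geometrically as long as
    # the network outpaces the guest's write rate. Rates are pages per second,
    # and all the numbers are made up.
    def precopy_rounds(total_pages=1_000_000, transfer_rate=100_000, dirty_rate=20_000,
                       stop_threshold=1_000, max_rounds=30):
        """Return (rounds, pages_sent_before_pause, pages_left_for_final_pause)."""
        to_send, pages_sent = total_pages, 0
        for round_no in range(1, max_rounds + 1):
            pages_sent += to_send
            round_seconds = to_send / transfer_rate  # time spent copying this round
            to_send = min(total_pages, int(dirty_rate * round_seconds))  # dirtied meanwhile
            if to_send <= stop_threshold:
                break
        return round_no, pages_sent, to_send  # the remainder moves during the brief pause

    print(precopy_rounds())  # (5, 1249600, 320) with these rates

With these assumed rates the whole thing settles in a handful of rounds, and only a few hundred pages have to move while the guest is actually paused.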
At least with AWS, the model is relatively familiar, which makes the failure modes predictable. This seems to be breaking new ground, and is likely to fail in New and Interesting ways.