
That seems like a rather strange concept.

In practice they will have to drastically undersell their hosts in order to guarantee the burst-capacity.

If you need to reserve 8G for my 256M instance because I could burst to 8G at any time - then why not just sell me the 8G instance directly?

Perhaps they have some really interesting use for this "volatile" spare-capacity (something like the EC2 spot-market?), but that seems like an awfully complex endeavor.




We have developed a unique way to increase density while still allowing you to resize your instance in under 500 ms.


That's not really answering the question, is it? Increasing density just means you get to buy fewer servers. It doesn't deal with the problem of everyone on a maxed-out server asking for an increase at the same time.


That's the difference between guaranteed capacity and most services. It's very common to use the properties of normal usage patterns and pool the excess capacity. The phone system has done that for ages. Ever try to make a call during an emergency?

Just because their service doesn't cover certain extreme situations does not mean it's worthless. It would be worth knowing how much excess capacity they are pooling, but vendors generally won't release those kinds of details.
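That pooling argument can be made concrete with a toy statistical-multiplexing calculation (an illustrative sketch with made-up numbers, not anyone's actual provisioning model): if tenants burst independently, the capacity needed to cover almost all simultaneous bursts is far less than the sum of everyone's maximum.

```python
from math import comb

# Toy model: n tenants, each independently bursting to full size with
# probability p at any instant. How many simultaneous bursts must we
# provision for to cover a given quantile of demand?
def capacity_for_quantile(n, p, quantile=0.999):
    cumulative = 0.0
    for k in range(n + 1):
        # Binomial pmf: P(exactly k tenants bursting at once)
        cumulative += comb(n, k) * p**k * (1 - p)**(n - k)
        if cumulative >= quantile:
            return k  # provisioning for k simultaneous bursts suffices
    return n

k = capacity_for_quantile(n=100, p=0.05)
print(k)  # 13 -- far below the worst case of 100
```

With 100 tenants each bursting 5% of the time, covering the 99.9th percentile takes capacity for only 13 simultaneous bursts, not 100 — which is exactly why the "extreme situation" (everyone bursting at once, like the phone network during an emergency) is the case such services can't cover.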


I suppose that's the point I was working towards: they must be overcommitted, so it's a funny thing to be cagey about. The exact numbers don't matter as much.


Can you give more details on that?

What happens when I have, say, 50x 512M instances and decide to resize them all at once to 8G? Or do you generally limit the burst-capacity to twice the base-capacity?


How we do it would be giving away the kitchen sink.

There is no soft limit on how much you can scale. We attempt to distribute all your apps as widely as possible to ensure there are enough resources for you to grow into. If there is no more capacity on the host server for your instance(s), you can transparently migrate to one that does have it. This happens within 2 seconds.


It's a neat technology and I'm very interested in using it. It's almost a perfect fit for my business, which deals with serving traffic during email spikes.

I think the questions are just around the economics to make sure you guys stick around and can continue to offer it. My initial thought was that being able to scale up and down so quickly would require you to have a lot of idle resources sitting around at any given time, both on individual hosts as well as across hosts.


Thanks. Fair point about whether we'll stick around: you'd expect me to say we intend to, but the proof is in the pudding. We have a lot to do and a long-term vision. As for the economics, a key point is that resizing lets us and our users match resource allocation much more closely to what is actually needed.


This happens within 2 seconds.

I'm sorry but statements like that freak me out a bit.

Do I get a warning when my instance is going to be migrated instead of resized, and an estimate of the downtime?

Because unless you have found a way to defy physics, you are almost certainly not migrating e.g. a 4 GB RAM / 40 GB disk instance in anywhere close to 2 seconds.


Actually..

If you had all of the instance data stored on a fast SAN, so that it was available to all of the hosts simultaneously (possible.)

And you had an ultra-high speed interconnect (40Gbit Infiniband would do) between hosts, for sharing the memory state when you migrate..

4GB of memory at 40Gbit/s would be transferred in 0.8s (assuming perfect throughput, all cows are spherical, etc.)
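The back-of-envelope number checks out (using decimal gigabytes and ignoring all protocol overhead):

```python
# Rough transfer time for 4 GB of memory state over a 40 Gbit/s link,
# assuming perfect throughput and zero protocol overhead.
ram_bytes = 4 * 10**9            # 4 GB (decimal) of memory state
link_bits_per_sec = 40 * 10**9   # 40 Gbit/s interconnect

transfer_seconds = ram_bytes * 8 / link_bits_per_sec
print(transfer_seconds)  # 0.8
```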


Actually... you do not need that fast a network. You can converge state incrementally rather than transfer it in one go. Xen and VMware do this. Surely the cost of a fast SAN and interconnect would quickly outpace the (imho minor) savings from smaller memory allocations.
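The incremental ("pre-copy") approach can be sketched roughly like this — a toy simulation with made-up page counts and dirty rates, not any hypervisor's actual code:

```python
# Toy sketch of pre-copy live migration: keep the guest running while
# copying memory, then in each round re-copy only the pages the guest
# dirtied during the previous round. When the remaining dirty set is
# small enough, pause briefly and send it in one final stop-and-copy.
def precopy_rounds(total_pages, dirty_fraction, threshold, max_rounds=30):
    to_send = total_pages
    rounds = 0
    while to_send > threshold and rounds < max_rounds:
        # While `to_send` pages are in flight, the running guest
        # re-dirties a fraction of them, so they must be sent again.
        to_send = int(to_send * dirty_fraction)
        rounds += 1
    return rounds, to_send  # final pause only transfers `to_send` pages

rounds, remaining = precopy_rounds(total_pages=1_000_000,
                                   dirty_fraction=0.1,
                                   threshold=1_000)
print(rounds, remaining)  # 3 1000
```

The point is that the visible pause is proportional to the final dirty set, not the total memory — so the downtime can be far shorter than a full-memory transfer, provided the guest dirties pages more slowly than the link can copy them.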


Yeah, you can do that too..


So instead of telling us the limitations up front, we have to sign up for your service and stress test it ourselves to find out.


And this is different to any other services, like GAE, AWS, etc, how?


One generally trusts Google and Amazon to have enough capacity more than one trusts a startup.


At least with AWS, the model is relatively familiar, which makes the failure modes predictable. This seems to be breaking new ground, and is likely to fail in New and Interesting ways.



