That's a good question. I can't point to published figures, since the 2.7 runtime is still fairly new, but based on both personal experience and fairly basic reasoning, the per-thread memory overhead is definitely much less than what the whole instance requires. The entire Python standard library, along with your framework and other libraries, is shared overhead across all the threads.
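For concreteness, here's a minimal app.yaml sketch for the Python 2.7 runtime (the application id and script path are placeholders, not anything specific to your app). With threadsafe set to true, a single instance serves concurrent requests on multiple threads, so everything loaded at module level, including the standard library and your framework, is paid for once per instance rather than once per thread:

    application: your-app-id      # placeholder id
    version: 1
    runtime: python27
    api_version: 1
    threadsafe: true              # one instance, many request threads

    handlers:
    - url: /.*
      script: main.application    # assumes a WSGI app object in main.py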
The issue with charging by CPU hour was that you could consume as many memory-seconds as you wanted without charge; that's no longer the case. By charging for instances, we're implicitly charging for the memory they use.
As for determining how many instances you run: you can control this to a large degree, both by setting budget limits and by tuning the scheduler parameters.
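As a rough sketch (the values here are only illustrations, and depending on your SDK version these settings may be exposed as sliders in the Admin Console rather than in app.yaml), scheduler parameters along these lines let you trade a little latency for a smaller instance count:

    automatic_scaling:
      max_idle_instances: 1        # keep at most one warm spare instance
      min_pending_latency: 500ms   # let a request wait this long in the
                                   # pending queue before the scheduler
                                   # starts another instance

Raising min_pending_latency and lowering max_idle_instances both bias the scheduler toward running fewer instances, and the daily budget in your billing settings caps spending outright.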