> A server idling on Function A can't be called for Function B

why not?


The server in this context is a "warmed" Function VM. The first time a VM instantiates, it goes through all of the setup required to run a function. That includes generic reusable work like Operating System setup, but also Function-specific work: JIT compilation of the Function's code, library loading, and language-runtime loading.

The Function VM is hyper-optimised to that one Function's task. There are compute-time costs in changing its role from Function A to Function B. In the public cloud, the only cost you save on is starting up the OS, if both Functions happen to use the same OS. In most public FaaS infrastructure settings it's faster to destroy a Function VM than to transfer it from Function A's configuration into Function B's configuration. Similarly, it's faster to leave the VM running for a little while, in case the Function is invoked again.

When the paper discusses VMs idling, that's the scenario they describe, where servers (or more accurately, software VMs inside physical servers) are locked into a specific function for some number of minutes. These minutes are dead computing time.

Meta, as a private cloud provider to themselves, can do some interesting things to reduce the cold-start cost of their Functions, meaning they can run a more efficient cloud if they choose, by cutting far below the idle times that are the industry standard in the public cloud.


> There are compute-time costs in changing its role from Function A to Function B.

Why do you need to change an existing VM in any way? Having an idle VM for function A should not prevent the host from instantiating new VMs for function B. I still do not see what the cost of an idle VM is.

> servers (or more accurately, software VMs inside physical servers) are locked into a specific function

I think this needs to be more specific; exactly which resources are locked to a VM? Are they pinning VMs to specific CPUs or something that prevents the host from scheduling other tasks there? If so, why?

Also, you sound very confident in your answers; do you have some additional sources you could point me to, or are you also basing all this on this one paper?


> Why do you need to change an existing VM in any way? Having an idle VM for function A should not prevent the host from instantiating new VMs for function B. I still do not see what the cost of an idle VM is.

Because the host can only run so many VMs. What's being described is host resource exhaustion from running idle VMs.


But that still doesn't explain the original statement:

> waiting time should be reduced by a factor of 10 or more, because starting a VM consumes significantly fewer resources than having a VM idle for 10 minutes

To me it's not obvious why you would ever want time-based eviction of VMs. Surely it would be more efficient to evict VMs only in response to some resource pressure? Basically, I'm imagining keeping VMs in some LRU-style structure where they get kicked out when room is needed, instead of just having a 1-minute timer (roughly the sketch below).
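To make that concrete, here's a minimal Python sketch of the policy I mean. Everything here is invented for illustration (class names, the capacity knob); it is not what the paper describes:

    from collections import OrderedDict

    class ColdStartVM:
        """Stand-in for a VM that just paid the full startup cost
        (boot, container fetch, runtime start, JIT and so on)."""
        def __init__(self, function_id):
            self.function_id = function_id
        def destroy(self):
            pass  # release the VM's resources back to the host

    class WarmPool:
        """Warm VMs kept per function, evicted least-recently-used
        only when the host runs out of room (no idle timer at all)."""
        def __init__(self, capacity):
            self.capacity = capacity   # max warm VMs this host can hold
            self.vms = OrderedDict()   # function_id -> warm VM

        def get(self, function_id):
            if function_id in self.vms:
                self.vms.move_to_end(function_id)  # mark recently used
                return self.vms[function_id]       # warm hit: skip setup
            if len(self.vms) >= self.capacity:
                _, coldest = self.vms.popitem(last=False)
                coldest.destroy()                  # evict only under pressure
            vm = ColdStartVM(function_id)          # cold start: full setup cost
            self.vms[function_id] = vm
            return vm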

But that is at the abstract level. At the concrete level, it's also not obvious what the constraining resources are that idle VMs consume, especially in a sense that would be comparable to the resource consumption of starting a new VM.

I realize that they are probably implying they have some sort of static resource allocation for VMs, meaning that an idle VM consumes as many "resources" as an active VM. But that's not stated anywhere in the paper, and it is weird to make claims (or recommendations) that hinge on such silent assumptions, especially if they are in no way universal. I feel this idea of improving utilization by reducing idleness is a somewhat central part of the paper, which is why I latched onto it.

I also realize that managing resources efficiently with VMs is a bit more involved than with traditional processes (i.e. containers), but at the same time this is where Meta, running a private cloud and practically owning the whole stack, could really do much more than a public cloud ever could. They could rely on the VMs and the host cooperating on resource allocation, and even have the language runtime (HHVM etc.) cooperate here somehow.

Indeed, now that I think about it, it sounds like an interesting question: how would you design a FaaS-optimized combined VMM, VM, and language-runtime stack? They touch on that with their JIT caches, but that seems like one fairly narrow optimization; I'd imagine there is a lot you could do here. The end game might look something like unikernels, or maybe something completely different.

(and I'm sorry if my comments have come off as combative, but this did stick out to me and I am just curious)


There's a table on the 2nd page of the paper that gives a summary of what goes on when the VM's container is initialised the first time:

INITIALIZATION PHASE:

(1) Start the VM.
(2) Fetch the container image and the function's code.
(3) Initialize the container.
(4) Start the language runtime such as Python or PHP.
(5) Load common libraries into memory.
(6) Load the function code into memory.
(7) Optionally, do JIT compilation.

INVOCATION PHASE:

(8) Invoke the function multiple times as needed.

SHUTDOWN PHASE:

(9) Stop the container if it receives no requests for X minutes (X = 10/20/10 minutes for AWS/Azure/OpenWhisk respectively).
(10) Optionally, stop the VM.

>I think this needs to be more specific; exactly which resources are locked to a VM?

Steps 1-7 are time-consuming processes that load a container into memory so that step 8 can run rapidly when invoked later. Step 8 can then be invoked an unlimited number of times on that VM; only the first Function execution on a VM pays the startup penalty.
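To put illustrative numbers on that amortisation (both timings are invented, not from the paper):

    # Amortising the one-off cost of steps 1-7 over repeat invocations.
    # Both timings are invented for illustration.
    init_ms = 1000.0   # steps 1-7: boot VM, fetch container, runtime, JIT
    invoke_ms = 5.0    # step 8: one invocation on an already-warm VM

    for n in (1, 10, 100, 1000):
        avg = init_ms / n + invoke_ms
        print(f"{n:>4} invocations -> {avg:7.1f} ms average per call")

The more invocations a warm VM serves before it's torn down, the closer the average gets to the warm-invocation time, which is the whole incentive for letting VMs sit idle.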

>Are they pinning VMs to specific CPUs or something that prevents the host from scheduling other tasks there? If so, why?

They're pinning memory to VMs in their FaaS infrastructure, which in turn is memory consumed on their physical servers. A VM uses some number of MBs of memory to host a Function's unique container, its runtime, and its executables. Meta's FaaS solution completes several trillion operations per day, which is in the tens of millions per second on average. This relatively small amount of memory per VM becomes enormous at tens-of-millions-per-second scale.
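A back-of-envelope illustration (both figures below are my assumptions, not numbers from the paper):

    # Memory pinned fleet-wide by idle-but-warm VMs.
    # Both numbers are assumptions for illustration only.
    mb_per_warm_vm = 200          # container + runtime + loaded function code
    warm_vms_fleetwide = 500_000  # hypothetical warm-pool size
    total_tb = mb_per_warm_vm * warm_vms_fleetwide / 1_000_000
    print(f"{total_tb:.0f} TB of RAM held by VMs doing no work")  # -> 100 TB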

>If so, why?

The resources are pinned in memory by the first Function initialisation so that, if there's a second request for the same Function, it can be executed on a pre-warmed VM without going through the time-consuming steps 1-7. It's a trade-off between memory consumption and time; both cost money.

There is some optimal amount of time between the final Function invocation of step 8 and the shutdown of step 9. If that amount of time is 0 seconds, then every Function request goes through steps 1-7. Whereas if it is 1 hour, then at midnight, when Meta's large once-a-day batch jobs run, most of the servers fill with idle VMs until 1am, waiting to satisfy a batch job that never comes back.
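A toy model of that trade-off (the cost numbers are invented; the paper's models are more sophisticated):

    # Keep-warm trade-off: pinned memory vs avoided cold starts.
    cold_start_cost = 100.0    # invented cost of one run of steps 1-7
    idle_cost_per_min = 10.0   # invented cost per minute of pinned memory

    # Keeping a VM warm for T minutes beats a cold start only if a repeat
    # request arrives within T: idle_cost_per_min * T < cold_start_cost.
    break_even = cold_start_cost / idle_cost_per_min
    print(f"keep-warm breaks even at {break_even:.0f} minutes")

The better you can predict whether a repeat request will arrive within the window, the shorter you can safely make it.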

Given that the infrastructure cannot know which Function execution is "the final Function invocation", it's challenging to pin this amount of time down. In a public cloud, that optimal amount of time is set in the tens of minutes. In Meta's case, as a private cloud provider to themselves, they have a unique awareness of the usage patterns of their Functions, and can therefore predict more accurately that they don't need 10-minute idle windows. They've instead been able to reduce the idle time to 1 minute. This releases containers and VMs sooner, and returns memory to the server pool.

>do you have some additional sources you could point me to, or are you also basing all this on this one paper?

Everything I've said about Meta is based on this paper, because it comprehensively describes Meta's unique circumstances, and how they approached the problem. The paper itself has dozens of additional sources.


> They're pinning memory to VMs in their FaaS infrastructure, which in turn is memory consumed on their physical servers

The paper doesn't say this anywhere. There is no reason why an idle VM could not release a significant amount of its memory, or have it paged out to disk, or some hybrid in between. Especially on Linux, virtual memory, the disk cache, and memory mappings are all tightly coupled, so there are all sorts of things that could happen. It would be naive to just assume that a VM uses some static amount of memory, especially when the paper doesn't actually indicate anything like that.

> This relatively small amount of memory per VM becomes enormous at tens-of-millions-per-second scale.

Not really. A single Firecracker VM has a memory overhead of <5 MB; even a thousand VMs on a host is still just a few gigabytes of overhead, not exactly enormous. It's very unlikely that they have anywhere near a thousand VMs on a single host.

> The resources are pinned in memory by the first Function initialisation so that, if there's a second request for the same Function, it can be executed on a pre-warmed VM without going through the time-consuming steps 1-7. It's a trade-off between memory consumption and time; both cost money.

That just explains why you want to have the VM running, not why you'd have some fixed resources pinned to it. Those are two very different things. And it especially doesn't explain why you'd stop the VM on a time basis instead of, e.g., based on memory pressure.

Honestly, this feels like talking to an AI. I'm not sure why you fill your comments with the elementary basics of FaaS and make a whole lot of unfounded assumptions, while not really giving much of any real substance.
