There are a few things at play. Functions stay warm between invocations and keep any data already loaded in the worker local. We also maintain a couple of different levels of cache so we don't hit ECR often.
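The warm-reuse point is the usual pattern of stashing expensive state in package scope so it survives between invocations on the same worker. A minimal sketch (names are illustrative, not from any AWS SDK):

```go
package main

import "fmt"

// callCount just demonstrates how many times the expensive path runs.
var callCount int

// expensiveLoad stands in for work you only want on a cold start,
// e.g. reading config or constructing an SDK client.
func expensiveLoad() string {
	callCount++
	return "loaded"
}

// cached is populated once per worker; warm invocations reuse it.
var cached string

func handler() string {
	if cached == "" {
		cached = expensiveLoad()
	}
	return cached
}

func main() {
	// Two "invocations" on the same warm worker: the expensive
	// load only happens the first time.
	handler()
	handler()
	fmt.Println("expensiveLoad ran", callCount, "time(s)")
}
```

The same idea applies to anything initialized outside the handler body: it persists for the lifetime of the worker, not just one invocation.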
I know we've got a few blog posts coming out over the next couple of weeks on this new feature, and each tells a different part of the story.
Depending on volume, you'll probably find that Lambda will be cheaper for that workload, especially with the new 1ms billing.
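To see why 1ms billing matters for short invocations, here's a back-of-envelope cost comparison. The per-GB-second rate below is an assumption (the us-east-1 x86 rate at the time of writing; check the current Lambda pricing page):

```go
package main

import "fmt"

// pricePerGBSecond is an assumed rate; verify against current pricing.
const pricePerGBSecond = 0.0000166667

// cost returns the rough compute cost of one invocation at a given
// billed duration (ms) and configured memory (GB).
func cost(billedMs, memGB float64) float64 {
	return billedMs / 1000 * memGB * pricePerGBSecond
}

func main() {
	memGB := 256.0 / 1024.0
	// A 13 ms invocation now bills 13 ms; under the old 100 ms
	// rounding the same invocation would have billed a full 100 ms.
	fmt.Printf("billed at 13 ms:  $%.10f\n", cost(13, memGB))
	fmt.Printf("billed at 100 ms: $%.10f\n", cost(100, memGB))
}
```

For a workload dominated by sub-100ms invocations, that's most of an order of magnitude off the compute line item.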
FWIW, a little experiment I just ran showed that with simple layers the cold start time of my little 3MB Go app was <100ms, while using the Docker image `amazon/aws-lambda-go:1` instead took ~1500ms.
Container image (`amazon/aws-lambda-go:1`):
- - - -
REPORT RequestId: f905d5fe-a64e-48c8-b1f2-6535640a6f82 Duration: 7.55 ms Billed Duration: 1309 ms Memory Size: 256 MB Max Memory Used: 49 MB Init Duration: 1301.10 ms
- - - -
Simple layers:
- - - -
REPORT RequestId: 89afb20d-bc49-4d89-91f0-f1ef62ac99aa Duration: 12.20 ms Billed Duration: 13 ms Memory Size: 256 MB Max Memory Used: 35 MB Init Duration: 85.37 ms
- Chris - Serverless@AWS