One of the massive benefits of Rust on Lambda is (as the author mentions) "extremely low start-up time, CPU usage and memory footprint". It's not clear to me whether deploying Rust to Lambda via a Docker image actually negates some of those benefits.
In my experience, using Lambda functions stored in ECR has no impact on performance. AWS Lambda uses Firecracker under the hood, which builds a VM from a container image. It's likely that Lambdas not deployed as ECR images are also packaged into a container image before being launched into a Firecracker VM.
Good question! I ran some benchmarks a while back because I was also curious, and besides a slightly larger standard deviation in cold-start execution time, there was no significant difference in performance when using Docker.
An alternative to this is to statically compile and use a scratch Docker image.
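Roughly like this (an untested sketch; `my-app` is a placeholder crate name, and crates with C dependencies may need a musl toolchain installed in the build stage):

```dockerfile
# Build stage: produce a fully static binary against musl
FROM rust:1 AS build
RUN rustup target add x86_64-unknown-linux-musl
WORKDIR /app
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

# Runtime stage: nothing in the image except the binary
FROM scratch
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/my-app /my-app
ENTRYPOINT ["/my-app"]
```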
Another tip: if you're writing an HTTP app, Axum/Actix have Lambda shims.
I add a CLI flag "--lambda" which enables the shim. That means I can run the same app locally, in Kubernetes, ECS, or Lambda with minimal effort. It also makes dev easier, since you can mostly pretend Lambda doesn't exist.
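A rough sketch of that pattern with axum and the lambda_http crate (untested; assumes recent axum/lambda_http versions, and the route is just a placeholder):

```rust
use axum::{routing::get, Router};

#[tokio::main]
async fn main() -> Result<(), lambda_http::Error> {
    // The same router serves every deployment target.
    let app = Router::new().route("/", get(|| async { "hello" }));

    if std::env::args().any(|arg| arg == "--lambda") {
        // Inside Lambda: the lambda_http shim translates API Gateway/ALB
        // events into plain HTTP requests for the router.
        lambda_http::run(app).await
    } else {
        // Everywhere else (local, Kubernetes, ECS): a normal HTTP server.
        let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await?;
        axum::serve(listener, app).await?;
        Ok(())
    }
}
```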
I’ve done something similar with Python-based apps a few times; there is something quite satisfying about a Docker image that can run in multiple contexts! (I’ve also distributed a Go-based app using a FROM scratch image, but that’s not quite as cool imo.)
Those are great tips! I'll have a look at the axum shim, which would certainly help if you're using axum in the first place. Minimal functions like the ones we have at work rarely need a fully featured framework, however.
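For reference, a minimal handler with the lambda_runtime crate is roughly this small (a sketch; the event shape and the "name" field are made up):

```rust
use lambda_runtime::{service_fn, Error, LambdaEvent};
use serde_json::{json, Value};

// Handle a raw JSON event and return a JSON response.
async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
    let name = event.payload["name"].as_str().unwrap_or("world");
    Ok(json!({ "message": format!("hello {name}") }))
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda_runtime::run(service_fn(handler)).await
}
```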