Fun fact: at the lowest memory setting, cold start times in Python are really bad when a lot of libraries need to be imported.
The Lambda runtime is exceedingly slow at this on the 128 MB tier. That's the case for Django apps even in their default state.
Now here's where the fun fact starts: if a function times out during a cold start, it hasn't been successfully warmed. That means on its next trigger, it will cold-start again.
Now let's say you deploy a tiny little function over WSGI. You see in your metrics that it takes an average of 200 ms. Being a smart dev, you lower its global timeout from the default of 30 seconds to 3 seconds, because you don't want to be billed more than necessary. But as it turns out, the cold start takes 10 seconds on average. Your function now never successfully completes.
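A minimal sketch of the failure mode (not a real Django deployment): module-level imports run during cold start, before the handler is invoked, so if the configured timeout is shorter than the import time, the invocation is killed and the container is never warmed.

```python
import time

_start = time.perf_counter()

# Heavy imports at module level execute during cold start, before the
# handler ever runs. `json` here is a stand-in for a framework like
# Django, whose imports can take many seconds on the 128 MB tier.
import json

IMPORT_SECONDS = time.perf_counter() - _start


def handler(event, context):
    # If the function's timeout is shorter than the cold-start import
    # time, execution is terminated before (or inside) this handler and
    # the container is discarded -- the next trigger cold-starts again.
    return {"status": "ok", "import_seconds": IMPORT_SECONDS}
```

The trap is that metrics dashboards typically show warm-invocation latency, so a 200 ms average tells you nothing about the multi-second init phase.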
Sure, but why would you do that? If you're running Django, set up your EC2 instance and run Django. If you're running Lambda, don't spin up an entire framework in it for individual requests; that basically defeats the purpose. You're not running a tiny little function, as you said. You're running an entire web framework that runs one tiny function. Instead, write an actual simple function that does the one thing it needs to do. It will take 200-300 ms to spin up and run the first time, and again on requests when it scales up, but otherwise it will run in 2 ms. Keep it really simple and stateless. If some part of that doesn't work for your use case, then don't use Lambda.
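For illustration, a "simple and stateless" handler in the sense described above looks something like this (the event shape assumes an API Gateway proxy integration; the summing logic is just a placeholder for whatever one thing the function does):

```python
import json


def handler(event, context):
    # No framework, no routing, no ORM -- parse the one input this
    # function cares about and do the single operation.
    body = json.loads(event.get("body") or "{}")
    total = sum(body.get("values", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"total": total}),
    }
```

Nothing but the standard library to import, so the cold start is dominated by the runtime itself rather than your code.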
It was not meant to run an entire framework in. If you're reaching for an entire framework to run a simple function in a Lambda, I'd bail on one or the other: don't use a framework, or don't use Lambda.
I mostly agree, but sometimes it's simply convenient to use the regular Django WSGI routines to serve parts of a Django app. The alternative, if you want to serve things behind URLs, is using API Gateway, and that is atrocious to work with. Also, whenever there's a discussion about Lambda, people talk about "lock-in", and API Gateway is a much bigger lock-in than Lambda.
Example: I have an API in Django which uses DRF, served on a classic web server instance behind load balancers, etc. Parts of that API, which share the same authentication, business logic, and so on, are much better suited to Lambda. I'm sure it's not hard to imagine :)
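To make the "regular Django WSGI routines on Lambda" idea concrete: adapter libraries (serverless-wsgi, apig-wsgi, and similar) essentially translate an API Gateway proxy event into a WSGI environ and call the app. A stripped-down, hypothetical sketch of that translation, assuming the proxy-integration event shape:

```python
import io
from urllib.parse import urlencode


def make_wsgi_handler(wsgi_app):
    """Wrap any WSGI application (e.g. Django's) as a Lambda handler.

    Simplified sketch: real adapters also handle binary bodies,
    base64 encoding, multi-value headers, etc.
    """
    def handler(event, context):
        body = (event.get("body") or "").encode()
        environ = {
            "REQUEST_METHOD": event.get("httpMethod", "GET"),
            "PATH_INFO": event.get("path", "/"),
            "QUERY_STRING": urlencode(event.get("queryStringParameters") or {}),
            "CONTENT_LENGTH": str(len(body)),
            "SERVER_NAME": "lambda",
            "SERVER_PORT": "443",
            "SERVER_PROTOCOL": "HTTP/1.1",
            "wsgi.version": (1, 0),
            "wsgi.url_scheme": "https",
            "wsgi.input": io.BytesIO(body),
            "wsgi.errors": io.StringIO(),
            "wsgi.multithread": False,
            "wsgi.multiprocess": False,
            "wsgi.run_once": False,
        }
        captured = {}

        def start_response(status, headers, exc_info=None):
            captured["status"] = status
            captured["headers"] = headers

        chunks = wsgi_app(environ, start_response)
        return {
            "statusCode": int(captured["status"].split()[0]),
            "headers": dict(captured["headers"]),
            "body": b"".join(chunks).decode(),
        }

    return handler
```

The convenience is real, but so is the earlier objection: everything Django imports at startup still lands in the cold-start path.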
Also, Lambda is just the glue code. You have to use many other AWS services that lock you in deep, even if your functionality could "theoretically" be ported to a different provider.
If there's a real need to port your infrastructure, you can do it in small steps. Using a single cloud provider also isn't smart. If I need to do huge data processing, I'll never do it on AWS, even though that's where the rest of my infrastructure is, because Google has better solutions. Just pick whatever is best.