Hi! This is super common feedback and something the team is definitely thinking about! What would you want to see it increased to? (Chris Munns from Serverless @ AWS)
This would be amazing! A lot of ML use cases are largely infeasible in Lambda with Python without serious pruning. The latest version of TensorFlow is 150 MB uncompressed; add NumPy, pandas, etc. to that and it adds up fast. I think 1 GB uncompressed would be pretty reasonable given the current state of ML tooling, personally.
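For anyone wondering whether their stack fits, something like this rough sketch works: install your dependencies into a local folder (the `./package` path is just an example) and measure the uncompressed footprint against the limit.

```python
# Rough check of how much uncompressed space a dependency set needs.
# Assumes you've done something like: pip install tensorflow numpy pandas -t ./package
import os

LIMIT_MB = 250  # current uncompressed deployment package limit

def dir_size_mb(path):
    """Walk a directory tree and sum file sizes in megabytes."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / (1024 * 1024)

size = dir_size_mb("./package")
print(f"{size:.1f} MB used of {LIMIT_MB} MB limit")
```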
As a thought, could Lambda (perhaps in cooperation with AWS SageMaker?) offer a Lambda execution environment atop the AWS Deep Learning AMI? That would solve a lot of problems for a lot of people.
Is there any plan to add more disk space, or a way to fan out jobs? We have a Lambda that does work on video files, and we have to limit how many run concurrently (3) to avoid running out of disk space right now. Edit: or the ability to attach volumes, like Azure's functions can.
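One way we've seen people fan out is to have a dispatcher invoke a per-file worker Lambda asynchronously, so each invocation only needs /tmp space for a single video. A minimal sketch, assuming a worker function named "process-video" (hypothetical):

```python
# Fan-out sketch: invoke a worker Lambda asynchronously once per video key.
import json
import boto3

lambda_client = boto3.client("lambda")

def fan_out(video_keys):
    """Kick off one async worker invocation per video."""
    for key in video_keys:
        lambda_client.invoke(
            FunctionName="process-video",   # hypothetical worker function name
            InvocationType="Event",         # async, fire-and-forget
            Payload=json.dumps({"key": key}),
        )
```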
Can't wait for this too; it seems like an old limitation that doesn't suit Layers at all. On the other hand, Lambdas should be small and fast, and if something exceeds the limits, then Fargate or ECS should be used instead.
That said, I hope they increase the limit to at least 500 MB sooner than the next re:Invent.