I can Google a CloudFormation stack for a Lambda and get an answer [0] in about as long as it takes an LLM to generate one. That said, as a programmer I'm not paid to copy from Google or Stack Overflow; I'm paid to figure out whether it's working or not. As an example, ask ChatGPT to generate a Lambda template with 16GB of memory and see what it does.
Until we see chatgpt figuring out how to fix issues like that, I'm not worried.
This CloudFormation template creates an IAM role and a Lambda function with 16GB (16384MB) of memory. The Lambda function runtime is Python 3.9, but you can change this to another runtime if necessary. The code block contains a basic Python script for returning a simple message; you can replace this with your own script.
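The function resource it produces looks roughly like this (a sketch from memory, not the verbatim output; resource names and the inline code are placeholders). Note the MemorySize:

    Resources:
      LambdaExecutionRole:
        Type: AWS::IAM::Role
        Properties:
          AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  Service: lambda.amazonaws.com
                Action: sts:AssumeRole
          ManagedPolicyArns:
            - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      MyLambdaFunction:
        Type: AWS::Lambda::Function
        Properties:
          Runtime: python3.9
          Handler: index.handler
          Role: !GetAtt LambdaExecutionRole.Arn
          MemorySize: 16384  # happily set to 16GB, which Lambda will reject
          Code:
            ZipFile: |
              def handler(event, context):
                  return {"statusCode": 200, "body": "Hello from Lambda"}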
Yep! Except Lambda's limit is 10GB [0], so this doesn't work.
If you ask ChatGPT to generate a template for a Lambda with 16 vCPUs, it will generate a template that allows a max of 16 concurrent invocations, which again is not what you asked for, and unless you _know_ what reserved concurrent executions are, you won't know that's what it's generated.
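It comes back with something like this (a sketch; ReservedConcurrentExecutions is a real CloudFormation property, but it caps concurrency rather than allocating CPU. Lambda has no vCPU setting at all; CPU scales with the memory setting, roughly one vCPU per 1,769 MB):

    MyLambdaFunction:
      Type: AWS::Lambda::Function
      Properties:
        Runtime: python3.9
        Handler: index.handler
        Role: !GetAtt LambdaExecutionRole.Arn
        ReservedConcurrentExecutions: 16  # caps concurrent invocations, not vCPUs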
The docs here say the maximum memory is 10,240 MB, so that may not work:
Memory and computing power
Memory is the principal lever available to Lambda developers for controlling the performance of a function. You can configure the amount of memory allocated to a Lambda function, between 128 MB and 10,240 MB. The Lambda console defaults new functions to the smallest setting and many developers also choose 128 MB for their functions.
Its response:
ChatGPT: I apologize for the confusion earlier. You are correct that the maximum memory allocation for a Lambda function is 10,240 MB. Here is the revised CloudFormation YAML template with 10,240 MB memory allocation:
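The template it returned was along these lines (a reconstructed sketch, not the verbatim output):

    Resources:
      MyLambdaFunction:
        Type: AWS::Lambda::Function
        Properties:
          Runtime: python3.9
          Handler: index.handler
          Timeout: 30
          MemorySize: 10240  # Lambda's actual maximum
          Role: !GetAtt LambdaExecutionRole.Arn
          Code:
            S3Bucket: <S3_BUCKET_NAME>
            S3Key: <S3_KEY>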
Replace <S3_BUCKET_NAME> and <S3_KEY> with the appropriate values for your S3 bucket and object key that contain your Lambda function code. You may also adjust Runtime, Handler, and Timeout as needed.
===
Then I said:
What can we do to be able to use 16gb memory?
Then it said:
ChatGPT: To use 16 GB memory in a serverless context, you will need to consider other options as AWS Lambda currently supports up to 10,240 MB. Two alternatives are AWS Fargate or running a container with Amazon ECS.
Here's an example of using AWS Fargate with a task definition for a container with 16 GB of memory:
2. Create a task definition with 16 GB of memory (you can adjust the values as needed). Save the JSON below to a file called fargate-task-definition.json:
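Something like this, with a placeholder family name and image (note Fargate only accepts certain CPU/memory combinations; 16 GB of memory requires at least 2048 CPU units):

    {
      "family": "my-16gb-task",
      "requiresCompatibilities": ["FARGATE"],
      "networkMode": "awsvpc",
      "cpu": "2048",
      "memory": "16384",
      "containerDefinitions": [
        {
          "name": "app",
          "image": "<YOUR_IMAGE>",
          "essential": true
        }
      ]
    }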
Replace <YOUR_SUBNET_ID> and <YOUR_SECURITY_GROUP_ID> with the appropriate values for your VPC and security group.
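Registering and launching it looks roughly like this (a sketch; the cluster name is a placeholder):

    aws ecs register-task-definition --cli-input-json file://fargate-task-definition.json
    aws ecs run-task \
      --cluster <YOUR_CLUSTER> \
      --launch-type FARGATE \
      --task-definition my-16gb-task \
      --network-configuration "awsvpcConfiguration={subnets=[<YOUR_SUBNET_ID>],securityGroups=[<YOUR_SECURITY_GROUP_ID>],assignPublicIp=ENABLED}"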
This will give you a Fargate container running with 16 GB of memory, and you can adjust the resources allocated to the container as needed.
It's a nice answer if you already know that the problem is Lambda's 10GB limit. See the other example of asking it to specify vCPUs - that's just plain wrong, and unless you know how Lambda works, it's not going to work for you.
I think this proves the point, though, that it's closer to Copilot - it can generate a bunch of boilerplate, but you, the programmer, still need to understand what it's generating in order to deploy it. The only difference is that instead of opening VS Code and typing LambdaFunction, I ask ChatGPT in English, but I still need to know what to specify. _That's_ the part we're not seeing movement on yet.
The scariest thing is that... I did use ChatGPT to set up an AWS CloudFormation stack for a Lambda function, and it gave me the right file to sync.
So... Yeah, it's not that many steps to a "text to (web|mobile) app generator" solution.
Uh... I will get to work.