
Tbh, for Clash of Clans, if it can set up the server infrastructure etc., then that will be quite scary.

The scariest thing is that... I did use ChatGPT to set up an AWS CloudFormation stack for a Lambda function, and it gave me the right file to sync.

So... Yeah, it's not that many steps to a "text to (web|mobile) app generator" solution.

Uh.. I will get to work.
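
For anyone curious what the deploy step looks like: a minimal sketch using boto3, with placeholder stack and file names (not the exact commands ChatGPT gave me):

```python
# Hypothetical deploy of a ChatGPT-generated template; names are placeholders.
# CAPABILITY_IAM is required because the template creates an IAM role.
import boto3

cfn = boto3.client("cloudformation")
with open("lambda-stack.yaml") as f:
    cfn.create_stack(
        StackName="chatgpt-lambda-demo",
        TemplateBody=f.read(),
        Capabilities=["CAPABILITY_IAM"],
    )
```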




I can Google a CloudFormation stack for a Lambda and get an answer [0] in about as long as it takes an LLM to generate one. That said, as a programmer I'm not paid to copy from Google or Stack Overflow; I'm paid to figure out whether it's working or not. As an example, ask ChatGPT to generate a Lambda template with 16GB of memory and see what it does.

Until we see ChatGPT figuring out how to fix issues like that, I'm not worried.
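
To make "figure out if it's working" concrete, here's a minimal sketch of the kind of sanity check I mean, assuming PyYAML; the 128-10,240 MB bounds are from the AWS docs, everything else (file name, structure) is illustrative:

```python
# Illustrative check, not a real tool: does a generated template keep
# MemorySize inside Lambda's documented 128-10,240 MB range?
import sys
import yaml  # pip install pyyaml


class CfnLoader(yaml.SafeLoader):
    pass


# CloudFormation short-form tags like !GetAtt aren't standard YAML;
# parse them as opaque values so the template still loads.
yaml.add_multi_constructor("!", lambda loader, suffix, node: None, Loader=CfnLoader)

LAMBDA_MIN_MB, LAMBDA_MAX_MB = 128, 10240  # per the AWS Lambda docs

with open(sys.argv[1]) as f:
    template = yaml.load(f, Loader=CfnLoader)

for name, resource in template.get("Resources", {}).items():
    if resource.get("Type") == "AWS::Lambda::Function":
        mem = resource.get("Properties", {}).get("MemorySize", 128)
        if not LAMBDA_MIN_MB <= mem <= LAMBDA_MAX_MB:
            print(f"{name}: MemorySize {mem} MB is outside "
                  f"{LAMBDA_MIN_MB}-{LAMBDA_MAX_MB} MB")
```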

[0] https://iamondemand.com/blog/how-to-deploy-your-lambda-funct...


It says this; is it OK? I can check when I get home.

ChatGPT: Below is an AWS CloudFormation YAML template for creating an AWS Lambda function with 16GB of memory:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: 'AWS CloudFormation Lambda Function Template with 16GB Memory'

Resources:
  LambdaFunctionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: LambdaExecutionPolicy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: 'arn:aws:logs:::*'

  LambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      FunctionName: MyLambdaFunction
      Runtime: python3.9
      Role: !GetAtt LambdaFunctionRole.Arn
      Handler: lambda_function.lambda_handler
      MemorySize: 16384
      Timeout: 300
      Code:
        ZipFile: |
          import json

          def lambda_handler(event, context):
              return {
                  'statusCode': 200,
                  'body': json.dumps('Hello from Lambda with 16GB memory!')
              }
```

This CloudFormation template creates an IAM role and a Lambda function with 16GB (16384MB) of memory. The Lambda function runtime is Python 3.9, but you can change this to another runtime if necessary. The code block contains a basic Python script for returning a simple message; you can replace this with your own script.


Yep! Except Lambda's limit is 10 GB [0], so this doesn't work.

If you ask ChatGPT to generate a template for a Lambda with 16 vCPUs, it will generate a template that allows a max of 16 concurrent invocations, which again is not what you asked for, and unless you _know_ what reserved concurrent executions are, you won't know that's what it has generated.
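
To illustrate the difference, a hedged sketch with a placeholder function name, assuming boto3 (the roughly one vCPU per 1,769 MB ratio is from the AWS docs):

```python
# The two knobs that actually exist; "MyFunction" is a placeholder.
import boto3

lam = boto3.client("lambda")

# More CPU comes from more memory: Lambda scales vCPUs with MemorySize
# (about one vCPU per 1,769 MB, maxing out around 6 vCPUs at 10,240 MB).
# There is no vCPU parameter to set directly.
lam.update_function_configuration(
    FunctionName="MyFunction",
    MemorySize=10240,
)

# This is what ChatGPT reaches for when asked for "16 vCPUs": it does
# not add CPU at all, it just caps the function at 16 concurrent runs.
lam.put_function_concurrency(
    FunctionName="MyFunction",
    ReservedConcurrentExecutions=16,
)
```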

[0] https://docs.aws.amazon.com/lambda/latest/operatorguide/comp....


I told it:

Here it says that the maximum memory is 10,240 MB, so that may not work:

Memory and computing power

Memory is the principal lever available to Lambda developers for controlling the performance of a function. You can configure the amount of memory allocated to a Lambda function, between 128 MB and 10,240 MB. The Lambda console defaults new functions to the smallest setting and many developers also choose 128 MB for their functions.

Its response:

ChatGPT: I apologize for the confusion earlier. You are correct that the maximum memory allocation for a Lambda function is 10,240 MB. Here is the revised CloudFormation YAML template with 10,240 MB memory allocation:

```yaml
Resources:
  LambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: LambdaWith10GBMemory
      Handler: index.handler
      Runtime: nodejs14.x
      MemorySize: 10240
      Timeout: 300
      Role: !GetAtt 'LambdaRole.Arn'
      Code:
        S3Bucket: <S3_BUCKET_NAME>
        S3Key: <S3_KEY>

  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: LambdaRoleWith10GBMemory
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: LambdaBasicExecution
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: arn:aws:logs:*:*:*
```

Replace <S3_BUCKET_NAME> and <S3_KEY> with the appropriate values for your S3 bucket and object key that contain your Lambda function code. You may also adjust Runtime, Handler, and Timeout as needed.

===

Then I said:

What can we do to be able to use 16gb memory?

Then it said:

ChatGPT: To use 16 GB memory in a serverless context, you will need to consider other options as AWS Lambda currently supports up to 10,240 MB. Two alternatives are AWS Fargate or running a container with Amazon ECS.

Here's an example of using AWS Fargate with a task definition for a container with 16 GB of memory:

1. Create an ECS cluster:

aws ecs create-cluster --cluster-name MyFargateCluster

2. Create a task definition with 16 GB of memory (you can adjust the values as needed). Save the JSON below to a file called fargate-task-definition.json:

{ "family": "FargateTaskWith16GBMemory", "taskRoleArn": "<YOUR_TASK_ROLE_ARN>", "executionRoleArn": "<YOUR_EXECUTION_ROLE_ARN>", "networkMode": "awsvpc", "containerDefinitions": [ { "name": "MyContainer", "image": "<YOUR_DOCKER_IMAGE>", "cpu": 4096, "memory": 16384, "memoryReservation": 16384, "essential": true, "portMappings": [ { "containerPort": 80, "hostPort": 80, "protocol": "tcp" } ] } ], "requiresCompatibilities": [ "FARGATE" ], "cpu": "4096", "memory": "16384" }

Replace <YOUR_TASK_ROLE_ARN>, <YOUR_EXECUTION_ROLE_ARN> and <YOUR_DOCKER_IMAGE> with appropriate values for your use case.

3. Register the task definition:

aws ecs register-task-definition --cli-input-json file://fargate-task-definition.json

4. Launch the Fargate task:

aws ecs run-task --cluster MyFargateCluster --launch-type FARGATE --task-definition FargateTaskWith16GBMemory --network-configuration "awsvpcConfiguration={subnets=[<YOUR_SUBNET_ID>],securityGroups=[<YOUR_SECURITY_GROUP_ID>]}"

Replace <YOUR_SUBNET_ID> and <YOUR_SECURITY_GROUP_ID> with the appropriate values for your VPC and security group. This will give you a Fargate container running with 16 GB of memory, and you can adjust the resources allocated to the container as needed.

====

I think that's quite a nice answer.


It's a nice answer if you already know that the problem is Lambda's 10 GB limit. See the other example of asking it to specify vCPUs - that's just plain wrong, and unless you know how Lambda works, it's not going to work for you.

I think this proves the point, though: it's closer to Copilot - it can generate a bunch of boilerplate, but you the programmer still need to understand what it's generating in order to deploy it. The only difference is that instead of opening VS Code and typing LambdaFunction, I ask ChatGPT in English, but I still need to know what to specify. _That's_ the part we're not seeing movement on yet.



