
Asking the same question to ChatGPT

To connect a Lambda function to an RDS (Relational Database Service) instance and query it safely, you can follow these general steps:

1. Configure your Lambda function's execution role: Ensure that your Lambda function has the necessary permissions to access the RDS instance. The role should include the `AWSLambdaVPCAccessExecutionRole` and the appropriate RDS permissions.

2. Place your Lambda function and RDS instance in the same VPC: Ensure that both the Lambda function and the RDS instance are in the same Virtual Private Cloud (VPC) to enable their communication. You can configure the VPC settings in the AWS Management Console.

3. Configure the security group settings: Set up the security groups for your Lambda function and RDS instance to allow the necessary inbound and outbound connections. The Lambda function's security group should allow outbound access to the RDS instance's security group on the RDS port (usually port 3306 for MySQL or 5432 for PostgreSQL).

4. Create a subnet group: If your RDS instance is in a private subnet, create a subnet group that includes the appropriate subnets for your Lambda function to access the RDS instance securely.

5. Connect to the RDS instance from the Lambda function: In your Lambda function code, establish a database connection using the appropriate database client library, such as pymysql for MySQL or psycopg2 for PostgreSQL. Use the connection details (endpoint, username, password, etc.) of your RDS instance.

6. Execute queries safely: To query the RDS instance safely, ensure that your Lambda function's code incorporates secure coding practices. Use parameterized queries or prepared statements to prevent SQL injection attacks. Avoid storing sensitive information (such as database credentials) in your Lambda function code and instead use environment variables or AWS Secrets Manager for secure storage and retrieval.

Remember to regularly update and patch your Lambda function, RDS instance, and associated dependencies to maintain security and stay protected against potential vulnerabilities.

It's worth noting that the exact implementation details may vary depending on your specific use case, programming language, and AWS setup. Consulting the AWS documentation and resources related to Lambda, RDS, and VPC networking can provide more detailed and up-to-date instructions for your specific requirements.
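Steps 5 and 6 can be sketched in a few lines. The snippet below uses Python's built-in sqlite3 module as a stand-in for pymysql or psycopg2 so it runs anywhere; all three follow the same DB-API `execute(sql, params)` shape (pymysql and psycopg2 use `%s` placeholders instead of `?`). The table and email values are made up for illustration; in a real Lambda the host, user, and password would come from environment variables or Secrets Manager, never from the code.

```python
import sqlite3

# sqlite3 stands in for pymysql/psycopg2 here; the parameterized-query
# pattern is the same. Connection details are hard-coded only because
# this is an in-memory demo -- real credentials belong in env vars or
# Secrets Manager.

def find_user(conn, email):
    # Parameterized query: the driver escapes `email` itself,
    # so an injection attempt is treated as a literal string.
    cur = conn.execute("SELECT id, email FROM users WHERE email = ?", (email,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

print(find_user(conn, "alice@example.com"))  # (1, 'alice@example.com')
print(find_user(conn, "x' OR '1'='1"))       # None -- injection attempt fails
```

The key point is that the user-supplied value never gets concatenated into the SQL string.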




This all looks about right, but how would somebody without aws experience be able to validate it without making a mess?


This to me is why I don't use ChatGPT. Every time I use it I get answers of questionable validity. I'd say 3/4 of my questions have returned at least partially incorrect answers; maybe 10% are wildly incorrect, with the remainder partially incorrect.

So if it is wrong with MOST of the questions that I am able to validate myself, then how can I trust it on the questions that I am unable to validate myself?

The whole reason I started even doing this passive research is because I had an employee late last year who used to be a decent employee (he was never an all-star, but he got the job done to an average/satisfactory level), and started to all of a sudden perform incredibly poorly.

He was submitting code and solving problems in ways that were just really bad. He was always just an average producer, always teetering on the edge during performance reviews, doing just well enough that we kept him around. But he quickly started to make mistake after mistake. In several code reviews I found really strange artifacts and comments in his code that were blatant mistakes. I confronted him about them over the course of several performance reviews, and he blamed Stack Overflow "copy pasta". Eventually it got to the point that almost everything he submitted or produced was problematic in some way, and he was burning more of my time than he was saving, so I ended up firing him.

While firing him, in front of HR, he finally broke down and admitted that he had been using ChatGPT for everything, and he begged us to let him stay, promising he would stop using it altogether. I of course didn't care at that point and we let him go. But I started to realize the increase in mistakes was all due to ChatGPT leading him astray.

That whole experience really taught me that ChatGPT is not ready for primetime. If you blindly trust ChatGPT, you will find yourself in the wrong place most of the time. The problem is that unless you already know the answer to the question you are asking, it is very difficult to tell where ChatGPT's answers might be correct and where they might be incorrect (because it is usually a mixture of both). This makes it entirely useless for questions that you are not comfortable validating.


Presumably you're a good manager, but the way you told it makes you sound like a bad manager who didn't dig into what was going on with a poorly performing employee and let him flail around until you had to fire him. There's more to that story, and it's not really about ChatGPT at all.

There are other stories out there, like https://hyperbo.la/w/chatgpt-4000/ which shows it can be useful and a force multiplier when used well. But it's like giving a faster car to a bad driver: it'll just result in them crashing faster. If you've got a programmer who doesn't want to program, ChatGPT can't make them a better programmer, since they don't actually want to be one!


Why would he be so reluctant to confess to using ChatGPT? And why would he keep using it despite constant feedback that his performance was getting so much worse?


It's like gambling for some people... No self control.


> So if it is wrong with MOST of the questions that I am able to validate myself, then how can I trust it on the questions that I am unable to validate myself?

If you’re using it to generate code, you can validate it yourself: run the code.


ChatGPT doesn't need to provide perfect AWS solutions for people without AWS experience; the onus is on the human to do the work necessary on their side to be sure of the solution it produces.

It should be thought of as a time saving tool for experts (and people willing to put the work in), not a magic button for lazy people.


One miss I see is this

> and the appropriate RDS permissions.

The role doesn’t need any IAM RDS permissions. It needs network access. But then all of the access to the database is controlled by your standard database permissions - not IAM.

I guess there are corner cases, like using IAM database authentication to connect, which can be done but is not typical.


It still isn't safe, though. This will work for testing, maybe, but each Lambda will open its own connection, and that will exhaust DB resources really fast. You're supposed to have a pooling proxy between Lambdas and RDS.


I asked ChatGPT the same question again and then followed up with:

“Is there a more efficient way to connect to the database”

It then went off the rails. It gave me generic answers about using connection pooling that your language framework provides.

Of course that doesn’t help with each Lambda being invoked separately.

I then asked “But each Lambda runs based on its own vm”

It then correctly said that while each Lambda invocation would have its own pool and couldn’t be shared, it could be shared with multiple connections during one invocation.

Which is technically correct. But not the answer I was looking for.

In all fairness, it’s the same type of answers I would expect from a junior to mid developer with limited experience with AWS. I would hire a person who could answer that well.


Any time you ask ChatGPT a technical question on something you're not familiar with it should be three parts:

- What are the considerations?

- Implement it with the considerations

- Did we forget anything?

It understands the need for a proxy from step 1:

https://chat.openai.com/share/7ca37130-a771-457c-8742-a5f941...


That technique didn’t work when I asked it to create a Python script to return all IAM roles with a certain set of policies attached.

It still missed using a paginator to handle the list_roles call returning more than 50 roles.

Once I pointed it out, it did add pagination support.
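For reference, the IAM `ListRoles` API paginates with `IsTruncated`/`Marker` fields, and boto3 wraps that in `get_paginator("list_roles")`. The loop below shows the manual version; a fake client mimicking the response shape is used so it runs without AWS credentials (the page size of 50 and role names are made up for the demo).

```python
def iter_roles(client):
    """Yield every role, following IAM's IsTruncated/Marker pagination.
    In real code, boto3's client.get_paginator("list_roles") does this."""
    kwargs = {}
    while True:
        page = client.list_roles(**kwargs)
        yield from page["Roles"]
        if not page.get("IsTruncated"):
            return
        kwargs["Marker"] = page["Marker"]  # resume where the last page ended

class FakeIAM:
    """Returns 120 fake roles in pages of 50, mimicking the API shape."""
    roles = [{"RoleName": f"role-{i}"} for i in range(120)]

    def list_roles(self, Marker="0"):
        start = int(Marker)
        page = self.roles[start:start + 50]
        truncated = start + 50 < len(self.roles)
        out = {"Roles": page, "IsTruncated": truncated}
        if truncated:
            out["Marker"] = str(start + 50)
        return out

print(len(list(iter_roles(FakeIAM()))))  # 120 -- all three pages collected
```

Without the `Marker` follow-up, only the first page of roles would ever be seen, which is exactly the bug described above.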


I try not to cheat and hint at specific issues (since that relies on prior knowledge).

I'd be surprised if even after the last prompt it wouldn't notice that. Saying "Did we miss anything?" leaves it open to re-evaluate both the implementation and the original considerations.

Edit: There's some non-determinism involved, but GPT-4 caught the pagination from planning stage here: https://chat.openai.com/share/3c356d4f-15d4-4f6e-bd29-af6a0b...


Funny enough, even in my original question, it put pagination as a consideration. But it still didn’t include it.


I do short quote-posts on my blog and wrote this one up.


Yes! Excellent approach, capturing this.


And it's in the docs: RDS Proxy launched in 2020, before the GPT training cut-off.

https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-rd...


The cut-off date shouldn't matter for RAG.


Depends; if every Lambda creates a single connection, you can get to pretty high numbers before it becomes an issue.


Keep going down the rabbit hole…

“I’m a beginner. Walk me through step $n”


Same as pre-GPT: Labs. You still have to practice implementation (for now).





