Hacker News

Yes, it can do both of those things.

I've done a bunch of stuff with ChatGPT where I've pasted in code and asked it to make changes. The main constraint there is the token limit - I believe 8,000 for ChatGPT - so I'm very much looking forward to experimenting with the increased 32,000-token limit when that becomes available.
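A rough way to sanity-check whether a pasted file will fit is a character-count heuristic (a crude sketch - English prose and code average very roughly 4 characters per token, but the real tokenizer varies by content; the function names and the 4.0 ratio here are illustrative assumptions, not anything official):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token-count estimate; real BPE tokenizers vary with content."""
    return int(len(text) / chars_per_token)


def fits_in_context(text: str, limit: int = 8000, reserve: int = 1000) -> bool:
    """Check against a context limit, leaving `reserve` tokens of headroom
    for the prompt wrapper and the model's reply."""
    return estimate_tokens(text) + reserve <= limit
```

For an accurate count you'd use the model's actual tokenizer, but a heuristic like this is enough to decide whether to paste a whole file or split it up.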

I write a lot of tests for my code using Copilot - sometimes by pasting in the code I want to test and then starting to type the tests; it autocompletes them really effectively.

I wrote a bit about that here: https://til.simonwillison.net/gpt3/writing-test-with-copilot
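As a sketch of that workflow (the `slugify` function and the test names here are invented for illustration - the point is that once the code under test is visible in the file and you type `def test_`, Copilot tends to autocomplete bodies like these):

```python
import re


# The code you want to test, pasted in as context:
def slugify(text: str) -> str:
    """Lowercase, collapse runs of non-alphanumerics to single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")


# Tests of the kind Copilot autocompletes once you start typing "def test_":
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_punctuation():
    assert slugify("Hello, World!") == "hello-world"


def test_slugify_strips_edges():
    assert slugify("  --Hello--  ") == "hello"
```

You still read the generated assertions to make sure they encode the behavior you actually want - which is the review step discussed below.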




1. So, current and immediate-future limits prevent this beyond relatively small projects?

2. But you still have to write tests, though? If I need to think through and check what the generated tests are doing, I'd rather write them myself, to be honest. I want the time-consuming parts automated, not the typing of code. Maybe I'm in the minority. Surely generating 99%-complete unit tests should be a sweet spot for ChatGPT? I would imagine this AI can ignore the halting problem and somehow traverse all possible states anyway?


Absolutely. Using LLM-generated code without reviewing it is an even worse idea than accepting a PR from a brand-new apprentice-level programmer at your company without reviewing it.



