That's cool and all, but can it do "in this existing code, please amend this feature"? Also useful: "please cover this code with meaningful unit tests"?
I've done a bunch of stuff with ChatGPT where I've pasted in code and asked it to make changes. The main limit there is the token limit - 8,000 I think for ChatGPT - so I'm very much looking forward to experimenting with the increased 32,000 limit when that becomes available.
I write tests for my code using Copilot a bunch - sometimes by pasting in the code I want to test and starting to type tests - it autocompletes them really effectively.
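To give a flavour of the flow (the function and file names here are invented, not from a real project): paste the code under test into a test file, type the first line of a describe block, and Copilot tends to suggest the rest.

    // slugify.ts - toy function under test (hypothetical)
    export function slugify(title: string): string {
      return title.toLowerCase().trim().replace(/[^a-z0-9]+/g, '-');
    }

    // slugify.test.ts - after typing `describe('slugify'...`, Copilot
    // autocompletes test cases along these lines:
    import { slugify } from './slugify';

    describe('slugify', () => {
      it('lowercases and hyphenates spaces', () => {
        expect(slugify('Hello World')).toBe('hello-world');
      });

      it('collapses runs of punctuation into one hyphen', () => {
        expect(slugify('a -- b')).toBe('a-b');
      });
    });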
1. So, current and immediate future limits prevent this beyond relatively small projects?
2. But like, you still have to write the tests though? If I need to think through and check what the generated tests are doing, I'd rather write them myself, to be honest. I want the time-consuming parts automated, not the typing of code. Maybe I'm in the minority. Surely generating 99%-complete unit tests should be a sweet spot for ChatGPT? I would imagine this AI can ignore the halting problem and somehow traverse all possible states anyway?
Absolutely. Using LLM code without reviewing it is an even worse idea than accepting a PR from a brand new apprentice-level programmer at your company without reviewing it.
For example, you cannot say "go to VSCode and add this feature." But if we point it at the piece of code and ask it to amend it and write tests, it does really well. (We want engineers to have control, haha.)
Probably. GPT-3.5 was really good at writing unit tests. I asked it to write unit tests for some TypeScript code using Jest and aws-sdk-mock. It did them exactly as I would have. I really couldn't fault it.
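Roughly the shape of what it came back with - the function under test and all the names here are invented for illustration, not the actual code I gave it:

    // getUser.ts - hypothetical function under test
    import AWS from 'aws-sdk';

    export async function getUser(id: string) {
      const client = new AWS.DynamoDB.DocumentClient();
      const result = await client
        .get({ TableName: 'Users', Key: { id } })
        .promise();
      return result.Item;
    }

    // getUser.test.ts
    import AWSMock from 'aws-sdk-mock';
    import AWS from 'aws-sdk';
    import { getUser } from './getUser';

    describe('getUser', () => {
      afterEach(() => AWSMock.restore('DynamoDB.DocumentClient'));

      it('returns the item from DynamoDB', async () => {
        AWSMock.setSDKInstance(AWS);
        // Stub DocumentClient.get so no real AWS call is made
        AWSMock.mock('DynamoDB.DocumentClient', 'get', (params: any, callback: Function) => {
          expect(params.TableName).toBe('Users');
          callback(null, { Item: { id: '42', name: 'Ada' } });
        });

        await expect(getUser('42')).resolves.toEqual({ id: '42', name: 'Ada' });
      });
    });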
You feed it design specifications as input, the same way that a human SDET would do it. The challenge, as always, is to have well-written specification docs.
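A toy example of the mapping (the spec wording, function, and names are all made up): a spec line like "usernames must be 3-20 characters, alphanumeric only" translates almost mechanically into one test per clause.

    // validation.ts - hypothetical implementation of the spec rule
    export function isValidUsername(name: string): boolean {
      return /^[a-zA-Z0-9]{3,20}$/.test(name);
    }

    // validation.test.ts - one test case per clause of the spec
    import { isValidUsername } from './validation';

    describe('isValidUsername', () => {
      it('accepts a 3-character alphanumeric name', () => {
        expect(isValidUsername('abc')).toBe(true);
      });

      it('rejects names shorter than 3 characters', () => {
        expect(isValidUsername('ab')).toBe(false);
      });

      it('rejects non-alphanumeric characters', () => {
        expect(isValidUsername('ab_c')).toBe(false);
      });
    });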