It's very cool, but from an auditing perspective, it's a nightmare. As a reviewer, I can't reason about the code the way I could reason about human-written code, since there is no coherent account of how the task was accomplished. I can't say "why did it apply CORS to the entire flask app?" and expect reasoning that will fulfill my objective as a reviewer.
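For readers unfamiliar with the pattern being questioned, here's a minimal sketch (assuming Flask; route names are illustrative) of what "CORS applied to the entire app" looks like, done here with a plain `after_request` hook. flask_cors's `CORS(app)` has a similar app-wide effect:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_cors_headers(response):
    # App-wide: EVERY response gets the permissive header,
    # including routes that were never meant to be cross-origin.
    # This blanket scope is exactly what a reviewer would question.
    response.headers["Access-Control-Allow-Origin"] = "*"
    return response

@app.route("/api/public")
def public_endpoint():
    return {"ok": True}

@app.route("/internal/admin")
def admin_endpoint():
    # Also picks up the CORS header, which is probably not intended.
    return {"admin": True}
```

The narrower alternative is to scope the header to the specific views that need it (e.g. flask_cors's per-view `@cross_origin()` decorator) -- the kind of design decision a human author could explain and an opaque generator can't.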
So while it could help blast out large swaths of code quickly, it still needs an expert at the wheel who can be accountable to reviewers for the changes.
> I can't say "why did it apply CORS to the entire flask app?" and expect reasoning that will fulfill my objective as a reviewer.
I'm not saying you're wrong, but can't you just ask the AI to include a comment explaining why it chose to apply CORS to the entire app? You can just keep asking it questions and maybe its reasoning would check out for most of them.
> just keep asking it questions and maybe its reasoning would check out for most of them.
But the AI isn't reasoning... is it? Perhaps it could give an explanation, but you couldn't (currently) equate that with any actual understanding of why it did what it did?
It's not reasoning like a human, but if it's reusing code it memorized from the past, it might be able to string together comments it memorized from the past into something relevant to the context it's being asked about.