I'd expect it both to invent vulnerabilities of its own and to reproduce existing ones from the input set. AI provides no guarantees at all about correctness.
Makes me think of Dijkstra being horrified to learn that the US was sending astronauts into space on unproven computer code [0]. Eventually we'll have programmers trusting AI-generated code as much as we trust compiler-generated code. Sometimes I think this is just the next step in the evolution of programming (binary to assembly to compiled to interpreted to generated)... Other times I think about how we've grown used to buggier and buggier code, and yet we press on.