This is only going to get worse with Large Language Models. Imagine a somewhat knowledgeable individual who could craft emails, messages, and even commits with a handful of prompts, all of them tailored closely to the project.
Maybe one day it will happen, but right now an LLM-generated persona would likely set off every alarm bell for a lot of people. LLMs have a very recognizable style, and it usually falls right into the uncanny valley.
The "recognizable style" that people usually refer to is the default persona that most are exposed. However, the style can be changed very drastically with some fairly simple prompting.
I don't think this is going to be a big issue. Attacks like these have to be high-profile efforts. If you look at the xz backdoor, there was some top-notch engineering behind it.
If LLMs ever reach the level of being able to do that, we won't need open source contributors anymore. We'll just tell the LLM to program an operating system and it will do it.