> But if someone uses this to do 90% of the work and then just edits it to make it personal and sound like themselves, then it's just a great time-saving tool.
This is still way too optimistic. Reading through something that's "almost right", spotting the errors when you already basically know what it's meant to say, and fixing them is genuinely hard. People won't do it well, so even in this scenario we often end up with something much worse than if it had just been written directly.
There is a lot of evidence for this: the generally low quality of lightly edited speech-to-text material, how hard it is to stare at a pile of code and find all the bugs without any extra computer-generated hints, how hard it is to edit text for readability without serious restructuring.
Just train another AI model to do it then! I'm not joking -- Stable Diffusion generates some pretty grotesque, low-quality faces, but there are add-on models that can detect and dramatically improve the faces as part of the processing pipeline.
It doesn't seem like a stretch to build similar mini-models that patch known deficiencies of larger general models in the text domain.
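For the curious, here's roughly what that image pipeline looks like in code. A minimal sketch, assuming the diffusers and gfpgan packages and a CUDA GPU; the checkpoint names and parameters are illustrative, not a recommendation:

```python
# Sketch of the "mini-model patches the big model's known weakness" pattern:
# Stable Diffusion draws the image, then a dedicated face-restoration model
# (GFPGAN here) finds the faces and cleans them up. Checkpoint names and
# parameters are illustrative.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline
from gfpgan import GFPGANer

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe("portrait photo of a person").images[0]  # PIL image; faces often mangled

restorer = GFPGANer(model_path="GFPGANv1.4.pth", upscale=1)
bgr = np.array(image)[:, :, ::-1].copy()  # PIL RGB -> OpenCV-style BGR
_, _, restored_bgr = restorer.enhance(
    bgr, has_aligned=False, only_center_face=False, paste_back=True
)
restored_rgb = restored_bgr[:, :, ::-1]   # back to RGB for saving/display
```

The appeal is the composition: the general model stays general, and a cheap specialist handles its one characteristic failure mode.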
Gmail's autocomplete already works great for this, and it will only get better over time. The key is to keep a human in the loop deciding whether to accept or edit on a phrase-by-phrase or sentence-by-sentence basis.
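Not how Gmail actually does it, of course, but a toy sketch of what sentence-by-sentence accept/edit looks like as a loop; suggest() is a hypothetical stand-in that replays canned sentences so the example runs on its own:

```python
# Toy sketch of sentence-by-sentence human-in-the-loop composition.
# suggest() stands in for a real model; here it replays canned sentences
# so the sketch is self-contained and runnable.
CANNED = iter([
    "Thanks for reaching out about the schedule.",
    "I should have a draft to you by Friday.",
])

def suggest(context: str) -> str:
    """Return the model's proposed next sentence, or '' when done."""
    return next(CANNED, "")

def compose(prompt: str) -> str:
    accepted = []
    while True:
        draft = suggest(prompt + " " + " ".join(accepted))
        if not draft:
            break
        choice = input(f"Suggested: {draft!r}  [a]ccept / [e]dit / [s]top: ").strip().lower()
        if choice == "a":
            accepted.append(draft)                     # take the suggestion verbatim
        elif choice == "e":
            accepted.append(input("Your version: "))   # human overrides this sentence
        else:
            break                                      # human decides when to stop
    return " ".join(accepted)

if __name__ == "__main__":
    print(compose("Reply to Sam about the project timeline."))
```

The human makes a small, local decision at each step instead of proofreading a finished wall of text, which is exactly where the "lightly edit the 90% draft" workflow falls down.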