>Imagine your photorealistic inversion AI putting a mole or a wrinkle on somebody's face without any foundation in the actual hash, just because it fits the training data better.
Seeing as the AI was trained on 99999999999999 images of 9999 people, if the image in question is of one of those people, it's entirely conceivable that the AI will implicitly ID the person and attach their corresponding mole. Put another way: it's possible that a good portion of PhotoDNA's database is in the AI's training set, so in principle there are cases where the AI does know.
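To make that claim concrete: if the inversion model has effectively memorized (hash, image) pairs from its training set, then its behavior on a hash it has already seen amounts to a nearest-neighbor lookup. A minimal sketch of the idea, where every name is hypothetical and the L1 distance metric is an assumption, not PhotoDNA's actual comparison function:

```python
import numpy as np

def invert_by_memorization(query_hash: np.ndarray,
                           train_hashes: np.ndarray,
                           train_images: list):
    """Toy model of an inversion AI that has memorized its training data.

    query_hash:   (144,) uint8 hash to "invert"
    train_hashes: (N, 144) uint8 hashes seen during training
    train_images: list of N images aligned with train_hashes
    """
    # Find the memorized hash closest to the query and return its image.
    dists = np.abs(train_hashes.astype(int) - query_hash.astype(int)).sum(axis=1)
    return train_images[int(dists.argmin())]
```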
A PhotoDNA hash is only 144 bytes, and those bytes describe the whole picture. That is definitely not enough data to identify a face reliably.
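A back-of-the-envelope calculation makes the point. The 10% face-coverage figure below is an illustrative assumption, not a property of PhotoDNA:

```python
# How much of a 144-byte hash could even describe a face?
hash_bits = 144 * 8         # 1152 bits for the entire image
face_fraction = 0.10        # assume the face covers ~10% of the frame
print(hash_bits * face_fraction)  # ~115 bits for the whole face region:
                                  # nowhere near enough for moles or wrinkles
```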
The proposed AI does not identify people and it will not report that it "found" the person in the training data. It does not know. And it won't tell you.
Assume twins: one is in the training data, one isn't. The one in the training data has a scar; the other does not. We "invert" a picture of the twin who has no scar and is not in the training set. By your own argument, the resulting image will show the twin from the data set, complete with a highly detailed picture of the scar. And somehow that is supposed to be a good thing.
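In terms of the lookup sketch above, the twin scenario is exactly this failure mode. Toy data and hypothetical names throughout:

```python
import numpy as np

def nearest_memorized(query, hashes, images):
    # Same nearest-neighbor lookup as the sketch above.
    return images[int(np.abs(hashes.astype(int) - query.astype(int)).sum(axis=1).argmin())]

rng = np.random.default_rng(0)
twin_a = rng.integers(0, 256, 144, dtype=np.uint8)  # memorized twin, has a scar
twin_b = twin_a.copy()
twin_b[:4] ^= 1                                     # near-identical hash, no scar, not memorized

print(nearest_memorized(twin_b, twin_a[None, :], ["twin A, scar rendered in detail"]))
# -> "twin A, scar rendered in detail": a confident output of the wrong person
```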
You are attributing more to this AI than it conceivably can do, even going so far as to excuse it inserting false or unfounded data.
It is tremendously important to make this clear: most, if not all, current AI technology is not fit for forensic analysis beyond guiding humans in their own analysis.