Maybe information needs to be understood relationally, as in "information for a subject x". So if we have an image with a license plate that is unreadable, and there's an algorithm that makes it readable to x, then there is an information gain for x, even though the information may have been in the image all along.
If the license plate was not readable, then the additional "information" is fabricated data. By definition, you do not know more about the image than you knew before.
Replacing pixels with plausible data does not mean a gain of information.
If anything, I'd argue that a loss of information occurs: the fact that "x" was barely readable or outright unreadable before is lost, and no later decision can factor this in, because "x" is now sharply defined and no longer fuzzy.
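To put the same point formally (this is just my framing): if X is the true plate, Y the blurry image, and Z = g(Y) the "enhanced" image computed from Y alone, then X -> Y -> Z is a Markov chain and the data processing inequality applies:

    I(X; Z) <= I(X; Y)

No post-processing of Y by itself can tell you more about X than Y already does; the enhancer can only make a guess look sharper.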
Would you accept a system that "enhances" images to find the license plate numbers of cars and fine their owners?
If the plate number is unreadable, the only acceptable option is not to use it.
Inserting a plausible number and rolling with it even means that instead of a range of suspects, a single culprit is presumed.
Would you like to find yourself in court for crimes/offenses you never committed because some black box decided it was a great idea to pretend it knew it was you?
Edit: I think I misunderstood the premise. Nonetheless, my comment shall stay.
Sure, but what if the upscaling algorithm misinterpreted a P as an F? Without manual supervision/tagging, there's an inherent risk that this information will have an adverse effect on future models.
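As a toy illustration of why that misread can happen at all (my own sketch; the 5x7 bitmaps and block averaging are crude stand-ins for real glyphs and real optics): two clearly different letters can collapse to nearly the same low-resolution crop, so the upscaler has to fill in the distinguishing pixels from its prior rather than from the data.

    # Toy sketch: 'P' and 'F' as 5x7 bitmaps, downsampled by 3x3 block averaging
    # (a crude stand-in for a distant, low-resolution camera).
    import numpy as np

    P = np.array([
        [1,1,1,1,0],
        [1,0,0,0,1],
        [1,0,0,0,1],
        [1,1,1,1,0],
        [1,0,0,0,0],
        [1,0,0,0,0],
        [1,0,0,0,0],
    ], dtype=float)

    F = np.array([
        [1,1,1,1,1],
        [1,0,0,0,0],
        [1,0,0,0,0],
        [1,1,1,1,0],
        [1,0,0,0,0],
        [1,0,0,0,0],
        [1,0,0,0,0],
    ], dtype=float)

    def downsample(img, k=3):
        # Zero-pad to a multiple of k, then average each k x k block of pixels.
        h, w = img.shape
        img = np.pad(img, ((0, (-h) % k), (0, (-w) % k)))
        H, W = img.shape
        return img.reshape(H // k, k, W // k, k).mean(axis=(1, 3))

    lo_P, lo_F = downsample(P), downsample(F)
    print("high-res difference:", np.abs(P - F).max())        # 1.0 -> clearly distinct letters
    print("low-res difference: ", np.abs(lo_P - lo_F).max())  # ~0.11 -> easily lost in sensor noise

An upscaler trained on sharp plates will happily turn that ambiguous crop back into a crisp P or a crisp F, and whichever it picks then looks like ground truth to anyone, or any model, consuming the output downstream.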