Could you explicitly train a set of embeddings that performs that step in the process? For example, when computing the loss, you would compare against the normalized text rather than the original. Or alternatively, do this as a fine-tuning pass on an existing model. Then you would have embeddings optimized for the characteristics you care about.
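A minimal sketch of the idea, using a toy hash-based trigram embedding and a hypothetical `normalize` function as stand-ins for a real encoder and normalizer: each original text is paired with its own normalized form as the positive target, and the other normalized texts in the batch act as in-batch negatives in a contrastive loss. In real training you would backpropagate this loss into the encoder; here it is only computed, to show where the normalized text enters.

```python
import numpy as np

def embed(text, dim=64):
    # Toy bag-of-character-trigrams embedding (stand-in for a trainable encoder).
    v = np.zeros(dim)
    for i in range(len(text) - 2):
        v[hash(text[i:i + 3]) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def normalize(text):
    # Hypothetical normalizer: lowercase and collapse whitespace.
    return " ".join(text.lower().split())

def contrastive_loss(originals, temperature=0.1):
    """In-batch contrastive loss: each original's embedding should be most
    similar to the embedding of its OWN normalized form (diagonal), with the
    other normalized texts in the batch serving as negatives."""
    A = np.stack([embed(t) for t in originals])             # raw texts
    B = np.stack([embed(normalize(t)) for t in originals])  # normalized targets
    sims = A @ B.T / temperature
    # Softmax cross-entropy with the diagonal as the correct pairing.
    logits = sims - sims.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx = np.arange(len(originals))
    return -np.mean(np.log(probs[idx, idx]))

batch = ["Hello   WORLD", "Vector search demo", "Fine-tune embeddings"]
print(contrastive_loss(batch))
```

The same pairing scheme maps directly onto standard fine-tuning setups (e.g. a multiple-negatives ranking loss), with (original, normalized) pairs as the training data.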