This is amazing. I can immediately see it being used by the Stable Diffusion and other generative image communities. It gives life to those lifeless faces, and it doesn't look outstandingly odd. Not to my eyes at least.
It will allow for more realistic emotions in current SD model merges and fine-tunes by generating frames correctly labelled with their associated emotions.
Most SD1.x/SDXL model outputs depict humans with the same expression, so the frames generated by LivePortrait will help with training datasets.
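A minimal sketch of what that dataset-building step could look like, assuming a hypothetical layout where LivePortrait frames are exported with the emotion in the filename (e.g. "happy_001.png") and the trainer reads sidecar .txt captions, a convention many SD fine-tuning tools follow:

```python
from pathlib import Path

# Assumed layout: frames named "<emotion>_<index>.png", e.g. "happy_001.png".
# Writes one .txt caption per frame so a fine-tuning tool can pick up the
# emotion label alongside the image.
def write_emotion_captions(frames_dir, subject="a photo of a person"):
    frames = sorted(Path(frames_dir).glob("*.png"))
    for frame in frames:
        emotion = frame.stem.split("_")[0]  # "happy_001" -> "happy"
        caption = f"{subject}, {emotion} expression"
        frame.with_suffix(".txt").write_text(caption)
    return len(frames)
```

The filename scheme and caption template here are illustrative, not anything LivePortrait itself produces.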
I believe the Pixar animators on Toy Story 1 used a facial expression/emotion database called FACS (the Facial Action Coding System) to make the characters more humanly relatable.
It's not clear if the "expressions" will generalise to new faces.
Edit: it's definitely being used already https://www.reddit.com/r/StableDiffusion/comments/1dvepjx/li...