It should allow for more realistic emotions in current SD model merges and fine-tunes by generating frames correctly labelled with their associated emotions.
Most SD1.x/SDXL models depict humans with the same expression, so frames generated by LivePortrait would help diversify training datasets.
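As a rough sketch of what that labelling could look like (the folder layout, paths, and the sidecar-caption convention here are assumptions for illustration, not part of LivePortrait itself — kohya-style trainers commonly read a `.txt` caption per image):

```python
# Minimal sketch: pair LivePortrait output frames with emotion captions
# for an SD fine-tuning dataset. Assumes frames were exported into
# per-emotion folders (e.g. liveportrait_out/happy/0001.png) by driving
# each source face with a clip of a known emotion -- that layout is an
# assumption, not something LivePortrait produces by default.
from pathlib import Path

FRAMES_ROOT = Path("liveportrait_out")   # hypothetical output directory
DATASET_ROOT = Path("train_data")        # dataset consumed by the trainer

DATASET_ROOT.mkdir(exist_ok=True)
for emotion_dir in sorted(FRAMES_ROOT.iterdir()):
    if not emotion_dir.is_dir():
        continue
    emotion = emotion_dir.name           # folder name doubles as the label
    for i, frame in enumerate(sorted(emotion_dir.glob("*.png"))):
        stem = f"{emotion}_{i:05d}"
        (DATASET_ROOT / f"{stem}.png").write_bytes(frame.read_bytes())
        # sidecar caption: same basename, .txt extension
        (DATASET_ROOT / f"{stem}.txt").write_text(
            f"photo of a person, {emotion} expression"
        )
```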
I believe the Pixar animators on Toy Story used a facial-expression/emotion reference called FACS (the Facial Action Coding System) to make the characters more humanly relatable.
It's not clear whether the "expressions" will generalise to new faces, though.