> Unity’s Demo productions drive advanced use of the Unity real-time 3D platform through autonomous creative projects, led by Creative Director Veselin Efremov.
I was going to say that the overlaps between one lock of hair and another, between hair and background or hair and skin, and the edges of lips and teeth looked a little poorly keyed, as if the objects had high-resolution textures but the surface map/motion rig was low-poly... which would be annoying but probably easy enough to ignore in an indie film.
But live? On a single (hard-to-get) consumer GPU? That's seriously impressive. It makes me wonder how much of this is hand-tuned rigging and how much is physics-based; if you tried to shake hands with this digital human using a game controller or VR rig, how would that look?
Wait till you see Unreal's Matrix demo running live on a PS5 (even with interactive parts) instead of just some hand-crafted marketing material from Unity on YouTube yet again.
Yes, and did you actually take a look at that? I made the point in another comment: what they provide is either cherry-picked assets, doesn't even work properly until you do quite a lot of plumbing, or shows how much is actually "hand-crafted" rather than built from engine-specific features. For example, you import a captured model and animation that is basically just keyframed vertices, while they claim how cool and procedural their animation toolchain is... That's not how this works. Unreal does this kind of demo-specific customization too, but on a much smaller scale. Unity demos barely show actual engine features; they are just made to impress the general layperson.
I remember that for a long while everything was "Toy Story quality in real time": the PS2, the PS3, etc. It never really was.
But at some point, we definitely passed it. The room is nifty, but it's mostly been done before. The person, though, is pretty good. The lip sync is a bit off somehow... I think the motions are perhaps just too exaggerated. But from a still frame, I couldn't tell you that isn't a real person, in real clothes.
I also continue to find it amusing that we can build a person like that and sell it as commercial tech, but we still have to record people talking. (Though TTS has taken an interesting turn lately, after years of not much.)
> I also continue to find it amusing that we can build a person like that and sell it as commercial tech, but we still have to record people talking. (Though TTS has taken an interesting turn lately, after years of not much.)
I suspect part of the problem is that dialog in games is still largely "static"; if the writing is pre-canned, then it does not make that much sense to develop advanced TTS to act it out. The situation will become interesting if we manage to produce a sufficiently dynamic dialog system where it is no longer feasible to use pre-recorded voice acting.
Tangentially related: why can't movie studios use a Snapchat-like filter to replace the mouth movements in foreign-language films with the mouth movements of the voiceover actors? I feel like that technology definitely exists. There are so many great foreign movies and shows, but the voiceover plus unmatched mouth movement can be so distracting. Is it just too expensive?
I wouldn't want to watch something like that; I want to watch the original film. Then again, I also never watch dubbed movies or TV; I prefer reading subtitles.
Does this mean this was rendered in real-time?