
Keyframe is only half creepy. The left eye was the one filled in by the algorithm; the right eye is filled in similarly a few seconds later. For human faces it seems to substitute very generic replacements of features (taking no more cue from the surrounding photo than: eye-shaped thing goes here, chin-shaped thing goes here, brown hair goes here, etc.). The video is definitely worth watching. It seems to take less cue from the surroundings for human faces than for inanimate scenes, though that may just be because we are so sensitive to peculiarities in images of the face.

Edit: it's somewhere between a surrounding texture fill and a semantic / context-based reconstruction. Texture fill would produce blank skin where an eye should be; an ideal reconstruction would take into account appropriate wrinkles, symmetry, and expected bone structure. It works better for still lifes / scenes than for faces.
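
For concreteness, here is a minimal sketch of what the plain texture-fill end of that spectrum does, using OpenCV's diffusion-based inpainting. The file name and mask coordinates are made up for illustration; the point is that this approach only smears surrounding skin into the hole and will never hallucinate an eye, which is what a semantic reconstruction adds on top.

    # Classical texture fill vs. the learned, semantic approach discussed above.
    # File name and mask coordinates are illustrative assumptions.
    import cv2
    import numpy as np

    img = cv2.imread("face.jpg")                    # image with a region to fill
    mask = np.zeros(img.shape[:2], dtype=np.uint8)  # non-zero pixels mark the hole
    cv2.circle(mask, (220, 180), 30, 255, -1)       # e.g. blot out the left eye

    # Texture fill: propagates neighbouring texture inward, yielding blank skin.
    texture_fill = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    cv2.imwrite("texture_fill.jpg", texture_fill)

    # A semantic reconstruction would instead feed (img, mask) to a trained
    # generative model and synthesise a plausible eye consistent with the
    # visible one, its wrinkles, and the face's symmetry.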



