
The only required piece of information to render the scene is the position of the eye relative to the phone, which can be represented as x and y coordinates within the rectangular frame captured by the front-facing camera. The accelerometer and gyroscope shouldn’t be needed, except perhaps to smooth out movement of the phone in the user’s hand if the camera’s frame rate is too low.
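
As a rough sketch of that claim (Swift, with illustrative names; the eye’s pixel position is assumed to come from whatever face or eye tracker is in use), the renderer only needs the detected position normalized against the camera frame, plus optional smoothing:

    import CoreGraphics

    // Hypothetical sketch: map the eye's pixel position in the front-camera
    // frame to an offset in [-1, 1] with (0, 0) at the frame centre. The
    // fake-perspective effect can be driven from this alone.
    func normalizedEyeOffset(eyePixel: CGPoint, frameSize: CGSize) -> CGPoint {
        let x = (eyePixel.x / frameSize.width) * 2 - 1
        let y = (eyePixel.y / frameSize.height) * 2 - 1
        // The front camera is mirrored, so flip x to match screen space.
        return CGPoint(x: -x, y: y)
    }

    // The smoothing role the comment gives to the IMU: a simple exponential
    // filter over successive samples would also work.
    func smoothed(_ previous: CGPoint, _ current: CGPoint, alpha: CGFloat = 0.2) -> CGPoint {
        CGPoint(x: previous.x + alpha * (current.x - previous.x),
                y: previous.y + alpha * (current.y - previous.y))
    }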

My guess is that the only reason this demo relies on the iPhone X is that Apple provides a face-tracking API, so there’s no need to write any code or use external libraries to do eye tracking with a normal front-facing camera.
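
For reference, the ARKit route is roughly this small. A minimal sketch, assuming a device that supports ARFaceTrackingConfiguration; session lifecycle, camera permissions and error handling are omitted:

    import ARKit

    final class HeadTracker: NSObject, ARSessionDelegate {
        let session = ARSession()
        // Latest head position in ARKit world coordinates (metres).
        private(set) var headPosition: SIMD3<Float>?

        func start() {
            guard ARFaceTrackingConfiguration.isSupported else { return }
            session.delegate = self
            session.run(ARFaceTrackingConfiguration())
        }

        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first,
                  face.isTracked else { return }
            // Column 3 of the anchor's transform holds its translation.
            let t = face.transform.columns.3
            headPosition = SIMD3<Float>(t.x, t.y, t.z)
        }
    }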




To really get a nice 3D effect you also need the distance of the eyes from the phone. Based on the video in the article, this demo actually uses this depth information. You can't get this depth info easily from an ordinary single camera, so I can certainly see how the iPhone X's TrueDepth hardware really helps here.
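
For context on how that distance enters the rendering: if the screen is treated as a window, the eye's lateral offset and its distance together define an asymmetric ("off-axis") viewing frustum, so depth changes the projection itself, not just its scale. A sketch under assumed conventions (screen-centred coordinates in metres, eye on the +z side of the screen), not the demo's actual code:

    import simd

    func offAxisProjection(eye: SIMD3<Float>,
                           screenHalfWidth: Float,
                           screenHalfHeight: Float,
                           near: Float = 0.01,
                           far: Float = 100) -> simd_float4x4 {
        let d = max(eye.z, near)  // eye-to-screen distance
        // Frustum bounds on the near plane, shifted by the eye's x/y offset.
        let l = (-screenHalfWidth  - eye.x) * near / d
        let r = ( screenHalfWidth  - eye.x) * near / d
        let b = (-screenHalfHeight - eye.y) * near / d
        let t = ( screenHalfHeight - eye.y) * near / d
        // Standard glFrustum-style asymmetric projection (column-major).
        return simd_float4x4(columns: (
            SIMD4<Float>(2 * near / (r - l), 0, 0, 0),
            SIMD4<Float>(0, 2 * near / (t - b), 0, 0),
            SIMD4<Float>((r + l) / (r - l), (t + b) / (t - b),
                         -(far + near) / (far - near), -1),
            SIMD4<Float>(0, 0, -2 * far * near / (far - near), 0)
        ))
    }

The view transform also has to be translated by the eye position, and if only x and y are tracked you'd have to assume a fixed value for eye.z.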


I suspect the size of the phone screen isn’t large enough for distance information to affect the rendered output significantly.
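
A back-of-the-envelope check of that suspicion, with assumed numbers (eye 5 cm off-centre, content 5 cm "behind" the screen plane):

    // For an eye at lateral offset x and distance D from the screen, a point
    // d behind the screen plane lands at a lateral screen offset of
    // x * d / (D + d) (similar triangles).
    func parallaxShift(eyeOffset: Double, eyeDistance: Double, pointDepth: Double) -> Double {
        eyeOffset * pointDepth / (eyeDistance + pointDepth)
    }

    let shiftAt30cm = parallaxShift(eyeOffset: 0.05, eyeDistance: 0.30, pointDepth: 0.05) // ≈ 7.1 mm
    let shiftAt40cm = parallaxShift(eyeOffset: 0.05, eyeDistance: 0.40, pointDepth: 0.05) // ≈ 5.6 mm
    // With these numbers, moving the head 10 cm further back changes the
    // on-screen shift by only about 1.6 mm.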




