
Apple does not provide eye tracking data. In fact, you can’t even register triggers for eye position information; you have to set a HoverEffectComponent on your entities and the OS highlights them for you.
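For those unfamiliar with the RealityKit side, the flow looks roughly like this: you give an entity a collision shape and an input target so the system can hit-test it, then attach a HoverEffectComponent; the OS renders the highlight itself, and your code never sees gaze coordinates, only the eventual pinch. A minimal sketch (my own code, not Apple’s sample):

    import RealityKit

    // A gaze-highlightable sphere. visionOS draws the hover effect when
    // the user looks at it; the app never receives gaze coordinates.
    func makeGazeTarget() -> ModelEntity {
        let sphere = ModelEntity(
            mesh: .generateSphere(radius: 0.1),
            materials: [SimpleMaterial(color: .blue, isMetallic: false)]
        )
        // Both components are required for the entity to be hit-testable
        // against gaze/pinch input.
        sphere.components.set(InputTargetComponent())
        sphere.components.set(CollisionComponent(
            shapes: [.generateSphere(radius: 0.1)]
        ))
        // Opt in to the system-rendered highlight. The app is only notified
        // when the user confirms with a pinch (e.g. via a SpatialTapGesture).
        sphere.components.set(HoverEffectComponent())
        return sphere
    }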

Video passthrough is likewise only available to “enterprise” developers, so all you can get back is the position of images or objects you’re interested in when they come into view.
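Concretely, non-enterprise apps go through ARKit data providers, which hand back anchor transforms rather than camera frames. A rough sketch using image tracking (the reference-image group name “TargetImages” is made up, and the loading call is from memory, so double-check it against the docs):

    import ARKit

    // Track known 2D images; the app receives only their poses, never the
    // passthrough video itself (that requires enterprise entitlements).
    func trackImages() async throws {
        let session = ARKitSession()
        let provider = ImageTrackingProvider(
            referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "TargetImages")
        )
        try await session.run([provider])

        for await update in provider.anchorUpdates {
            // All you get back: a 4x4 transform locating the image in space.
            print(update.anchor.originFromAnchorTransform)
        }
    }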

Even the Apple employee who helped me with setup advised me not to turn my head, but to keep it still and use the glance-and-tap paradigm for interacting with the virtual keyboard. I don’t think this was for security so much as for keeping fatigue to a minimum during prolonged use. But it still has the side effect of making your keystrokes harder to infer than if you were to, say, pull the virtual keyboard toward you and type on it directly.

EDIT: The edit is correct. The virtual avatar (Persona) is part of visionOS (it appears as a front-facing camera to legacy VoIP apps), and as such it has privileged access to data collected by the device. Apparently until visionOS 1.3 the eye tracking data drove the avatar’s gaze directly; I assume Apple has now either obfuscated it or blocks it during password entry. Presumably this also affects the spatial avatars in shared experiences.

Interestingly, I think the front (EyeSight) display blanks out your eyes while you’re entering a password (I noticed this in front of a mirror), presumably to keep the same attack from working via the front display’s eye passthrough.
