
It feels like Apple has solved this problem with the Vision Pro? So just wait for a couple more months?



Except Apple has stated they will encrypt that information and not supply it to apps, to avoid targeting or fingerprinting. And second, how would that help someone with ALS?


> And second how would that help someone with ALS?

They won't share information on where you look, but they will share info on where you 'click', which is used to navigate apps. IIRC you are supposed to use your fingers to do this and other actions, but I imagine that Apple's accessibility team will have an alternate mode for people with motor limitations. It could be a long blink, or rapid blinking, for example.


Even the large headset is a no-go for ALS sufferers.


People use stuff like this in VRChat

VR Eye-Tracking Add-on Droolon F1 for Cosmos(Basic Version) https://a.co/d/asAAZwT

You can use it with any headset because the data it provides is independent of the headset. I'm not sure how accurate that one is in particular.

Eye-tracking data is not particularly complex, nor super sensitive in nature. Once the sensor is calibrated it just sends over two vectors.
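To make "two vectors" concrete, here is a minimal sketch of what a per-frame payload could look like: one unit gaze direction per eye, plus a timestamp. All the names here are illustrative assumptions, not any particular device's protocol.

```python
import math
from dataclasses import dataclass

@dataclass
class GazeFrame:
    left: tuple[float, float, float]   # unit gaze direction, left eye
    right: tuple[float, float, float]  # unit gaze direction, right eye
    timestamp_ms: int

frame = GazeFrame(left=(0.05, -0.02, 0.998),
                  right=(0.03, -0.02, 0.999),
                  timestamp_ms=1234)

# A combined (cyclopean) gaze: average the two directions and renormalise.
cx, cy, cz = (sum(pair) / 2 for pair in zip(frame.left, frame.right))
norm = math.sqrt(cx * cx + cy * cy + cz * cz)
gaze = (cx / norm, cy / norm, cz / norm)
print(gaze)
```

An app only needs to intersect that combined direction with the UI plane to know where the user is looking.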


The context here is ALS; someone with only the ability to move their eyes may not have a couple of months free to wait.


Also donning the AVP headset might not be possible for someone with advanced ALS. A fixed outside-in apparatus (probably attached to the patient’s wheelchair) would make more sense.


I think that makes sense but a startup is going to have a hard time delivering something better between now and January if they start from scratch.

But maybe I’m wrong, it might be a purely software based problem that can be solved quicker.


> I think that makes sense but a startup is going to have a hard time delivering something better between now and January if they start from scratch.

I agree entirely!


Not really.

There is a bunch of hardware out there to get gaze vectors; the problem is that it relies on a generalised model of the eye. With per-user fine-tuning you can go from >5 degrees of error to <1-2.
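One simple way that fine-tuning is often done: treat the generic model's output as raw gaze angles and fit a small per-user correction (here a 2D affine map) against a handful of known calibration targets. This is a hedged sketch with synthetic numbers, not any vendor's actual calibration routine.

```python
import numpy as np

def fit_affine(raw, target):
    """Least-squares fit of target ~= raw @ A + b.

    raw, target: (N, 2) arrays of (yaw, pitch) angles in degrees.
    Returns the (3, 2) matrix stacking A on top of b.
    """
    ones = np.ones((raw.shape[0], 1))
    X = np.hstack([raw, ones])                     # (N, 3) design matrix
    M, *_ = np.linalg.lstsq(X, target, rcond=None)
    return M

def apply_affine(M, raw):
    ones = np.ones((raw.shape[0], 1))
    return np.hstack([raw, ones]) @ M

# Synthetic example: pretend the per-user distortion is a small rotation
# plus an offset, and recover it from a 9-point calibration grid.
target = np.array([(x, y) for x in (-15, 0, 15) for y in (-10, 0, 10)],
                  dtype=float)
theta = np.deg2rad(4)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
raw = target @ R.T + np.array([3.0, -2.0])         # several deg of bias

M = fit_affine(raw, target)
err_before = np.abs(raw - target).mean()
err_after = np.abs(apply_affine(M, raw) - target).mean()
print(f"mean error: {err_before:.2f} deg -> {err_after:.2f} deg")
```

Because the synthetic distortion really is affine, the fit removes it almost entirely; real eyes need a few more terms, but the idea is the same.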



