Hacker News

The video display is probably the simplest part of the Virtual Reality experience. While it's directly responsible for a large part of the experience, it's not actually what needs to be standardized.

Everything else, from the various ways to perform head and motion tracking, to the myriad of input devices and controllers, varies wildly. Coming up with a standard cross-section of these features that most platforms support seems like an excellent idea to me.
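If "cross-section" means the set of features every platform supports, one way to picture it is as an intersection over per-platform capability descriptors. A toy sketch (the `VRCapabilities` shape and its field names are made up for illustration, not taken from any spec):

```typescript
// Hypothetical capability descriptor -- not from any real VR spec.
interface VRCapabilities {
  hasPositionalTracking: boolean;
  hasOrientation: boolean;
  maxControllers: number;
}

// Intersect what several platforms report to find the common baseline:
// a feature survives only if every platform has it.
function crossSection(platforms: VRCapabilities[]): VRCapabilities {
  return platforms.reduce((acc, p) => ({
    hasPositionalTracking: acc.hasPositionalTracking && p.hasPositionalTracking,
    hasOrientation: acc.hasOrientation && p.hasOrientation,
    maxControllers: Math.min(acc.maxControllers, p.maxControllers),
  }));
}
```

The catch, as the rest of this thread points out, is that the interesting differences (controller shape, tracking volume, hand tracking) don't reduce cleanly to booleans.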




> a standard cross-section

I don't envy the people doing this. In terms of controllers:

* There's Leap Motion Orion. Plausibly, there is some future tech involving body tracking.

* Valve doesn't know what the final form of the controller is[1]; the current controllers are merely one of the better iterations.

* What we don't know.

There seems to be a need for multiple cross-sections. With the current state of flux in the field, I'm expecting controller support to be highly vendor-extended.

[1]: https://youtu.be/kMpQWSqQFK0?t=1m9s


Maybe audio (binaural), haptic feedback, and eye tracking (foveated rendering) could be included as well.


Are head and motion tracking not just inputs to determining the eye position/orientation? Are they things the developer should ever have to interact with directly?

For controllers you will need the position/orientation information as well - but that's not much for an API to do.

Again, I feel like I'm missing something.
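For what it's worth, that reduction is pretty mechanical: head tracking produces a pose (position plus orientation quaternion), and the view transform is just that pose's inverse. A toy sketch of the math (helper names are made up; no particular SDK is assumed):

```typescript
type Vec3 = [number, number, number];
type Quat = [number, number, number, number]; // x, y, z, w (unit quaternion)

// Rotate v by the conjugate (i.e. inverse) of unit quaternion q,
// using v' = v + 2w(u x v) + 2u x (u x v), where u = (x, y, z).
function rotateByInverse(q: Quat, v: Vec3): Vec3 {
  const [x, y, z, w] = [-q[0], -q[1], -q[2], q[3]]; // conjugate
  const ux = y * v[2] - z * v[1]; // u x v
  const uy = z * v[0] - x * v[2];
  const uz = x * v[1] - y * v[0];
  return [
    v[0] + 2 * (w * ux + y * uz - z * uy),
    v[1] + 2 * (w * uy + z * ux - x * uz),
    v[2] + 2 * (w * uz + x * uy - y * ux),
  ];
}

// Transform a world-space point into eye (view) space, given the
// tracked head pose: subtract the head position, then un-rotate.
function worldToEye(headPos: Vec3, headOrient: Quat, p: Vec3): Vec3 {
  const rel: Vec3 = [p[0] - headPos[0], p[1] - headPos[1], p[2] - headPos[2]];
  return rotateByInverse(headOrient, rel);
}
```

So with the head at (0, 1.6, 0) and identity orientation, a point two meters in front lands at (0, 0, -2) in eye space - which is about all an API needs to hand the developer for both the head and each controller.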




