Those 12 ms affect the latency, not the framerate. The thing will definitely not render at just 60 Hz, as that's too low for VR; the standard is usually 90 Hz or 120 Hz.



If you divide 1 second by 60, you get roughly 16 ms. So to hit 60 Hz, you need to complete all processing and draw the frame within 16 ms. For 120 Hz, like you're claiming, all processing needs to be completed in half that time, or about 8 ms. And yet Apple says the R1 takes 12 ms to process input from the sensors? You can draw your own conclusions.
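
For reference, the frame-budget arithmetic as a quick Python sketch (just the division from the comment above, nothing Apple-specific):

    for hz in (60, 90, 120):
        print(f"{hz} Hz -> {1000.0 / hz:.1f} ms per frame")
    # 60 Hz -> 16.7 ms per frame
    # 90 Hz -> 11.1 ms per frame
    # 120 Hz -> 8.3 ms per frame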


You forget that the processing doesn't have to finish within the same frame. Latency is not throughput.

Not even the most expensive high-end gaming setups can finish the entire input-to-screen processing within just one frame, and yet they can easily render some games at 500Hz or more.


Nothing about the end-to-end latency of the R1 tells you anything about how pipelined it might be. It could very well have multiple frames in flight at the same time, in different stages of processing.
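
To make the pipelining point concrete, here's a toy Python sketch; the stage names, durations, and refresh rate are made up for illustration, not Apple's actual pipeline. Each frame spends three frame periods in flight end-to-end, yet a new frame still completes every period:

    FRAME_PERIOD_MS = 1000.0 / 90          # ~11.1 ms per frame at 90 Hz
    STAGES = ["sensor fusion", "render", "scan-out"]   # hypothetical stage names

    def schedule(num_frames):
        # Each stage takes one frame period; stages of different frames overlap.
        for frame in range(num_frames):
            for i, stage in enumerate(STAGES):
                start = (frame + i) * FRAME_PERIOD_MS
                print(f"frame {frame}: {stage:>13} {start:5.1f}-{start + FRAME_PERIOD_MS:5.1f} ms")

    schedule(3)
    # End-to-end latency per frame is ~33 ms (three stages), yet a finished
    # frame leaves the pipeline every ~11.1 ms, so throughput is still 90 Hz.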


To provide a comfortable experience, the frame pipeline can't be very deep. The older the frame state is relative to the wearer's current proprioception, the more likely they are to experience discomfort or outright motion sickness.

That's why I assume the R1 tries to provide the render pipeline with "current" positional state, and the renderer then finishes drawing in the remaining ~4 ms (at 60 fps). That way the display only lags the wearer's perception by about 16 ms, which is less likely to cause discomfort.
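
The rough budget implied above, as back-of-the-envelope Python (hedged: only the 12 ms figure comes from Apple, the rest is this comment's inference):

    frame_budget_ms = 1000.0 / 60        # ~16.7 ms per frame at 60 fps
    r1_pose_latency_ms = 12.0            # Apple's quoted sensor-processing latency
    render_budget_ms = frame_budget_ms - r1_pose_latency_ms   # ~4.7 ms left to draw
    print(f"time left to render: {render_budget_ms:.1f} ms")
    print(f"display lags the wearer by ~{frame_budget_ms:.1f} ms")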

This could be mitigated further if the objects in the VR scene are tagged with motion vectors. If the R1 state update doesn't land in time, the renderer can extrapolate the "current" position by applying those motion vectors to objects in the scene.
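
A minimal sketch of that fallback, assuming each scene object carries a velocity-style motion vector (all names here are hypothetical, made up for illustration, not a real visionOS or R1 API):

    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        position: tuple   # (x, y, z) in metres
        velocity: tuple   # metres per second, the object's "motion vector"

    def extrapolate(objects, dt_s):
        # Advance each object's last known position by dt_s seconds when a
        # fresh positional update misses the frame deadline.
        return [
            SceneObject(
                position=tuple(p + v * dt_s for p, v in zip(o.position, o.velocity)),
                velocity=o.velocity,
            )
            for o in objects
        ]

    # Example: the latest state is 12 ms stale, so predict 0.012 s ahead.
    stale = [SceneObject((0.0, 1.5, -2.0), (0.5, 0.0, 0.0))]
    print(extrapolate(stale, 0.012))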



