
> The trip from video encoding on the PC, over the WiFi, and display in the headset averages about 3ms for me, using a 5ghz access point that’s several years old.

Wow, that's pretty impressive, especially considering the 4K main screen plus the additional virtual screens in use. I wonder how you get the latency that low; 3ms means no buffering at all, during either encoding or decoding. The best I've achieved so far is a little under 60ms, with the additional restriction that playback has to happen in the browser.




Yeah, that's impressive. A little too impressive.

Does OP have any documentation on how this latency can be achieved and measured?


Yes I do!

1. WiFi direct, or as direct as you can make it - which means the computer is either the AP itself, or wired directly to it in order to keep the hops short.

2. Measurement can be done via a companion ping that looks at frame timing, but that's kind of a secondary measure. The best way to really check it is with high-speed video capture of a high-(temporal-)resolution stopwatch shown simultaneously on a hardware display and in the mirrored content in the headset (there's a rough sketch of the stopwatch side at the end of this comment). I can also watch a YouTube video streamed from the desktop to the headset and listen to it on headphones wired straight to the laptop with no discernible offset (and this is with years of audio engineering experience) - the error bars on that subjective test are much larger, but it demonstrates that the latency is at least usable.

3. Hardware acceleration via Nvidia NVENC. Rather than use Immersed's own virtual screen tech, I usually rely on HDMI dummy plugs ($12 for a 3-pack) to maximize GPU throughput (and not cannibalize CPU cycles).

4. Not ALL content is going to move at this speed even in the best of circumstances - my screens have a low overall rate of change, which means the deltas the NVENC encoder streams are tiny: not much data to grab, not much data to send.

I can absolutely get worse rates than this if I push a ton of video content, use it for gaming, etc. I don't think everyone would see this level of performance for every use case, but it's what I've been able to achieve and maintain.
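
For anyone who wants to reproduce the stopwatch measurement, here is a rough sketch of the on-screen millisecond counter you would film - not part of Immersed, just plain Python with the standard-library tkinter:

    import time
    import tkinter as tk

    # On-screen millisecond stopwatch: film this window and its mirrored copy
    # in the headset with a high-speed camera, then read the difference
    # between the two displayed values. Effective resolution is bounded by
    # the refresh rate of whichever display updates more slowly.
    root = tk.Tk()
    root.title("latency stopwatch")
    label = tk.Label(root, font=("Courier", 96), text="0.000")
    label.pack(padx=40, pady=40)
    start = time.perf_counter()

    def tick():
        label.config(text=f"{time.perf_counter() - start:8.3f}")
        root.after(1, tick)  # reschedule roughly every millisecond

    tick()
    root.mainloop()

Averaging the offset over several captured frames gives a steadier number than trusting any single reading.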


Thanks for the insights!

> The best way to really check it is with high-speed video capture of a high-(temporal-)resolution stopwatch shown simultaneously on a hardware display and in the mirrored content in the headset.

Agreed, it's a little cumbersome, but afterwards you can be pretty confident in your result. Have you measured 3ms like that? Getting below 3ms with small deltas using NVENC sounds possible, but considering that even at a 120Hz refresh rate there are more than 8ms between consecutive frames (1000ms / 120 ≈ 8.3ms), 3ms does sound suspiciously low.

One last question: have you done any experiments with the video codec? Due to compatibility requirements I'm pretty much limited to H.264, but I was wondering if there is something better for super-low-latency encoding.
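
For context on the H.264 side: the usual low-latency knobs I'm aware of are a fast preset plus x264's zerolatency tune, which drops B-frames and lookahead buffering. A rough sketch driving ffmpeg from Python (assumes ffmpeg with libx264 is installed; an NVENC build would swap in h264_nvenc to keep the work on the GPU):

    import subprocess

    # Encode a synthetic test pattern with low-latency H.264 settings;
    # the same flags apply to a screen-capture input.
    subprocess.run([
        "ffmpeg",
        "-f", "lavfi", "-i", "testsrc=size=1920x1080:rate=60",  # synthetic source
        "-c:v", "libx264",
        "-preset", "ultrafast",   # cheapest per-frame encode
        "-tune", "zerolatency",   # no B-frames, no lookahead buffering
        "-t", "5",                # stop after 5 seconds
        "out.mp4",
    ], check=True)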


Thank you for a good response, which takes into consideration several of my concerns.

My advice would be to include these points in your article and demonstrate them (perhaps with a video), as latency is IMHO correctly perceived as the killer of several concepts, such as remote game-streaming services and host-to-goggles interactivity. This is especially important since you're using a cheapish consumer device as your primary display.

Do include the circumstances that contradict your 3ms claim. You already do this elsewhere in the article (the gray old computers example), and it will bolster your overarching argument here as well.


I had major issues getting the latency down to something reasonable, so I didn't end up trying Immersed for more than 10 minutes.


How long ago was that? 18 months ago it was too slow for most use, and it's made massive improvements since then.



