> I can't remotely read the word without moving my eyes over to it

Plus the portion of your eye that sees in high resolution is ridiculously small—1% of your field of view. And your eye and brain just hide that fact by making your eye constantly jitter to see a larger area.




But, they aren't doing fovea tracking and shifting the display around in response to eye movements, are they? If not, you would only get more screen space if you articulate your neck and cause the head-tracking to update your viewing perspective.

I've monitored how I use 4K screens in the 28" size range. I would not normally move my head at all to look at different parts of the screen. I would only move my eyes. With two such screens side-by-side, I would turn my head just a little bit to focus on one screen or the other. Or, I found I would often hold one head position that is neutral between the two, and then use approximately 2/3 of each screen, with the far outer edges neglected or used to banish less relevant communications and status apps that I only check once in a while.

And, these screens are not filling my field of view by any means. So, I'd really need a far higher resolution headset if it is going to give me a good immersive field of view and be a reasonable monitor-replacement. I fear that fovea tracking will remain a sci-fi dream in my lifetime, so the reality is we need to render full resolution throughout the field of view to be prepared for where the eye gaze might go in the next frame.


> I fear that fovea tracking will remain a sci-fi dream in my lifetime, so the reality is we need to render full resolution throughout the field of view to be prepared for where the eye gaze might go in the next frame.

This is not at all true. The AVP, the Quest Pro, and the PSVR2 all do eye-tracking-based foveated rendering. They lower the clarity of the things you're not looking at. And reviewers say it works perfectly, like magic. They are unable to "catch" the screen adjusting for what they're looking at.


Hmm, interesting...

Are they actually doing some kind of optical shifting of a limited pixel display? Or do you just mean they do some kind of low-res rendering mode to populate most of the framebuffer outside a smaller zone where they do full-quality rendering?

In other words, are they just allocating rendering compute capacity or actual pixel storage and emission based on foveal tracking?


> Or do you just mean they do some kind of low-res rendering mode to populate most of the framebuffer outside a smaller zone where they do full-quality rendering?

This exactly. They don't reduce the resolution too much, but it's visible to an outside observer watching on a monitor.
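For concreteness, here's a minimal Python/NumPy sketch of that idea: render the periphery at reduced resolution, render a small window around the gaze point at full resolution, and composite the two. The render_region() helper, the resolutions, and the fovea radius are made-up placeholders for illustration, not any headset vendor's actual pipeline.

    # Sketch of "low-res everywhere, full-res near the gaze point".
    # Everything here (sizes, radius, render_region) is a hypothetical stand-in.
    import numpy as np

    FRAME_W, FRAME_H = 3840, 2160   # per-eye framebuffer (illustrative)
    FOVEA_RADIUS_PX = 400           # full-quality zone around the gaze point
    PERIPHERY_SCALE = 4             # periphery rendered at 1/4 resolution

    def render_region(w, h):
        """Stand-in for the real renderer: returns an RGB image of size (h, w)."""
        return np.random.rand(h, w, 3).astype(np.float32)

    def foveated_frame(gaze_x, gaze_y):
        # 1. Render the whole view at reduced resolution, then upscale it.
        low = render_region(FRAME_W // PERIPHERY_SCALE, FRAME_H // PERIPHERY_SCALE)
        frame = np.kron(low, np.ones((PERIPHERY_SCALE, PERIPHERY_SCALE, 1),
                                     dtype=np.float32))
        # 2. Render only the foveal window at full resolution and paste it in.
        x0 = max(0, gaze_x - FOVEA_RADIUS_PX)
        y0 = max(0, gaze_y - FOVEA_RADIUS_PX)
        x1 = min(FRAME_W, gaze_x + FOVEA_RADIUS_PX)
        y1 = min(FRAME_H, gaze_y + FOVEA_RADIUS_PX)
        frame[y0:y1, x0:x1] = render_region(x1 - x0, y1 - y0)
        return frame

    # Example: gaze slightly right of center.
    frame = foveated_frame(2200, 1080)

The display itself stays at full pixel density; only the per-frame rendering work gets reallocated toward wherever the eyes are pointed.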


That’s just so they have enough compute/bandwidth to drive the display. Other posters are correct that the DPI decreases away from the center and various optical aberrations increase. Foveated rendering won’t help with that.


Foveal jitter is far from covering the width of your vision. They can overshoot by (maybe, I'm guessing) 2-3x the area of your precise vision and all of the jitter will be covered.



