
Eh... the author doesn't know how to compare solid angles.

Anyway, I wonder what happens when VR manufacturers increase FOV to the point where the traditional perspective transform (a homogeneous linear transform) becomes impractical due to distortion towards the edges (example: https://www.youtube.com/watch?v=ICalcusF_pg).

You need >120° FOV for an immersive experience, but current graphics pipelines are built on the assumption that straight lines in world space map to straight lines in screen space, so you can't do proper curvilinear wide-angle perspective with the existing triangle rasterizer architecture.
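To put a number on the edge distortion: with a planar projection, a ray theta degrees off-axis lands at screen coordinate tan(theta), so pixel density per degree grows like 1/cos^2(theta). Quick back-of-the-envelope sketch in Python (the FOV values are just illustrative):

    import math

    # Planar (rectilinear) projection: a ray at angle theta lands at
    # x = tan(theta), so pixel density per degree goes as
    # d(tan)/d(theta) = 1/cos^2(theta) and explodes towards the edge.
    for fov_deg in (90, 120, 150, 170):
        half = math.radians(fov_deg / 2)
        edge_over_center = 1.0 / math.cos(half) ** 2   # vs 1x at center
        print(f"{fov_deg} deg FOV: edge/center density = {edge_over_center:.1f}x")
    # 90 -> 2.0x, 120 -> 4.0x, 150 -> 14.9x, 170 -> 131.6x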




> "straight lines in world space map to straight lines in screen space"

Then you post-process the screen space with something like a fish-eye shader to warp the image appropriately. Sure, you'll lose some resolution near the borders, but the human eye won't care because the borders fall outside its high-resolution (foveal) area.
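Something like this remap, sketched CPU-side in Python/numpy (fisheye_warp and the equidistant lens model are my own illustration; a real implementation would be a fragment shader doing the same lookup per pixel):

    import numpy as np

    # Remap a rectilinear render 'src' (HxWx3 float array, assumed to
    # cover fov_deg) to an equidistant fisheye image, where the output
    # radius is proportional to the view angle.
    def fisheye_warp(src, fov_deg=150.0):
        h, w, _ = src.shape
        half = np.radians(fov_deg / 2)
        ys, xs = np.mgrid[0:h, 0:w]
        u = (xs / (w - 1)) * 2 - 1          # normalized coords in [-1, 1]
        v = (ys / (h - 1)) * 2 - 1
        r = np.sqrt(u**2 + v**2)
        theta = r * half                    # equidistant: angle linear in r
        with np.errstate(invalid="ignore", divide="ignore"):
            # Rectilinear source radius for that angle: tan(theta)/tan(half)
            scale = np.where(r > 0, np.tan(theta) / (np.tan(half) * r), 1.0)
        su = np.clip((u * scale + 1) / 2 * (w - 1), 0, w - 1).astype(int)
        sv = np.clip((v * scale + 1) / 2 * (h - 1), 0, h - 1).astype(int)
        out = src[sv, su]
        out[theta > half] = 0               # outside the source FOV
        return out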


The problem is that without eye tracking, nothing stops you from looking straight at the borders and seeing the artifacts.


When I was using VR a lot, I just kind of stopped looking with my eyes outside of a small area around the middle and mostly used my neck. When looking towards the edge doesn't look great, I think most people adapt and just start subconsciously working around the limitation. Maybe it partially comes from being a glasses wearer almost all my life (since around age 7-8), so my vision is already basically useless for detail around the edges, but it wasn't a huge jump for me.


In practice, people don’t make incredibly long saccades. Instead, there’s some coordination so that the neck or body movements do the bulk of the work.


Then you render with multiple cameras rotated around their centre, using the same techniques as cubemap rendering. With a high field of view, rendering a single spherical view at homogeneous resolution would be inefficient for VR, not only due to the lack of hardware support (without noticeable artefacts or some subdivision scheme) and the resulting distortion, but also due to the foveated nature of human vision. You'd want a small view frustum rendered at a much higher pixels-per-degree density, blended with a lower-resolution rendering of the entire field of view (optimally with a medium-resolution view frustum surrounding the smaller one for a more progressive blend). See: https://www.youtube.com/watch?v=lNX0wCdD2LA
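The cubemap lookup itself is simple: pick the face whose axis dominates the view direction, so no single rectilinear camera has to cover more than 90 degrees per axis. Rough Python sketch (the face/axis conventions here are simplified, not the exact OpenGL layout):

    import numpy as np

    FACES = ["+x", "-x", "+y", "-y", "+z", "-z"]

    # Map a view direction to a cube face plus face-local (u, v) in
    # [-1, 1]: the dominant axis picks the face, the other two
    # components are projected onto it.
    def cubemap_face_uv(d):
        x, y, z = np.asarray(d, dtype=float)
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:
            face, u, v, m = (0 if x > 0 else 1), -z * np.sign(x), y, ax
        elif ay >= az:
            face, u, v, m = (2 if y > 0 else 3), x, -z * np.sign(y), ay
        else:
            face, u, v, m = (4 if z > 0 else 5), x * np.sign(z), y, az
        return FACES[face], u / m, v / m

    print(cubemap_face_uv([0.2, 0.1, -1.0]))   # ('-z', -0.2, 0.1)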

With current GPUs and game engines you need multiple cameras to achieve this effect, but it has the advantage of drastically lowering the computing demands for high-resolution VR graphics (compare 8k per eye vs just two blended 1080p views).
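Back-of-the-envelope numbers behind that comparison (pixel counts only):

    full_8k  = 7680 * 4320          # ~33.2M pixels per eye
    foveated = 2 * 1920 * 1080      # two blended 1080p views: ~4.1M
    print(round(full_8k / foveated, 1))   # 8.0x fewer pixels to shade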

You could also ray trace a single view with non-homogeneous resolution that directly takes into account the distortion characteristics of the HMD lenses, but it would likely be a lot less efficient than traditional rasterization with multiple blended cameras.
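Generating rays that match the lens is the easy part; it's the per-ray shading that hurts. A sketch, assuming an idealized equidistant lens model (a real HMD would need its measured distortion profile):

    import numpy as np

    # Primary ray direction for normalized image coords (u, v) in
    # [-1, 1], with sample density following an equidistant lens model
    # instead of a planar image plane. -z is forward.
    def lens_ray(u, v, fov_deg=150.0):
        half = np.radians(fov_deg / 2)
        r = np.hypot(u, v)
        theta = r * half                  # view angle linear in radius
        phi = np.arctan2(v, u)
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         -np.cos(theta)])

    print(lens_ray(0.0, 0.0))             # straight ahead: [0, 0, -1]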


> You need >120° FOV for an immersive experience

Where does this come from? Is this a reference to research? I find the 110-120 degree FOV of current-day VR pretty immersive.

Does it have to do with: "Objects that we don't first (subconsciously) notice in our peripheral vision, we simply don't believe." (From another thread.)


Human vision covers roughly 120 degrees of binocular FOV when looking straight ahead. However, you can also move your eyes, so the headset FOV has to be >120 degrees if you don't want to end up looking at its edges.


> but current graphics pipelines are built on the assumption that straight lines in world space map to straight lines in screen space, so you can't do proper curvilinear wide-angle perspective with the existing triangle rasterizer architecture.

Isn't that approximation still fine as long as a single triangle doesn't span more than, say, 5 degrees of your FOV? If you are trying to represent a real scene, your triangles will rarely get that big. If you really want to display a huge triangle, you can just subdivide it into a thousand smaller ones.
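A sketch of that subdivision in Python (the 5-degree threshold and the camera-at-origin setup are just for illustration): split edges until every piece subtends less than the threshold, then feed the dense vertices through whatever curvilinear projection you want.

    import numpy as np

    # Recursively 4-way split a triangle (given in camera space, camera
    # at origin) until no edge subtends more than max_angle_deg.
    def subdivide(tri, max_angle_deg=5.0):
        a, b, c = [np.asarray(p, dtype=float) for p in tri]
        def ang(p, q):                     # angle between two view rays
            p, q = p / np.linalg.norm(p), q / np.linalg.norm(q)
            return np.degrees(np.arccos(np.clip(p @ q, -1.0, 1.0)))
        if max(ang(a, b), ang(b, c), ang(c, a)) <= max_angle_deg:
            return [(a, b, c)]
        ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
        return (subdivide((a, ab, ca), max_angle_deg)
                + subdivide((ab, b, bc), max_angle_deg)
                + subdivide((ca, bc, c), max_angle_deg)
                + subdivide((ab, bc, ca), max_angle_deg))

    # A big triangle spanning ~130 degrees of FOV ends up as roughly a
    # thousand small ones, matching the estimate above.
    tris = subdivide(([-5, -1, -2], [5, -1, -2], [0, 4, -2]))
    print(len(tris), "triangles after subdivision")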


You should still take it into account, because when you move your head around it makes things warp, which is at best weird and at worst dizzying.


Check out 360-degree panorama formats, especially "Cubic" [1]. A similar approach was used back in 1992 in the CAVE [2], using 6 virtual cameras.

[1] https://wiki.panotools.org/Panorama_formats#Cubic [2] https://en.wikipedia.org/wiki/Cave_automatic_virtual_environ...



