Peripheral vision in Quake (github.com/shaunlebron)
231 points by undershirt on March 10, 2015 | 25 comments



I remember when you could run Doom over a LAN. You'd set one monitor to be your forward view, and then set two more monitors to be your left and right views.

http://doom.wikia.com/wiki/Doom_networking_component

> The higher level d_net.c file contains the system-independent portion of the networking code. The code has separate concepts of node and player: a node is simply a computer connected to the game. Each player has a corresponding node, but a node does not necessarily have a player. This was used to implement the "three monitor" system which existed in early versions of Doom: the "left" and "right" screens were nodes but not players.

I only played that way a couple of times but it was fun.


It was taken out of the game in later patches because of a netcode rewrite; as mentioned in the wiki article, the original netcode implementation brought IPX networks to their knees. Chocolate Doom re-implements this feature:

http://www.chocolate-doom.org/wiki/index.php/Three_screen_mo...


Wow, this Pannini projection is awesome, and I had never heard of it before!

One of the first links of the article: http://tksharpless.net/vedutismo/Pannini/


Damn, I hadn’t been paying attention to the panorama computation world in the last few years, and hadn’t heard of this. This is indeed totally awesome. Makes me want to buy a fisheye lens and go play around.

I wonder how well such projections would work in a VR headset context.

In any event, they should make for really great demos of e.g. 3d architectural models, or video walkthroughs of wide spaces, if shown in high resolution on a large enough screen.


[deleted]


It's PaNNini, not "panini". Two Ns together, three Ns in total. Pannini.


Note that nobody seems to know the "correct" spelling of Giovanni Paolo Panini's last name.

Edit: Looks like it's spelled Panini.

"Arisi showed that the spelling Pannini was a corrupted inscription identifying the artist, rather than a signature, as all of his known signatures read simply Panini." - Mary L. Myers, Architectural and Ornament Drawings, p.40


Neither have I. Anyone know how to use this in Lightroom on my own wide-angle photos?


I think you'll have to use a separate app like Hugin for now, which should exist for your platform. Results will also depend on your fisheye's projection. Some discussion at http://www.dpreview.com/forums/thread/3392175


Wow, this is really clever. I've been thinking a lot about panoramic rendering -- that is, rendering panorama shots directly in a raytracer/raymarcher -- and I may take this technique more or less directly. When rendering with a ray-based approach (rather than raster-based), there should be no need to postprocess; simply map the globe onto the lens, and use that to determine your ray direction, making this lossless and effectively free.
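
For anyone wanting to try that, here's a minimal sketch of the ray-generation side in Python (my own code, not from the article; the inverse-Pannini algebra follows Sharpless's paper linked elsewhere in this thread, and pannini_ray is just a name I made up):

    import math

    def pannini_ray(x, y, d=1.0):
        # Map a screen point (x, y) to a unit ray direction under the
        # Pannini projection with compression parameter d (d=0 is the
        # usual rectilinear projection, d=1 the classic Pannini).
        # Invert x = S*sin(lon), S = (d+1)/(d+cos(lon)) for cos(lon):
        k = (x * x) / ((d + 1.0) ** 2)
        disc = k * k * d * d - (k + 1.0) * (k * d * d - 1.0)
        cos_lon = (-k * d + math.sqrt(disc)) / (k + 1.0)
        s = (d + 1.0) / (d + cos_lon)
        lon = math.atan2(x, s * cos_lon)
        lat = math.atan2(y, s)  # from y = S*tan(lat)
        # Spherical -> Cartesian, +z forward, +x right, +y up:
        return (math.sin(lon) * math.cos(lat),
                math.sin(lat),
                math.cos(lon) * math.cos(lat))

Feed the result straight into your ray caster; since d=0 degenerates to rectilinear, you can sanity-check it against your existing pinhole camera.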

Can't wait to play around with this -- great work!


The interesting thing about rendering a full (spherically complete) panorama is that you can render the static aspects of the scene once, and you don't have to update until the viewpoint actually translates in XYZ (i.e. mouselook is free). I think this would be particularly useful for situations where you are locked in a seat (e.g. you could render the full cockpit once).
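
To make that caching structure concrete, here's a sketch (entirely my own; render_faces and sample_faces stand in for whatever your renderer actually provides, and only yaw is shown, pitch being the same idea):

    import math

    def rotate_yaw(ray, yaw):
        # Rotating the lookup direction is all mouselook has to do
        # once a full spherical panorama is cached.
        x, y, z = ray
        c, s = math.cos(yaw), math.sin(yaw)
        return (c * x + s * z, y, -s * x + c * z)

    class PanoramaCache:
        def __init__(self, render_faces, sample_faces):
            self.render_faces = render_faces  # expensive: draws 6 views at a position
            self.sample_faces = sample_faces  # cheap: reads one sample from the cube
            self.pos, self.faces = None, None

        def sample(self, pos, yaw, ray):
            if pos != self.pos:  # only translation invalidates the cache
                self.faces = self.render_faces(pos)
                self.pos = pos
            return self.sample_faces(self.faces, rotate_yaw(ray, yaw))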


There's a video demo at https://www.youtube.com/watch?v=jQOJ3yCK8pI showing the various kinds of projections from this code.

The video dates to 2011, but the repository has received many additional commits since then.


Feels like a very bad fever. Funny how linear projections don't scale above 200 degrees, while ~non-Euclidean~ ones have no trouble.
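
That's inherent in the math: a linear (rectilinear) projection puts a point at angle phi off-center at x = tan(phi), which diverges at 90 degrees (i.e. 180 degrees total FOV), while the Pannini map x = (d+1)*sin(phi)/(d+cos(phi)) stays finite well past that. A quick check in Python (my own sketch, not from the article):

    import math

    d = 1.0  # Pannini compression parameter
    for deg in (0, 45, 80, 89, 100):  # off-center angle; 100 deg = 200 deg total FOV
        phi = math.radians(deg)
        rect = math.tan(phi) if deg < 90 else float("inf")  # undefined at/past 90
        pan = (d + 1) * math.sin(phi) / (d + math.cos(phi))
        print(f"{deg:3d} deg  rectilinear x = {rect:7.2f}  pannini x = {pan:5.2f}")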


It's hard to believe we have 170-degree vision (seeing as we look straight ahead), so I tried it. I stood up with my arms stretched out and back slightly (i.e. open 200 degrees), looking at a point straight ahead. I wiggled my fingers but couldn't see them. Keeping my arms outstretched, I then continued looking at the point ahead, wiggling my right hand and moving my right arm forward slowly (without bending my elbow) until I noticed the wiggling. Then, still looking at the same point ahead, I repeated with my left arm, again without bending my left elbow. After both were in my peripheral vision I checked to see where my arms were.

Sure enough, my arms were stretched out nearly 180 degrees! They were only ever so slightly forward of straight horizontal.

It's insane that we have that kind of peripheral vision. Try it!


Also related: I learned a lot about people's perception of FOV in FPS games from this Reddit thread: http://www.reddit.com/r/Games/comments/2yi4yx/blizzards_stan...


I'm waiting for AR goggles that give me 360-degree vision: balancing a highly detailed point of focus with wrap-around peripheral awareness. Never accidentally cut someone off on the sidewalk again.


Here's a sunglasses concept that extends your peripheral vision with just a lens, no electronics needed. http://objects.designapplause.com/2010/nike-hindsight-glasse...

And if you're going electronic, you might as well get the full 360-degree vertical too - just see everything all the time! Use this projection: http://www.pouet.net/prod.php?which=27000


Oh man I wish those were available for purchase. Actually I may just try to make something like this myself. One of the main times I regret having only one eye is when I'm changing lanes on my bike in traffic. (I've made my peace with not being able to play ping-pong.) My sore neck would definitely appreciate this.


I've been thinking about this for a long time now, but never researched it.

Does that mean you can't do it with some particular view matrix?

I always wondered if it's possible to have a very low FOV near the center that still accounts for a large enough part of the screen, with the FOV progressively increasing away from the center. That way you would have high detail near the center of the camera, but a wider view angle near the edges.
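
That kind of foveated mapping is easy to express ray-by-ray (though not with a matrix alone, as the reply below explains). A sketch of one way to do it, entirely my own and just for illustration:

    import math

    def foveated_ray(x, y, max_angle=math.radians(100), p=2.0):
        # Map screen coords (x, y) in [-1, 1] to a ray direction where
        # the view angle grows like r**p with screen radius r: for p > 1
        # angular resolution is high near the center and the effective
        # FOV widens progressively toward the edges.
        r = math.hypot(x, y)
        if r == 0.0:
            return (0.0, 0.0, 1.0)
        theta = max_angle * r ** p  # nonlinear radial remap
        sx, sy = x / r, y / r       # direction within the image plane
        return (sx * math.sin(theta), sy * math.sin(theta), math.cos(theta))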


All the transformations your graphics card's matrices can perform are affine or projective [1]. One effect of this is that straight lines always stay straight lines. If you have a look at the pictures, you will see that the Pannini projection does not preserve lines. So no, this cannot be done with a view matrix.

[1] http://en.wikipedia.org/wiki/Affine_transformation#Propertie...
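
A quick numerical check of that argument (my own sketch): project three collinear points once with a pinhole/matrix-style projection and once with the forward Pannini formula. The first keeps them on a line, the second doesn't:

    import math

    def perspective_project(p, f=1.0):
        # What a 4x4 projection matrix does after the perspective divide.
        x, y, z = p
        return (f * x / z, f * y / z)

    def pannini_project(p, d=1.0):
        x, y, z = p
        lon = math.atan2(x, z)
        lat = math.atan2(y, math.hypot(x, z))
        s = (d + 1.0) / (d + math.cos(lon))
        return (s * math.sin(lon), s * math.tan(lat))

    # Three collinear points on the 3D line y = 1, z = 2:
    pts = [(-2.0, 1.0, 2.0), (0.0, 1.0, 2.0), (2.0, 1.0, 2.0)]
    print([perspective_project(p) for p in pts])  # all land on y' = 0.5
    print([pannini_project(p) for p in pts])      # middle point bulges off the line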


Can't it be done just with a shader? Why did he use Quake to implement it?


> Can't it be done just with a shader?

Yep. If you look at the "How?" and "Performance vs Quality" sections, you can see that you need to render the scene six times to get the surrounding environment, so all you would need to do is write a shader for the projection[0]. (Rendering the scene six times is pretty common in graphics, in order to generate light probes for dynamic lighting and global illumination.)

I suspect with a modern GPU implementation (Vulkan) and some other minor optimizations, this could run pretty easily in real-time.

[0] - http://tksharpless.net/vedutismo/Pannini/panini.pdf
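
For the sampling half, direction -> face + UV is just a biggest-axis test. A Python sketch of the standard cube-map lookup (my own code, following the usual OpenGL face conventions):

    def cubemap_lookup(ray):
        # Pick the cube face and (u, v) in [0, 1] for a lookup direction.
        x, y, z = ray
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:
            face, u, v, m = ("+x", -z, -y, ax) if x > 0 else ("-x", z, -y, ax)
        elif ay >= az:
            face, u, v, m = ("+y", x, z, ay) if y > 0 else ("-y", x, -z, ay)
        else:
            face, u, v, m = ("+z", x, -y, az) if z > 0 else ("-z", -x, -y, az)
        return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)

Pair it with an inverse-projection function (Pannini, fisheye, whatever) that turns each output pixel into a direction, and the whole post-process is two small functions per pixel.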


Oh wow. If you follow the link to tksharpless, they have a YouTube video. I got dizzy when I stuck my nose right up to the screen. The next best thing to GearVR, if you don't count the nausea from involuntary camera movement. From further away it just looks distorted (as it should) because of the FoV mismatch.


What is the difference between setting a high FOV and using a Pannini or fisheye projection?

Does that mean there is less distortion towards the center?

I haven't watched the video yet...

How difficult would it be to integrate a 170-degree Pannini camera using shaders instead of using Quake? I don't understand why he used Quake for this...


This is fantastic. I found myself wanting to scream Byakugan! while watching the demo... :p


:o

Now I want a Pannini projection converter for my GoPro :)



