I remember when you could run Doom over a LAN. You'd set one monitor to be your forward view, and then set two more monitors to be your left and right views.
> The higher level d_net.c file contains the system-independent portion of the networking code. The code has separate concepts of node and player: a node is simply a computer connected to the game. Each player has a corresponding node, but a node does not necessarily have a player. This was used to implement the "three monitor" system which existed in early versions of Doom: the "left" and "right" screens were nodes but not players.
I only played that way a couple of times but it was fun.
It was taken out of the game in later patches because of a netcode rewrite; as mentioned in the wiki article, the original netcode implementation brought IPX networks to their knees. Chocolate Doom re-implements this feature.
Damn, I hadn’t been paying attention to the panorama computation world in the last few years, and hadn’t heard of this. This is indeed totally awesome. Makes me want to buy a fisheye lens and go play around.
I wonder how well such projections would work in a VR headset context.
In any event, they should make for really great demos of e.g. 3d architectural models, or video walkthroughs of wide spaces, if shown in high resolution on a large enough screen.
Note that nobody seems to know the "correct" spelling of Giovanni Paolo Panini's last name.
Edit: Looks like he's spelled Panini.
"Arisi showed that the spelling Pannini was a corrupted inscription identifying the artist, rather than a signature, as all of his known signatures read simply Panini." - Mary L. Myers, Architectural and Ornament Drawings, p.40
Think you'll have to use a separate app for now, like Hugin, which should exist for your platform. Also, things will depend on your fisheye's projection. Some discussion at http://www.dpreview.com/forums/thread/3392175
Wow, this is really clever. I've been thinking a lot about panoramic rendering -- that is, rendering panorama shots directly in a raytracer/raymarcher -- and I may take this technique more or less directly. When rendering with a ray-based approach (rather than raster-based), there should be no need to postprocess; simply map the globe onto the lens, and use that to determine your ray direction, making this lossless and effectively free.
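That "map the globe onto the lens" step amounts to inverting the Pannini mapping per pixel. A minimal sketch in Python, assuming the formulas from the Pannini paper (x = S·sin φ, y = S·tan θ, with S = (d+1)/(d+cos φ)); the function name and axis conventions here are my own:

```python
import math

def pannini_ray(x, y, d=1.0):
    """Map Pannini screen coordinates (x, y) to a unit ray direction.

    d is the Pannini compression parameter: d=0 degenerates to plain
    rectilinear, d=1 is the classic Pannini projection.
    Inverts x = S*sin(lon), y = S*tan(lat), S = (d+1)/(d+cos(lon)).
    """
    k = (x * x) / ((d + 1.0) ** 2)
    # Solve (k+1)*c^2 + 2*k*d*c + (k*d^2 - 1) = 0 for c = cos(lon),
    # taking the root on the viewer's side of the cylinder.
    disc = k * k * d * d - (k + 1.0) * (k * d * d - 1.0)
    clon = (-k * d + math.sqrt(disc)) / (k + 1.0)
    s = (d + 1.0) / (d + clon)
    lon = math.atan2(x, s * clon)
    lat = math.atan2(y, s)
    # Spherical -> Cartesian: +z is the view axis, +x right, +y up.
    return (math.sin(lon) * math.cos(lat),
            math.sin(lat),
            math.cos(lon) * math.cos(lat))
```

In a raytracer you'd call this once per pixel instead of building a pinhole ray, which is exactly the "lossless and effectively free" part: no intermediate raster, no resampling.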
Can't wait to play around with this -- great work!
The interesting thing about rendering a full (spherically complete) panorama is that you can render the static aspects of the scene once, and you don't have to update until the viewpoint translates in X, Y, or Z (i.e. mouselook is free). I think this would be particularly useful for situations where you are locked in a seat (i.e. you could render the full cockpit once).
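A sketch of that reuse, assuming the static scene has been rendered once into six cube faces; the face names and axis conventions here are mine (real cubemap conventions, e.g. OpenGL's, differ in sign details):

```python
def cubemap_lookup(dx, dy, dz):
    """Pick the cube face and (u, v) in [0,1]^2 for a view direction.

    Mouselook only changes the direction fed in; the six prerendered
    faces never need re-rendering until the camera translates.
    """
    ax, ay, az = abs(dx), abs(dy), abs(dz)
    if az >= ax and az >= ay:      # front/back faces
        face, u, v, m = ('+z' if dz > 0 else '-z'), (dx if dz > 0 else -dx), dy, az
    elif ax >= ay:                 # right/left faces
        face, u, v, m = ('+x' if dx > 0 else '-x'), (-dz if dx > 0 else dz), dy, ax
    else:                          # top/bottom faces
        face, u, v, m = ('+y' if dy > 0 else '-y'), dx, (-dz if dy > 0 else dz), ay
    # Project onto the face plane and remap from [-1, 1] to [0, 1].
    return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)
```

Rotating the view just changes which texels get sampled; the render-six-times cost is paid once per camera position, not once per frame.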
It's hard to believe we have 170-degree vision (seeing as we look straight ahead), so I tried it. I stood up with my arms stretched out and back slightly (i.e. open 200 degrees), looking at a point straight ahead. I wiggled my fingers but couldn't see them. Keeping my arms outstretched, I then continued looking ahead at the point, wiggling my right hand and moving my right arm forward slowly (without bending my elbow) until I noticed the wiggling. Then, still looking at the same point ahead, I repeated with my left arm, again without bending my left elbow. After both were in my peripheral vision, I checked to see where my arms were.
Sure enough, my arms were nearly stretched out 180 degrees! They were only forward ever so slightly off of straight horizontal.
It's insane that we have that kind of peripheral vision. Try it!
I'm waiting for AR goggles that give me 360-degree vision: balancing a highly detailed point of focus with wrap-around peripheral awareness. Never accidentally cut someone off on the sidewalk again.
And if you're going electronic, you might as well get the full 360-degree vertical too - just see everything all the time! Use this projection: http://www.pouet.net/prod.php?which=27000
Oh man I wish those were available for purchase. Actually I may just try to make something like this myself. One of the main times I regret having only one eye is when I'm changing lanes on my bike in traffic. (I've made my peace with not being able to play ping-pong.) My sore neck would definitely appreciate this.
I've been thinking about this for a long time now, but never researched it.
Does that mean you can't do it with some particular view matrix?
I always wondered if it's possible to have a very low FOV near the center that still covers a large enough part of the screen, with the FOV progressively increasing away from the center of the camera. That way you would have high detail near the center of the camera, but more view angle near the edges.
All the transformations your graphics card's matrices can perform are projective [1]. One effect of this is that straight lines always stay straight. If you have a look at the pictures, you will see that the Pannini projection does not preserve straight lines. So no – this cannot be done with a view matrix.
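A quick numeric check of the straight-line claim, using the forward Pannini formulas (x = S·sin φ, y = S·tan θ, with S = (d+1)/(d+cos φ)): with d = 0 the projection degenerates to rectilinear and three collinear world points stay collinear, but with d = 1 the middle point bows upward. Function and variable names are mine:

```python
import math

def pannini_project(X, Y, Z, d):
    """Forward Pannini projection of a world-space point (view along +Z)."""
    lon = math.atan2(X, Z)                   # azimuth around the vertical axis
    s = (d + 1.0) / (d + math.cos(lon))
    x = s * math.sin(lon)
    y = s * (Y / math.hypot(X, Z))           # S * tan(latitude)
    return x, y

# Three collinear points on a horizontal line, one unit up, one unit ahead.
pts = [(-1.0, 1.0, 1.0), (0.0, 1.0, 1.0), (1.0, 1.0, 1.0)]

rect = [pannini_project(*p, d=0.0) for p in pts]  # rectilinear: stays a line
pann = [pannini_project(*p, d=1.0) for p in pts]  # Pannini: line bends
```

Under d = 0 all three projected y values are equal (the line stays straight); under d = 1 the center point projects higher than the endpoints, which no single 4x4 matrix could reproduce.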
Yep. If you look at the "How?" and "Performance vs Quality" sections, you can see that you need to render the scene six times to get the surrounding environment, so all you'd need to do is write a shader for the projection[0]. (Rendering the scene six times is pretty common in graphics, in order to generate light probes for dynamic lighting and global illumination.)
I suspect with a modern GPU implementation (Vulkan) and some other minor optimizations, this could run pretty easily in real-time.
Oh wow. If you follow the link to tksharpless, they have a YouTube video. I got dizzy when I stuck my nose right up to the screen. Next best thing after GearVR, if you don't count the nausea from involuntary camera movement. From further away it just looks distorted (like it should) because of the FoV mismatch.
http://doom.wikia.com/wiki/Doom_networking_component