Tricks Make Virtual Reality Feel Real (2016) (nautil.us)
84 points by dnetesn on April 21, 2018 | hide | past | favorite | 15 comments



Real-time graphics in general, not just VR, has always had "tricks" at its core. What has drawn me to that subfield of graphics is, I think, the collection of 'hacks' and deceptions needed for the user/viewer to buy into the virtual environment, whether on a desktop screen or in an HMD. There's a cleverness about it all that I enjoy -- from billboarding faraway objects instead of rendering them as full 3D geometry to baking static shadows into textures before runtime.
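(To make the billboarding trick concrete: the renderer keeps a flat quad and just rotates it to face the camera each frame. A minimal illustrative sketch, not taken from any particular engine:)

```python
import math

def billboard_yaw(obj_pos, cam_pos):
    """Yaw (radians) that rotates a flat quad at obj_pos to face the camera.

    Cylindrical billboarding: rotate only about the vertical (y) axis,
    the classic trick for faraway trees, sprites, and impostors.
    """
    dx = cam_pos[0] - obj_pos[0]
    dz = cam_pos[2] - obj_pos[2]
    return math.atan2(dx, dz)

# Camera straight ahead along +z: no rotation needed.
print(billboard_yaw((0, 0, 0), (0, 0, 10)))  # 0.0
# Camera off to the side along +x: quad turns a quarter circle.
print(billboard_yaw((0, 0, 0), (10, 0, 0)))  # pi/2
```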

I wonder if there have been any in-depth articles examining comparisons between real-time graphics and stagecraft. The design of props and backgrounds in the theatre has similar objectives -- not necessarily fooling the audience into thinking it's real, but using a collection of 'tricks' to extend the capabilities of the stage to achieve greater effects. When attending a performance of The Nutcracker a few months ago, I found myself fascinated by how the multiple layers of forced-perspective backdrops offered a greater sense of depth to the stage.


EDIT: Another thing this article makes me think about:

If you're interested in redirected walking, you might also be interested in the CyberMotion simulator [0]. I had the chance to see it in action when attending IEEE VR 2018. It's for motion sickness research, and it allows for mapping arbitrary motion into a limited robotic range of motion, in order to give a realistic sense of acceleration in different directions to the user.

[0] https://www.youtube.com/watch?v=ThkymYRP1g8
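(For anyone unfamiliar with redirected walking itself: the core trick is a rotation gain -- the virtual world turns slightly more or less than the user's head does, steering them inside the physical space. A toy sketch; the gain value and function name are illustrative assumptions, not figures from the CyberMotion work:)

```python
def redirected_heading(real_heading_delta, rotation_gain=1.3):
    """Map a real head rotation (degrees) to a larger virtual rotation.

    With a gain > 1 the user physically turns less than they believe
    they do in VR, which can keep a 'straight' virtual walk curving
    inside a bounded real room. The 1.3 gain here is an assumed,
    illustrative value below typical detection thresholds.
    """
    return real_heading_delta * rotation_gain

print(redirected_heading(10.0))  # 13.0 virtual degrees for 10 real degrees
```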


Reading the source code of a 3D engine can be similar to having a magic trick explained to you. At least the first time I did, I found it quite disappointing, even anticlimactic: oh, so this actually only works if you see it from this specific angle, and you cannot move that light in real time, and there are just two of them, ...


There's a similar anticlimax when looking at the big flashy publication videos at graphics conferences. Some new technique looks absolutely amazing, then you start to notice the hand-picked best conditions shown in the video (oh, this method only works in static scenes, when there's no motion blur, etc etc).


Now that's a useful article. Unusual for Medium. It even has citations.

Those tricks are used commercially in the "Star Wars Experience" at Disneyland.[1][2] They use a custom-built real-world space which is much smaller than the virtual space the players think they're in.

This isn't going to work in your living room, unless you have a really big living room. But it could work in a classroom-sized room, or a storefront-sized room, or a back yard. You don't need a gymnasium-sized room.

To make this a mass market product, it needs to work without a custom space. The VR gear needs to be able to detect obstacles in the real world. It needs a depth camera. Not much range is needed, maybe 4m or so, enough to detect anything you could run into, trip over, or fall off. The second generation Kinect would probably work.

Real-world obstacles need to appear in the virtual world when necessary, even if it breaks the illusion. If there's something you could trip over in the back yard, that has to show up in the VR world. Worst case, the system dumps the VR world entirely and just gives the user an unmodified view of the real world. This isn't augmented reality, which is reality with a little VR added. It's VR with just enough real world added for safety.
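(The safety check described above could start out very simple: flag any depth sample inside the warning range and fall back to passthrough. A toy sketch; the function name and list-of-samples input are assumptions standing in for a real depth-camera frame:)

```python
def nearby_obstacles(depth_m, warn_dist=4.0):
    """Return True if any depth sample is within warn_dist metres.

    depth_m is a flat list of depth readings in metres; zero/negative
    values are treated as invalid samples. The 4 m range matches the
    comment above. A real system would also segment the floor plane
    and check for drop-offs, which this sketch omits.
    """
    return any(0.0 < d < warn_dist for d in depth_m)

if nearby_obstacles([5.0, 6.2, 3.1]):
    pass  # e.g. show the obstacle in VR, or dump to passthrough view
```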

The technology used for the Star Wars experience costs about $10,000 per player. That has to come way down without degrading the experience quality.

[1] https://disneyland.disney.go.com/attractions/downtown-disney... [2] http://www.businessinsider.com/star-wars-secrets-of-the-empi...


> This isn't going to work in your living room, unless you have a really big living room.

I don't know about that. Unseen Diplomacy [0] uses similar tricks to simulate a much larger space than the size of the room. It requires a minimum area of 3m x 4m, which is within the size of many people's rooms (admittedly, I only managed to find this much space when I temporarily had no furniture in the room). It's quite an effective experience.

[0] http://store.steampowered.com/app/429830/Unseen_Diplomacy/


Unseen Diplomacy is really cool... Its tricks are also not going to let you play Skyrim like you are in a holo-deck.


But I don't find it useful enough. Most of the real tricks used in VR, as opposed to realistic game graphics, are missing.

He mentions abstraction over detail. Good. He mentions cheap, battery-powered hardware -- smartphones. Good. And then he mentions some totally unimportant artefacts that fool our perception.

But he fails to explain the importance of latency and feedback over realism, and the difference in resolution requirements: the problems caused by excessive latency, the expected lawsuits, why VR didn't make it into the consumer world in the last 25 years. Technically, we were at the very same level 20 years ago already. We just didn't have smartphones.

A VR field of view is like a Super Cinemascope movie, around 3:1. 600x200 is perfect, as long as the tracking feedback is fast enough. This is completely opposite to modern GPU hardware and gaming/rendering apps; none of that is needed. A Hercules-like graphics card is good enough. High resolution is only needed in the centre of the field; for the corners, light and colour information is enough.
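(The centre-vs-corner idea is essentially foveated rendering: pick a shading rate by distance from the gaze point. A toy sketch -- the radii and rates are made-up illustrative values, not from the article:)

```python
import math

def shading_rate(px, py, cx, cy, fovea_radius=200.0):
    """Pick a shading rate (1 = full res, 4 = quarter res) for a pixel.

    (px, py) is the pixel, (cx, cy) the gaze point. Full resolution
    inside the foveal circle, progressively coarser toward the
    periphery, where light and colour information is enough.
    """
    r = math.hypot(px - cx, py - cy)
    if r < fovea_radius:
        return 1          # full resolution at the centre of the field
    if r < 2 * fovea_radius:
        return 2          # half resolution in the mid-periphery
    return 4              # quarter resolution at the corners

print(shading_rate(300, 100, 300, 100))  # 1: at the gaze point
print(shading_rate(900, 100, 300, 100))  # 4: far periphery
```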


That redirected walking video was weird... The guy walked what looked like about 10 feet forward in VR while covering what looked like twice that distance in a room that, enormous as it was, was still at least 4x too small to steer him in a circle and keep the straight line going...


Related: in [1] there's a VR version of the hyperbolic plane where there's "more space" than the Euclidean space we live in. So the person in the demo walks a loop and a half in order to end up back where he started in hyperbolic space. That group has some talks filmed on youtube that explain it in more detail if you want to listen, but I can't find the right snippet.

[1] https://www.youtube.com/watch?v=T7p-VgAWlao


I've been wondering if inertial engines could be placed on either side of a VR helmet, to combat motion sickness. When your VR avatar would be moving forward, the inertial engines would both pull backwards, hopefully tricking your inner ear. This could be extended to make all motion feel more natural.


Can you further explain your “inertial engine”? My understanding of physics says that it would only be able to actuate once and then slowly “reset”, ruining the illusion if you’re moving in any steady-state manner.


Gyroscopes are used in satellites to change orientation. It should be possible to use one on either side of the head to simulate basic balanced inertia.

https://en.wikipedia.org/wiki/Attitude_control
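(The momentum-exchange principle behind this kind of attitude control -- simplified here to a reaction wheel rather than a gyroscope -- can be sketched in one line of physics. Function and variable names are illustrative:)

```python
def body_rate_from_wheel(i_body, i_wheel, wheel_rate):
    """Body angular rate induced by spinning up an internal wheel.

    Conservation of angular momentum about one axis:
        i_body * w_body + i_wheel * w_wheel = 0
    so spinning the wheel one way rotates the body the other way.
    Units: moments of inertia in kg*m^2, rates in rad/s.
    """
    return -i_wheel * wheel_rate / i_body

# A head-mounted body (10 kg*m^2, exaggerated for illustration) with a
# small wheel (0.1 kg*m^2) spun to 100 rad/s counter-rotates at 1 rad/s.
print(body_rate_from_wheel(10.0, 0.1, 100.0))  # -1.0
```

Note this only exchanges angular momentum -- it can simulate rotational cues, not sustained linear acceleration, which matches the "actuate once then reset" objection above for the linear case.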


Ah. I was thinking linear inertia and you were describing rotational.


Of course the omnipresent Gwern makes an appearance in the comments.



