The actual quote is excellent (and already on this page). He's right -- the list of things to be done is obvious, and will take time.
And, I'd like to imagine he has some enjoyment at the thought -- if you saw his keynote showing off a precomputed(ish) raytraced walkthrough of their new office space, he was really enjoying himself and the possibilities.
What Hsun has is an incredibly good window into the silicon side of the problem, and he knows how many chip cycles he's got to address the amount of computation needed for physics, visual realism and presumably a latency budget to allow wireless support. Plus he can estimate what screen tech will be doing.
It's a prodigious amount of R&D, and it is a bit breathtaking. Finally having a reason to do it, and a taste of how appealing it can be is pretty cool.
You mean how many chip cycles he needs, right? How many he's got we all know: maybe 2. Then CMOS scaling is over. (Not that things will completely stop evolving, but they'll surely slow down.)
Well, TSVs (through-silicon vias) have evolved greatly, which means 3D-stacked dies could arrive soon.
3D transistors have already pushed out the limit of how small features can get before physics hits you in the face with a baseball bat.
On top of that, memristors and graphene could open a whole new world for chip designers.
If we have only 2 cycles worth of process left we are doomed :)
to put that into perspective:
"An Australian team announced that they fabricated a single functional transistor out of 7 atoms that measured 4 nm in length."
Maybe we can make our way down to single atom transistors eventually, but who knows how long it will take to make that commercially viable.
In 20 years? I think he's got at least 4, maybe 5. Of course, I don't know how big those evolutions will be in terms of advances, but I'm just a silicon dilettante, nothing like an expert.
I think the dispute here is that there are two different standards in play. He has time for 4-5 cycles, but Intel only has concrete plans for ~2 cycles. After that the existing die structures can't be meaningfully reduced anymore, and everything is up in the air. Maybe we get some wild, big-but-fast optical chips, maybe we get a breakdown of Moore's law.
Of course, 3D chips look fairly achievable and might eke out a few more cycles, but things will get interesting sometime before that 20-year deadline.
I don't think Intel plans for more than 2 fabrication cycles regardless of some gloom and doom omg quantum physics go away plox prophecy.
14nm FinFET GPUs are only coming in now, and they'll stay on 14nm for at least 3-4 years.
10nm GPUs will probably come out in 2019-2020, 7nm GPUs in 2025 or so, and 5nm GPUs around 2030. That's already ~15 years, and since everyone is predicting we'll have to stick with each process longer (the smaller we get, the more time it takes to optimize a node for good yields and performance), you probably have another 3-5 years there too.
So even if we don't adopt any major process changes like switching to graphene, solid-state photonics to replace the metal pathways, or even optical transistors or whatever else people are cooking up to keep improving our integrated circuits, he can still be correct about the time frame.
Also, regardless of what happens, he was talking about true VR immersion. That means a lot has to change beyond how good and convincing the graphics are, including learning how to develop VR applications, and since VR is a very new medium, that alone can take 20 years.
This is a good summary, thanks. And yeah, I only meant to speak to the argument in the comments over how many cycles we can "fit" in the next 20 years.
I think a lot of VR's biggest hurdles for that time aren't dependent on processing power. Focus and resolution issues will side-step some of it (seriously, our eyes are terrible outside of their central focus region!) because there's simply no need to render everything flawlessly as on a TV. Beyond that, there are going to be all kinds of design hurdles. Right now, we have nausea-inducing roller coaster rides and people running into walls because we can't sync up the (already impressive) visual experience with inner-ear confusion and physical boundaries. Just working out what the most compelling designs are for our current VR tech is the work of years, and that's without doing it on the treadmill of ever-improving tech!
The really big, really difficult one will be enabling dynamic focal depth. The lack of focal planes makes it difficult to imagine VR providing a lifelike experience.
Some volumetric/panoptic displays are getting closer, along with eye tracking, but this seems like one of those things you cannot fix in software. Just adding higher pixel resolution, improving "physics", or creating a more beautiful environment will not solve this. So, 20 years is probably right.
Nvidia and many others are working on this problem right now and making a lot of headway. They are called light field displays. One of the biggest players in the space is Magic Leap, which has attracted quite a bit of funding ($1.4B last I checked).
It'll be interesting to see what Magic Leap finally produces. Maybe it'll be an embarrassing bust, maybe it'll jump the whole field ahead five or ten years. I don't think I've ever seen so much money go to such a (public) unknown before.
It does seem to be a company like something out of fiction: all the skeptics who go see what's behind the curtain come back swearing it's amazing, but refusing to talk about it! I'm fascinated to see what they actually produce, and I'd be sweating a bit if I worked in VR tech at any other company.
Yeah, lightfield displays exist today and will only get better/cheaper. Also, accommodation is only one of many vision cues amongst stereoscopy and parallax, and one I personally don't miss all that much. When everything else is perfect, sure, it needs to be tackled, but there are much bigger fish to fry at the moment.
But 'all' you have to do is track the pupil and push focus onto the object the user is looking at. We're already rendering depth-of-field effects live in most games; there's no reason we couldn't shift the focal distance according to where the viewer's pupils are pointed.
Alternatively, you could cheat a bit and let the direction the head is pointing choose the depth of field, basically rendering a fairly wide depth of field around whatever the head is pointed at.
Anyway, you can get a pretty lifelike experience with just some basic atmospheric perspective. Look at photography with very wide depth of field [1], so everything is in focus simultaneously. You sorta just get used to it, and accept it.
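To make the "all you have to do" part a bit more concrete, here's a minimal sketch of the gaze-driven focus idea, assuming a tracker that reports a normalized gaze point and an engine that exposes a linear depth buffer. All the names and the smoothing constant are made up for illustration, not any particular SDK:

    import numpy as np

    def gaze_focus_distance(depth_buffer, gaze_uv, window=5):
        """Estimate focus distance from the depth buffer around the gaze point.

        depth_buffer: 2D array of linear eye-space depths (meters).
        gaze_uv: (u, v) gaze position in [0, 1] from the eye tracker.
        window: half-size of the pixel neighborhood to median over,
                which smooths out a little tracker jitter.
        """
        h, w = depth_buffer.shape
        x = int(np.clip(gaze_uv[0] * w, window, w - window - 1))
        y = int(np.clip(gaze_uv[1] * h, window, h - window - 1))
        patch = depth_buffer[y - window:y + window + 1, x - window:x + window + 1]
        return float(np.median(patch))

    def smoothed_focus(prev_focus, target_focus, dt, time_constant=0.15):
        """Ease toward the new focus distance instead of snapping, roughly
        mimicking how slowly the eye re-accommodates (~100-300 ms)."""
        alpha = 1.0 - np.exp(-dt / time_constant)
        return prev_focus + alpha * (target_focus - prev_focus)

    # Per frame: feed the result into the renderer's depth-of-field pass, e.g.
    # focus = smoothed_focus(focus, gaze_focus_distance(depth, tracker.gaze_uv()), dt)

Of course this only fakes the blur; as discussed below, it does nothing for the accommodation feedback from the eye muscles.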
Except you can't fake the feedback of muscles adjusting focus that the real world requires without different physical focal planes. You can mimic the focal effect, but your brain has the ability to recognize motor inputs to the eyes and act on this information (or lack thereof).
I think what we really need is a way to target specific focal distances via some sort of dynamic refraction. Otherwise we'll never get the proper motor feedback needed for immersion.
As someone who works on eye tracking for VR headsets, this is much easier said than done. Doing it well requires levels of accuracy, latency, and reliability that are not available in any eye tracker on the market today.
What about SMI already tracking pupils at 250Hz? https://www.youtube.com/watch?v=Qq09BTmjzRs That tech is still expensive and I don't know how accurate or reliable it is, but if it's good enough for foveated rendering, I would have expected it to be good enough for focus as well. Care to elaborate on why it wouldn't work, or is it just beyond your horizon of "today"?
It's not accurate or reliable enough (in addition to being very far from a consumer price point). Reliability in particular is difficult to assess and usually lacking in eye tracking systems. Also just because the cameras are running at 250 Hz doesn't mean the latency is 1/250 seconds or even close to that.
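To put rough numbers on the "250 Hz isn't 4 ms" point: the camera rate only bounds the sampling interval, not the end-to-end delay. A back-of-the-envelope budget (all stage numbers are illustrative guesses, not measurements of any particular tracker) looks like this:

    # Rough end-to-end eye-tracking latency budget (all numbers illustrative).
    stages_ms = {
        "exposure + sensor readout":   4.0,   # one 250 Hz frame
        "USB / ISP transfer":          2.0,
        "pupil detection + gaze fit":  3.0,
        "wait for next render frame":  5.5,   # half a 90 Hz frame on average
        "render + display scanout":   11.0,   # one 90 Hz frame
    }
    total = sum(stages_ms.values())
    print(f"camera interval: {1000/250:.1f} ms, gaze-to-photon: ~{total:.1f} ms")

So even with a fast sensor, the gaze-to-photon latency can easily be 5-6x the camera interval.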
The only reason for the lack of availability is the size of the market. The tech is already here, and you are probably holding it in your hand right now. Avago will be more than happy to sell you a tiny combo chip packing a camera/DSP able to capture >10K frames/s, track at >5 meters/s, and report position at over 1 kHz.
As someone working in the industry, would you care to guess how many years away we are from being able to do this at an acceptable level? Is this something for the next gen of hardware or is it much further away?
Maybe VR would work better if you just fed the same view to both eyes and ignored stereoscopy. Jaron Lanier used to do that with his original VR rig in the 1980s when one of the machines went down. He said nobody noticed.
I can assure you that the tech has advanced enough in the last three decades that you would immediately notice the lack of stereoscopic (non-parallax) depth perception.
The problem is your eyes can focus at different focal planes: on an object, past an object etc. I'd imagine it would be pretty jarring if the software just blurred everything around an object that happened to be at the x, y coordinates where your pupils were pointing.
This doesn't help. You need to physically change focus in your eye. Your brain uses the feedback from the eye muscles to ascertain distance. If your eye muscles don't need to adjust focus, then they're reporting that everything is at the same distance. If this disagrees with the stereoscopy or the context then you will get nausea.
> "Your brain uses the feedback from the eye muscles to ascertain distance."
Source? I'm under the impression that focus is a brain-first, muscles-second process. If the object was already in focus, I don't think your eyes would mind.
If that were true people with only one eye would be nauseous all of the time.
As I understand it (which I may not), this paper (which purports to be the only paper to have tested this effect) says they test vergence by displaying the images at various distances, to replicate varying levels of vergence demand.
They see that the closer it is to a realistic vergence/accommodation match the more comfortable the eyes.
However -- it should obviously be easier to tune a 3D image which is at a realistic distance than one which is not.
It seems to me, they may very well just be showing that effect, just showing that it is easier to adjust the illusion to render realistically at distance. This would make perfect sense.
Eventually it'll make more sense to just use the human brain as a renderer.
Perfect photorealism is fairly worthless without a solid method of interaction and the ability to literally suspend disbelief.
Direct neural interfaces will be capable of everything, not just rendering.
That said, I can't help but think of a scenario where we have functional neural interfaces but haven't yet mastered the human brain's grammar, and thus resort to "driving" the brain with traditionally-rendered images rather than feeding it the information required to create a scene from scratch.
In such a scenario, the requisite quality of the traditionally rendered images might be interesting. Would photorealism be required, or simply a low-quality rendering to convey general structure? Presumably the brain would enhance the latter case to the point of photorealism—or at least to the dream equivalent.
Something like this would be the key breakthrough. For all their versatility, the human eyes have not evolved to be (have not been designed as, depending on your viewpoint :) an optical interface for digital systems. Most of the hard work at the moment is in working around the "limitations" of our visual pipeline.
Rudimentary technology that does just this has been around for a while as assistive devices for blind people [0] albeit only rudimentary shapes and movement can be portrayed.
I believe ultimately the most effective technology will be that which integrates directly with the visual cortex, which I read somewhere has neurons arranged somewhat isomorphically with the visual field [1]
I think this should be sufficient to stimulate both the higher cognitive and the more reflexive, automatic visual systems [2], though IANAN and really haven't looked at this area in depth for a number of years.
To get into the realms of pure science fiction about it [3], this could even be done by way of the pineal gland, which has some optical features that are largely unused. Perhaps it was put there like that for a reason ...
Such interfaces might be closer than you think. An interface that stimulated the processing layer that detects complete objects (e.g. bananas) was used in a research setting to provide vision to blind people:
It had bad resolution, required those people to learn how to see using it, and had biological problems (wires exiting the skull); all these do not seem to be fundamental problems, but ones that require incremental development.
I think transcranial stimulation could be an interesting possibility for exploration. Current devices are crude therapeutic tools, but if TMS could be more finely targeted, you might be able to get around the "wiring" problems. I think, as you mention, humans learning to "see" might be part of the solution too -- meeting the machine halfway, as it were.
Sadly we are still very far away from understanding the human brain enough to be able to connect and communicate with it directly, so while that would be great, in the next 20 years it is more feasible that we'll just keep polishing the existing idea of screens in front of your eyes.
Agreed. Considering the amount of processing and information loss that happens in the optic nerve (in people with healthy vision), you could save yourself a lot of work and get a higher-fidelity result (potentially indistinguishable from reality).
I expect it would be some kind of merge of the two (without severing the optic nerve, that is) -- neurons work by accumulation, so you would effectively have the two realities summed, which would be neat for augmented reality. To fully immerse, you could close your eyes or put on dark glasses. A friend told me this is how visual hallucinations on LSD work ...
Why do you think the human brain would be useful for rendering? The brain processes input from the eyes, it doesn't render anything. The scene of reality is already rendered, we only need to look at it to experience it. VR requires building and rendering a whole new reality; the brain has evolved to experience an existing one.
>The brain processes input from the eyes, it doesn't render anything. The scene of reality is already rendered, we only need to look at it to experience it.
That approach is flawed. What if the imagery needed to "draw" something doesn't already exist? If somebody had never seen an alien, they could not envision such a scene as vividly.
At first I thought the idea was pretty ridiculous, but now that I've given it a bit more thought, I'm beginning to think having to carry around a backpack like this might not be such a bad compromise between the traditional stationary wired VR setup and the holy grail of fully wireless VR.
I think many have misinterpreted what he was saying, especially if they haven't read the actual article. Nvidia's CEO is saying that VR won't be a "completely solved problem" for at least another 20 years. And that's very much true and has already been known by VR enthusiasts.
Carmack was even saying stuff like you need a 12k or 16k display to have the "ideal" resolution in VR, years ago. Well, we're not going to get those kinds of displays in VR headsets anytime soon, and even if we do, that's orders of magnitude more performance that's needed compared to what we're using in today's PCs.
On top of that, you also need "virtual reality" to be as close as possible to actual reality in terms of graphics fidelity, so that's a few orders of magnitude more in terms of performance needed. If you want "Matrix-style VR", then yeah, that won't be achieved for at least another 20 years. The headsets will also have to become "glasses-like" to increase adoption and make them more convenient, which is going to take at least another decade, too.
All of that said, while I do think the 1440p resolution is way too low, and probably even 4k won't be the "optimal" resolution for VR, I do think VR will start becoming "mainstream" within the next 5 years. I think many took his comments to mean that VR won't be mainstream for another 20 years, but that's not what he was saying.
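To put rough arithmetic behind the "orders of magnitude" resolution point above (treating "16k" loosely as roughly 16,000 x 9,000 pixels per eye, which is my assumption, not Carmack's exact figure):

    # Pixels per frame (illustrative numbers; per-eye panels counted twice for stereo).
    displays = {
        "1080p monitor":                1920 * 1080,
        "Rift/Vive (2016), both eyes":  2 * 1080 * 1200,
        "4K per eye, both eyes":        2 * 3840 * 2160,
        "'16k' per eye, both eyes":     2 * 16000 * 9000,
    }
    base = displays["1080p monitor"]
    for name, px in displays.items():
        print(f"{name:>28}: {px / 1e6:6.1f} MP  ({px / base:5.1f}x a 1080p frame)")
    # And VR wants ~90 Hz instead of 60 Hz, so multiply the gap by another 1.5x.

That's over a hundred 1080p frames' worth of pixels, every 11 ms, which is why nobody expects this to be "solved" on near-term hardware without tricks like foveated rendering.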
How many years from John Logie Baird's first invention did it take to get TV right? Mainstream? Colour? Picture & sound quality? Ditching the cumbersome CRT ... the best part of a hundred years, but people were ready to use it long before that, and to pay big wodges of cash for it too. I'd say there'll be a big market for sub-optimal VR :)
I wonder why Huang did not mention what is -- in my opinion -- the main issue with VR: the fact that many people (including myself) cannot use these headsets for longer than, say, 10 minutes without getting dizzy and sick.
Or are these the consequences of those issues he mentioned - like "the physical worlds do not behave according to the laws of physics"?
Have you tried experiences where you do all the moving yourself, like most Vive games? I never had terrible VR sickness, but when I used the DK1 I could feel my body temperature change fairly quickly and would then need to breathe carefully to stay OK. In the Vive, where I'm controlling the movement, I have basically no noticeable sickness. I know that's not everyone's experience, but if you haven't tried room-scale VR, I recommend it.
I can second that. I don't know of anyone[1] who's gotten VR sickness from a room scale Vive game. Only doing teleportation and 1:1 tracked motion has solved the nausea problem for essentially everyone.
[1] Including my partner who gets nauseous from even short car rides, to say nothing of boats, airplanes, or the DK1.
I am not an expert in VR, but I know one of the main causes of sickness from VR headsets is the latency between the user's head movement and the simulated camera movement. Basically, there is lag between when you move your head in real life to when the VR headset displays that new view angle to your eyes.
Reducing this latency is an ongoing effort, and companies like Nvidia are definitely in that space.
I don't think it really is. I've had the oculus for a couple of weeks and the only thing that's come close to making me sick has been a wreck while playing Project Cars. I'm fine even with minecrift and ethan carter without comfort controls. There are a few sorts of movements that make me a tiny bit queasy (like moving backwards and then forwards quickly), but they're pretty easy to avoid doing.
I've sailed a bit, and seasickness is an unpredictable issue that affects many people, yet people can and do eventually power through it. And it comes back if you're on land for a while. Some folks say it goes away permanently if you sail enough. I would imagine VR seasickness will have plenty of old wives' tales handed down.
Socially/culturally yet another thing separating hard core gamers from the general population is probably not good. On the other hand, any future hard core VR gamer being able to jump on a real life boat and not puke is probably very good for the boating sports.
Another interesting cultural point is like many other people I don't get seasickness nausea even under pretty bad sea states until someone else pukes, then I start feeling queasy. I suspect this will be an issue for VR gamers at LAN parties. The first guy to puke at a LAN party who makes everyone else queasy is going to be a meme in 2030. I would imagine playing sound drops of people vomiting over teamspeak will be considered VR cheating, and likewise a soundboard app to generate vomiting sounds will be a cheap source of money.
Much as energy drinks are identified with hard-core gamers, it's not unlikely scopolamine seasickness patches will be a thing in the gaming scene. Along with fake herbal preparations that don't work, people hiding the patch to pretend they're elite and seasickness-proof, etc. And of course the patch has some interesting nasty side effects which will likely become part of gamer culture. I predict scopolamine patches will be sold decorated up like energy drinks.
Anyway if you're looking for some way to "tag along" with VR as a startup, look into anti-seasickness technology.
I still think interaction is the major shortcoming and probably will be for the foreseeable future. Most of this seems to be a technology-psychology hybrid. I want to grab a virtual coffee mug with sufficient feedback so I don't grab through it, feel the weight of it etc.
Another part is finding good interaction paradigms which I suspect will fall out of better technology being available and in more hands (easy to prototype interactions). Wide spread first gen VR-Headsets and primitive interaction means (controller) will probably be good enough to drive that.
Can't remember where I read this, but the idea is that the brain compresses a vast amount of data down to a small amount of meaningful information. VR's current approach is to meticulously spoof that vast datafeed in such a way that the result gets compressed down to something that seems meaningful. Which is a lot of work. Why not go in post-compression and merely spoof the meaningful bits? VR's job could be a lot easier.
I don't know how it would play out, but speculating, you might render a low-poly monochromatic "skeleton" scene and then simply suggest to the brain ways to flesh it out with feeling and texture. Show the user photos of war-torn Europe in 1947 beforehand for example. Even if it means direct neural interfaces, considering that we're 20 years out using the current approach, maybe that isn't so far fetched.
One way to do what you're suggesting is foveated rendering. If you have very fast and accurate and robust eye tracking, you can avoid rendering most of the pixels that lie in your peripheral vision (where your eyes have very low resolution). Future VR headsets will almost certainly need this.
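As a rough feel for the savings, here's a toy estimate of how much shading work a three-zone foveated renderer might do, treating the zones as concentric squares of the view. This is a simplification for illustration, not a model of any shipping renderer, and the zone sizes and scale factors are assumptions:

    def foveated_pixel_fraction(fov_deg=110.0, foveal_deg=5.0, mid_deg=20.0,
                                mid_scale=0.5, far_scale=0.25):
        """Crude estimate: fraction of full-resolution shading work done when
        the mid and far zones are rendered at reduced resolution scales."""
        full = fov_deg ** 2
        fovea = foveal_deg ** 2
        mid = mid_deg ** 2 - fovea
        far = full - mid - fovea
        shaded = fovea + mid * mid_scale ** 2 + far * far_scale ** 2
        return shaded / full

    print(f"~{foveated_pixel_fraction():.0%} of full-resolution shading work")

Even with these made-up numbers you end up shading on the order of 10% of the pixels, which is why eye tracking good enough to drive this is considered so valuable.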
> Even if it means direct neural interfaces, considering that we're 20 years out using the current approach, maybe that isn't so far fetched.
To be economically viable, the tech needs to be mass-sold directly to consumers. So it needs to be mostly safe, even when it has bugs, and usable by anyone, without months of "training your interface device and brain".
Even if there were some progress with "direct neural interfaces", without a breakthrough in nanotechnology I'd bet we're probably more than 20 years away from something usable outside of a hospital or a military lab, which have a different cost/risk/benefit equation from the "average joe" consumer...
And you probably don't realize it, but to get that "spoofing" part right, you might have MUCH HARDER problems to solve than building an ultra-high-res, ultra-low-power, ultra-low-latency mobile VR set...
This is the premise of a 90's sci-fi novel about VR (https://en.wikipedia.org/wiki/Labyrinth_of_Reflections). In the novel, a character discovers a graphical sequence that puts people into a hypnotic state in which they can be shown a low-res virtual environment which their brain then fleshes out.
Because ultimately, things are compressed into a vivid narrative, or compressed even farther into a lesson learned, and we've had the technology to do that for millennia. The purpose of VR is to suck away money from investors by teasing them with a gravy train of young men sitting around in headsets. Like 3D TV, except less likely. Stories and lessons won't work for that.
I always wondered if you could use adversarial neural networks to find images that the brain recognizes as something else, just like you can fool NNs. You could use this to find low-poly representations of your objects that are indistinguishable from the real ones. Probably you'd have to calibrate this for every user though.
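A toy sketch of that adversarial idea, using a stand-in "perceptual model" and simple random search instead of a real network (or a real human observer). Everything here is illustrative; the real version would optimize mesh geometry against a learned or measured perceptual metric, and would indeed likely need per-user calibration:

    import numpy as np

    rng = np.random.default_rng(0)

    def perceptual_features(vertices, k=12):
        """Stand-in for a perception model: a crude shape summary
        (distance-from-center quantiles). The real idea would use a network."""
        centered = vertices - vertices.mean(axis=0)
        dists = np.linalg.norm(centered, axis=1)
        return np.quantile(dists, np.linspace(0, 1, k))

    def fit_low_poly(target_vertices, n_points=12, iters=2000, step=0.05):
        """Random-search for a low-poly point set whose 'perceived' features
        match the high-poly target -- the adversarial-example flavour of the idea."""
        target = perceptual_features(target_vertices)
        current = rng.normal(size=(n_points, 3))
        best_err = np.inf
        for _ in range(iters):
            candidate = current + rng.normal(scale=step, size=current.shape)
            err = np.linalg.norm(perceptual_features(candidate) - target)
            if err < best_err:
                best_err, current = err, candidate
        return current, best_err

    high_poly = rng.normal(size=(500, 3))   # pretend this is the detailed asset
    low_poly, err = fit_low_poly(high_poly)
    print(f"12-point proxy, feature error {err:.3f}")

The hard part, of course, is that the "perceptual model" would have to actually match human vision, which is exactly the calibration problem mentioned above.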
I suspect that both VR immersion and self-driving cars will always be "just around the corner." We want to believe in them, but in both cases there are obstacles that may be insurmountable or will at least force us to scale back our hopes. In the former (VR) they are technical/biological and in the latter (self-driving cars) they are technical/environmental.
Our visual systems are too tightly integrated with our motion systems and other sensory systems to be handled independently, just as traffic lanes are too exposed to other influences (unpredictable people and things) to allow vehicular autonomy.
In both cases we want to see the thing we want to do by itself but there are deep interdependencies.
The difference is that your brain fills in the gaps when there are very minor discrepancies between sensory systems.
That's why presence exists with current VR platforms.
Not interested in Mobile unless it is for autos? That is a pretty big declaration from one of (if not the) worlds biggest GPU manufacturers. Does this spell the end of the Shield or is that not considered mobile?
It seems apparent to me that the strengths of desktop chip manufacturers don't translate well to mobile. Both Intel and Nvidia have had tremendous problems competing against ARM / PowerVR. While Nvidia has done a better job, I imagine they are still losing money on that front. Maybe this is the time for them to acknowledge that and instead concentrate on markets where local data-parallel computation is actually useful and currently feasible (given power requirements). Maybe mobile will get there as well (real-time on-chip image recognition), but it's still years off, I think.
Well at least they have some performance benchmarks to show for [1] (I wasn't able to find a comprehensive sustained performance-per-watt comparison though, which would be much more interesting) and at least they sold more than half a billion dollars worth of Tegras in 2015. Looking at [2] that's at least substantially more than Intel sold when their mobile business crashed and burned in 2014 [3]. It's however not as much as Intel sold in 2013 and the last quarter of 2015 wasn't good for Nvidia either. Maybe it's fair to say that Nvidia is just learning the same lesson with a bit of delay, but at least they have a very strong pivot going after the car market with rapidly increasing compute requirements.
In recent years, Google Pixel C, Nexus 9, Nexus 7 (2012), that Project Tango tablet, and Nvidia's own Shield devices (TV, tablets, handheld). See: https://en.wikipedia.org/wiki/Tegra
Not the best-selling lineup, but a fair amount of developer mind-share thanks to Google's penchant for Nvidia SoCs and Nvidia's own outreach.
Delivering good chips for mobile use? Yes. Tegra chips have been showing good results in benchmarks.
But making money out of them? Not so much. The whole mobile chip industry became a race to the bottom (with prices) many years ago and most companies in the business are struggling as Samsung and Apple make their own chips and the rest of the market isn't worth a lot of money.
They had some wins with the Tegra line... Tegra 3 was pretty big in tablets and phones a few years back. Not sure how well it's going currently. But much better than mobile Atoms for sure.
Daydream and Gear VR cannot do spatial VR (where your head moves left a few inches) which severely limits the immersiveness you can achieve. It's like watching a 360 video on the Vive or Rift. It's not a good feeling. You can look around but it's completely obvious that your head is not modeled in the world.
Unless they invent/integrate tech like Project Tango into most phones and make it work for spatial location this is not a problem that can really be solved on mobile.
In 20 years we might be able to implement something that works with our visualization systems to produce images at will...at least for those of you without Aphantasia.
It shouldn't need to be /too/ huge with smart redirected walking algorithms. But still probably too big for the average person to have available to themselves privately.
This is where I feel most people in the VR space agree that "VR arcades" will become a lucrative industry.
30 years. At least if you're talking about mapping everything at the same scale, graphic fidelity, resolution, price, and field of view as in VR, then AR will always be 10 years away compared to VR, in my opinion. If you're going to make it much more limited than VR (like say only show a chess table in front of you, or some text on top of stuff, but not much else), and make it cost several times more, like the Hololens does right now, then yeah, I guess you could make AR seem almost as usable as VR, but it will never be a 1:1 comparison this way.
If you're doing 1:1 comparisons, then it should be about 10 years behind VR because you have to transpose the same world that you do in VR but over the real world.
“First of all, VR displays are a little too cumbersome. It has to be much more elegant, being connected by a wire has to be solved. The resolution has to be a lot higher. The physical worlds do not behave according to the laws of physics. The environment you’re in isn’t beautiful enough. We’re going to be solving this problem for the next 20 years.”
"Solving over 20 years" / "20 years from being solved" are very different things.
Some of this may be misdirection. VR is getting tremendous attention and will be very competitive. Look how he backed away from mobile. Complaining about wires and resolution? They're getting new revs every 12-18 months now. Look for hidef wireless hmds in 20-36 months, not 20 years. No physics? Look again. I think he wants to avoid competitive low margin businesses when better choices are available.
>Look for hidef wireless hmds in 20-36 months, not 20 years.
How do you suppose that will happen? On-device computing hardware or a wireless tether?
The latter appears to only be maybe possible with microwave radio, but requires constant line of sight which is a huge issue.
The former may be possible depending on your definition of hidef. IMO I doubt it though, much of what makes VR immersive is having a HMD lightweight/comfortable enough that you forget it's on. That's hard enough when the only hardware in the HMD is essentially screens and an IMU. I can't imagine the added weight of CPU/GPU/batteries/etc not making a significant enough impact on comfort to make it not worth it, in the next 20-36 months.
For reference, my standard of hidef VR right now, not in 20-36 months, is Vive level resolution/FOV/tracking accuracy and latency; I wouldn't put GearVR in that group, or any platform without submillimeter positional tracking for that matter.
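For a sense of why the wireless-tether option is hard, here's the rough bandwidth arithmetic for uncompressed video at Vive-class resolution (the panel numbers are the 2016 Vive spec; the rest is illustration):

    # Uncompressed video bandwidth for a Vive-class HMD (2160x1200 total, 90 Hz).
    width, height, refresh_hz, bits_per_pixel = 2160, 1200, 90, 24
    gbps = width * height * refresh_hz * bits_per_pixel / 1e9
    print(f"uncompressed: ~{gbps:.1f} Gbit/s")   # ~5.6 Gbit/s

That's well beyond what 802.11ac delivers in practice, which is why the proposals lean on 60 GHz (WiGig-style) links or aggressive compression, and both of those bring their own line-of-sight or latency trade-offs.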
Ha, 20 years. I'd guess more around 100, and graphics card companies, unless they pivot, won't really be a part of it. If we're talking about true VR, it's the replacement of reality, e.g. piping experience directly into your neurons. For instance, how are we going to augment smell or touch with a headset and a video card?
Sensory augmentation will probably start with dream augmenting tech (headsets that are able to give you "good" dreams by learning your neural firing patterns), then as delivery systems, nanotech, processing and brain knowledge gets better, we'll see it evolve into waking experience augmentation.
OK, yes, that is the ultimate VR destination, but like bradhe said, that's not what anyone's talking about. Like you said, that's more like 100 years off (I think it's a lot longer before the tech is really all there AND people are OK with the seriously invasive surgery that will be required, splicing the optic nerves and such).
But it's more reasonable to think we'll at least get a near-perfect audiovisual virtual reality headset in 20 years. If the numbers you're throwing around here are accurate, then there will be an 80-year stagnation in VR immersiveness, when the headsets are great but anything motion-related feels fake, and computer/neural interfaces are still limited to prosthetics, or at least out of reach for consumers and not powerful enough for VR.
80 years is a lot longer than TVs have been around, there's plenty of time for graphics card companies to rule the world.
And then you have the fundamental design problems of VR, where it blocks out your environmental awareness, causing people to crash into walls, leading to liability issues: https://imgur.com/umYTJP1
The first generation VR also faced the same liability issues. Specifically, the Atari Jaguar VR was cancelled because of this.
I own a Vive and have not once crashed into a wall. The software knows where your physical boundaries are and warns you when you're too close by fading in a virtual wall (sort of looks like a blue mesh).
> Also, how do you make something like Call of Duty when you're limited in your movement by walls?
You don't. Many existing game genres don't map well to VR in a direct port of existing mechanics.
From a game developer's perspective that is one thing that makes it exciting. It's a blue ocean of new design possibilities. I'm an optimistic person but I believe we'll find new designs that are equally compelling to the most popular games on 2d screens. Different games for different platforms.
I'm sorry, but you don't know what you are talking about. I recorded the video/gif you referenced in your first post. While my one friend did slam into a wall, she only did so because she ran. If you walk the system will stop you from crossing the play space boundaries (pass-through camera and virtual walls), and in the many, many demos I've given in the space you see up there in the video, not a single person has had any problems.
And in case you hadn't noticed, VR is already a consumer product.
> VR will never make it as a consumer product.
>
> Ever.
Which it seems you are not qualified to give expert advice on. You might be right, but you might get fewer downvotes if you adjust your tone a bit; posting a single video and then jumping right to this conclusion also comes off as a bit lazy or something :-)
VR is already a consumer product. It is however very niche, and personally I expect it to remain so for a while yet.
I don't think liability is going to be a big problem though. People were destroying TVs with Wii remotes all the time when it came out, but the Wii was still a huge consumer success.
I would say the bigger problems are to do with the fact that for room scale VR you need a dedicated space which lots of people simply don't have the room for. The aspect of disconnection from your environment is a problem, but I believe it's mainly how it affects social interaction that is problematic rather than issues to do with colliding with things. The chaperone system in the Vive is pretty good, and if you're playing a seated game in the Rift, you're not crashing into things anyway.
Personally I think that the most sensible arrangement for VR is coworking spaces with large fully tracked VR areas for groups and only enthusiasts would have them at home, but my understanding is that Vive and Rift are selling pretty well, so it's quite possible that I'm wrong on that.
I used first-gen VR. By this I mean I programmed using BodyElectric on Mac and SGI Onyx systems in the early 1990s.
There was a ton of innovation. But frame rates, even on half-million-dollar hardware, had to be in the 30 Hz range. Knowing what we know now, there was no way for that technology to take off, even for those with very strong stomachs.
The minimal technology package to do a good job has only very recently been available for almost any money. I think it's too early to call it as a 'never'.
The critique you are responding to has nothing to do with computing power, or really any sort of linear extrapolation from where we are, yet you frame it as a question of frame rates and "minimal technology package"? I'm not sure I agree with the OP, but you're surely missing their point.
I used those early systems, definitely ones better than the Jaguar at least, and in the same year it was released. And, I'm saying anyone who thinks the Jaguar's failure means modern VR is doomed is missing some super basic things about the threshold at which VR is compelling.
And, I'm saying we are just over that threshold, sometimes, with some experiences, and that there was never a snowball's chance in hell that any kind of compelling VR experience could have been had in the early '90s.
If the Jaguar had been compelling, the liability issues would have been sorted, full stop.
Anecdotally, I've found even my 70+ year old father-in-law responds well to Chaperone bounds in the Vive, he 'gets' them immediately. A friend who is partially disabled is also successful at using the Vive, despite some mobility issues. None of that (or the attendant safety) matters, though, if the tech isn't compelling.
The possible failure of active VR (controlling something by walking around) due to space issues doesn't rule out VR as an immersive display technology. You can have one without the other.
I do agree that there are going to be liability issues with room-scale vr. I think someone's child or pet is going to be severely injured, because chaperone doesn't account for random objects walking into the play space. I don't think it's going to kill VR as a consumer product, though. The technology is so amazing, it's going to succeed despite those kinds of problems.
You're talking about room-scale VR, which I think you have confused with VR in general. Environmental awareness doesn't matter if you're sitting down and holding a controller.