Oculus Research to Present Focal Surface Display Discovery at SIGGRAPH (oculus.com)
212 points by srinathrajaram on May 17, 2017 | 101 comments



VR sickness is primarily caused by latency, i.e. you move your head, the image takes a few milliseconds to respond, and you feel dizzy. But there are other types of VR sickness, like the inability to focus on an object. This research improves your ability to focus on objects at different depths, so your vision is less blurry. So yes, this research does help eliminate nausea in VR. To say otherwise is misleading.


Primarily?

I'd say it's primarily caused by a disconnect between what the human vestibular system (your sense of balance and spatial orientation) is telling the brain your body is doing vs the acceleration forces that your body is feeling vs what your eyes are telling you about what the body is doing.

Or in other words - differences in movement/locomotion your eyes are seeing in HMD vs what your other senses are experiencing.

Which probably would include latency as a subset.


The term in the medical literature you are looking for is 'Simulator Sickness'. There are two main theories on its occurrence, one of which involves issues with the vestibular system. Wikipedia has a good introduction here: https://en.wikipedia.org/wiki/Simulator_sickness

To note, simulator sickness is not that new; we have been grappling with it for ~65 years, since the first flight simulators. Despite the massive funding that the DoD has at its fingertips, we have not found a cure for it, or there has not been a lot of work done to find a cure. However, it seems that more time in the simulator does help, though it may then hinder actual flight performance. Also, the more experienced pilots had a higher occurrence of it. Again, it's not that well understood.


Mayo Clinic has some recent work on hooking electrodes up to your vestibular system: http://newsnetwork.mayoclinic.org/discussion/mayo-clinic-and...


Wow! Now that is some pretty cool stuff! Besides the VR issues, this has some really good applications to things like vertigo and Meniere's disease (unfortunately one of the 'suicide diseases'). Maybe this really will help people!

https://en.wikipedia.org/wiki/M%C3%A9ni%C3%A8re%27s_disease#...


I'm slightly skeptical at the moment because there was a lot of buzz at the time of the press release, and then it's total silence for more than a year now. I tried to find any sort of hands on commentary to confirm that it actually works and exists, but no luck.

Still, if it were a total scam I wouldn't think the Mayo Clinic would put their name on it.


Sometime in either the late 1990s or early 2000s (don't recall) there was a company that came out with a vestibular stimulator explicitly for VR applications. They apparently sold units for game developers to use, along with a Windows API (hooked into Direct3D or DirectX - or something like that). It wasn't cheap - and ultimately they came into the market at the wrong time (dotcom crunch, and people didn't want VR stuff at the time anyway - the first VR winter was in full swing at that point).


The same phenomenon (visual data conflicting with kinesthetic senses) causes vertigo, a common malady suffered by pilots flying through cloud banks.

It's a major hurdle to overcome.


>I'd say it's primarily caused by a disconnect between what the human vestibular system (your sense of balance and spatial orientation) is telling the brain your body is doing vs the acceleration forces that your body is feeling vs what your eyes are telling you about what the body is doing.

The vestibular system has a prodigious amount of control over your life. I've got hearing damage and occasionally get vertigo spells (I carry dramamine in my wallet for emergencies). Every once in a while my balance sense just begins to swing around wildly. Sounds kind of cute. It will put you on your ass for 24-48 hours with zero possibility of doing anything. Even drinking water is hard. Without drugs the nausea just becomes your entire world. You think about nothing but how much it sucks. You can't eat, move, sleep or speak.

So... yeah. Motion sickness is a very real thing. I think (based on zero evidence) that everybody has a different threshold but experiences the same effects past that. If you're lucky, your threshold is faster than you can move your head. If you aren't, chug dramamine and coffee and hope they invent something that turns off your sense of balance without making you delirious.


Not sure what your pathology is, but you might want to step back from coffee and caffeine in general, as it can be a trigger for attacks in some cases (high-sodium foods, chocolate and alcohol can be others).


I believe it was someone on the Oculus team (Carmack, maybe) who mentioned what I call the horizon test: does the horizon (appear to) remain level when you tilt your head down to your shoulder?

It's a pretty demanding latency test, and I can also see how it ties into balance etc (there are some similarities with a tilting boat deck, for example).

That said, I think simulator sickness and sickness from just playing on a big screen is complicated - maybe some will never enjoy it.

Personally I'm rather sensitive to motion sickness from most fps games - but I enjoy playing elite:dangerous with a vr headset.



Most software for modern roomscale systems (i.e. Oculus and Vive) takes that into account - movement is mapped 1 to 1 with a person's movement. Travel happens through teleportation, which doesn't cause motion sickness.

Rollercoasters and smooth/FPS-style movement are rarely used - and even if they are, the interface is designed to minimise the effect you mentioned - e.g. the user is put into a virtual cockpit, which makes the brain feel it's the terrain that's moving and not the person.


Sure, but it's a workaround because our technology is not there yet.


The vestibular system isn't the only thing involved in your proprioception; other parts of the body contribute too. For example, the neck is definitely involved (which can be a factor in whiplash injuries or cervical vertigo), so even if the vestibular system were solved, it might not be enough.


Yes, to me it's related to seasickness, just the opposite: there is a conflict between what you see and what your inner ear senses.


> So yes, this research does help eliminate nausea in VR. To say otherwise is misleading.

Are you responding to someone? Because this doesn't really make sense to me. Can you clarify?


I agree completely. My personal VR sickness isn't like vertigo or sea sickness, which both afflict me in real life acutely.

Maybe it's due to a lifetime of playing video games (3D since I was 10), but even VR games with weird detached cameras, or games not designed for VR which don't really respond to head movement, are totally fine for me. There is a cognitive adjustment I make which is manageable and does not make me feel sick.

Having used Gear VR with a scratched phone screen for a year, and now a real Oculus for a few months, I am fairly certain that my particular sickness comes from eye strain: constantly focusing too close, trying to mentally blur obvious pixel artifacts, or struggling to focus through fogged-up parts of the lens. That strain gives me a headache, which eventually leads to nausea.


There's a big difference between "will help reduce nausea" and "will eliminate nausea".


I had VR sickness bad, but it is the same sickness I felt playing portal on my PC the first time, and watching cloverfield. I definitely think it is more for me around the disconnect between visual and body motion.


There is a nice table in the paper which compares the capabilities of the different technologies trying to solve the DOF problem in HMDs.

http://i.imgur.com/8rdoeS3.png


Having a display that can support ocular accommodation (selective focus by the eye) is an important research development, though it will most likely not change the viewer's experience in a radical way.

Practical electronic 3D displays require bandwidth reduction, both data bandwidth for transmission and optical bandwidth to create practical or lower-cost optical modulators. The goal is to use bandwidth reduction techniques that produce little or no visual artifacts. Some of the techniques used are the same as in 2D (spatial discretization, time multiplexing, compression), while others are unique to 3D (view discretization, limits on view angle, elimination of coherence).

Head-mounted displays are basically descendants of stereoscopes, the first 3D displays developed by Wheatstone in 1838. Wheatstone's amazing discovery was that you can throw a huge amount of information about the world away, provide just two images from two viewpoints, project them out to infinity in front of a viewer's two eyes using two lenses/light paths, and a vivid sense of 3D is evoked. That's an incredible amount of information reduction from real life.

In the traditional stereoscope, accommodation is thrown away, mostly because it's really hard to recreate electro-mechanically, but also because we're generally fine with it. Accommodation isn't effective for distant objects (or, as we get older and lose our ability to accommodate, for even larger depth ranges), so we likely have neural circuitry to discount imperfect accommodation cues. One of the reasons we turn on bright lights when doing detailed work is to stop our eyes down and increase our depth of field, reducing the need for accommodation.
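
As a rough illustration of why stopping down helps (Python; the pupil sizes and the small-angle blur approximation are textbook values, not anything from the paper):

    import math

    def blur_arcmin(pupil_m, defocus_diopters):
        # small-angle approximation: angular blur (radians) ~ pupil diameter x defocus in diopters
        return math.degrees(pupil_m * defocus_diopters) * 60

    for pupil_mm in (2, 4, 8):  # bright light -> small pupil, dim light -> large pupil
        print(pupil_mm, "mm pupil, 1 D focus error:",
              round(blur_arcmin(pupil_mm / 1000, 1.0), 1), "arcmin of blur")

The smaller the pupil, the smaller the blur for the same focus error, which is exactly the increased depth of field described above.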

However, there have been perennial debates about the physiological impact of conflicting depth cues involving accommodation, and those debates are more interesting in VR where objects can be (virtually) very close to the viewer and the viewer can dynamically change their physical relationship with virtual objects.

Until you have a light modulator that can let you experiment with selectively modulating accommodation within a scene, you can't provide real data on how important accommodation (even approximate accommodation) is for a particular application. Can't wait to see the studies.

We did some similar focal plane manipulation in holographic video more than a decade ago, for related reasons (see Fig 7):

https://www.researchgate.net/publication/255603167_Reconfigu...


I would argue it's an important but not the most important thing for VR/AR given what I've seen from consumer feedback. The most important based on real world talking with consumers is a tossup between FOV and latency.


This is a much better (IMO) approach by Nvidia: http://www.fudzilla.com/news/graphics/39762-nvidia-shows-off...


If I'm understanding things, that only has two focal planes. They mention the approach of just having several focal planes in the video.


Yes, that seems so. Must be for cost reasons. I initially thought the new system was like the one they demoed in 2013 (http://lightfield-forum.com/light-field-camera-prototypes/nv...), much more like the Lytro light field camera in approach. So yes, as already said, same idea as Oculus.


They time-multiplex the two planes (switching the displays on and off quickly). Amazingly, the brain interprets it as a 3rd, or 4th, or 5th plane!

This Facebook optic uses the same technique.


No that's not accurate. The brain may not be able to distinguish the two focal planes but it's just wrong to say "the brain interprets it as a 3rd, or 4th or 5th plane".

And this Facebook technique clearly doesn't use the same technique. Instead of a focal plane they have a focal surface that isn't restricted to a plane. It might not be noticeably better (who knows) but it's definitely not the same technique.


Would it be possible to detect the focal distance of the eye and change the entire focal depth of the display to keep it always in focus, similar to automated vision testing devices? It could then perform blurring of out of focus objects as a rendering step.


That wouldn't account for the fact that the lenses in your eyes are physically trying to focus on a different plane because of other cues in the scene. If you look at a close by object in VR (like say the gun you're holding) your eyes will automatically try to accommodate to render nearby objects sharp. By doing so they will actually make the gun blurrier, because it (like everything else) is rendered on a medium-distance plane, while your eyes are cued in to focus on a nearby object.

Or did I get that completely wrong?


That's why you'd need to have the rendering engine blur things that should not be in focus. From what I understand of how the medical systems work, you'd shine an IR light into the user's eye. The return from the light would allow you to measure the focal distance of the eye. You'd then adjust the lenses of the headset so that the display is exactly that focal distance away. The focal distance of the eyes would get passed to the renderer. The renderer would do proper focal-distance blurring of objects that are not at the correct focal distance. The main limitations would likely be measuring eye focal distance in realtime, having lenses respond to it in real time, and the possibility that the always-in-focus screen door effect would cause issues.
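
A very rough sketch of that control loop in Python-style pseudocode (every name here is a hypothetical stand-in for the IR autorefractor, the varifocal optics, the renderer and the panel; nothing below is a real API):

    def frame_loop(autorefractor, varifocal_lens, renderer, panel):
        # all four arguments are hypothetical stand-ins for the hardware/engine described above
        while True:
            focus_m = autorefractor.measure_focal_distance()  # IR-based estimate of where the eye is focused, metres
            varifocal_lens.set_distance(focus_m)              # move the optics so the display sits at that distance
            renderer.set_focus_distance(focus_m)              # tell the engine which plane should be sharp
            frame = renderer.render_with_depth_of_field()     # blur everything off that plane
            panel.present(frame)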


Oh, and accommodation really helps your eyes speed up focusing. Try focusing on things near and far with both eyes, and then only one eye. You can still do it with just one eye, but it goes much faster with 2.


I think you missed the part of the parent comment where it says "change the entire focal depth of the display to keep it always in focus". What you have described is how VR headsets currently work, and the entire problem the parent comment's proposed system addresses.


I'm not explaining very well then. Current VR headsets are only in focus at a single focal length. I'm describing a display that would magically be in focus at every possible focal length. The lenses would respond to your eye's current focal depth to keep the display always in focus, no matter how near or far your eyes are attempting to focus.


Crossed wires? I was replying to noio, not you. Your first post was quite clear enough. For what it's worth, I think it's quite a good idea.

Since my entire contribution to this thread seems to be to gently correct people who did not read the comment they're replying to carefully enough (a regrettably frequent occurrence I'm sure I'm guilty of myself from time to time), I would like to take this opportunity to exhort the HN community to please make it a habit to reread and attempt to understand a comment before replying. It's a luxury we don't have in spoken conversation, so let's make it count here.

Also, by the way, you can edit posts; it's better to do this than clutter up the thread by replying to yourself.


I think this might also help a bit with edges of the display not being in focus assuming you also had eye tracking. You could always be sure that the center of your gaze was in focus. You'd still have issues with the edges of vision being out of focus, asymmetrically at that.


I'm a bit skeptical of how much of a problem this really is. I have never noticed it in VR. Perhaps because:

1. Display resolution is still quite low, so really everything is blurry.

2. You will never be able to notice blurriness where you aren't focused anyway because you aren't looking there! Everything is always blurry in your peripheral vision.

3. Surely eye focus is a feedback system, like in cameras? I mean nobody has problems focusing on TVs because your eyes just magically change focal length until the image is sharp.

I am stereoblind so maybe it is a big problem for others.


Have you tried to focus on something that is under 1 foot from your eyes in VR? From my experience it is virtually impossible, and I think the cause is related to this problem (not 100% sure though).


I think for very close objects you need to have a very good match between your interpupillary distance, the interaxial distance of the HMD lenses and the interaxial distance of the virtual cameras.

Also, it depends on the HMD screen size; at some point parts of the object's image for each eye fall off the screen.


Try closing one eye to rule out bad alignment of the viewing frustums to your eyes' actual viewing positions and angles.


I might be an outlier, in that I don't experience motion sickness / vestibular mismatch from VR, but I suffer from this focal issue pretty severely.

When I'm in VR, I notice it slightly, but it's easy to get used to. However, as soon as I get out, I find it very difficult to focus normally again, usually for about 4x the amount of time I was in. This can make it difficult to do anything that benefits from depth perception, such as driving, walking on uneven terrain, etc.

If this technique can improve that side effect, it would make quite a difference for me personally. No one else I've talked to about VR has this same experience, so it might only affect a subset of the population.


There's an article about this here: https://developer.oculus.com/design/latest/concepts/bp_app_i...

I was reading it specifically in the context of problems with displaying text in VR.

Paraphrasing: in real life, your body uses two inputs to intuit distance: your eyes' focal distance and your eyes' relative angle. Focal distance is fixed inside the headset, so your body loses one of its depth cues. I suspect that contributes to the overall sense that seeing things in VR can get tiring unusually quickly.
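
A quick back-of-the-envelope illustration of how those two cues diverge in a fixed-focus headset (Python; the 63 mm interpupillary distance and the 2 m display focal distance are assumed round numbers, not figures from the article):

    import math

    IPD = 0.063          # assumed interpupillary distance, metres
    DISPLAY_FOCUS = 2.0  # assumed fixed focal distance of the headset optics, metres

    def vergence_deg(d):
        # angle between the two eyes' lines of sight for an object at distance d
        return math.degrees(2 * math.atan(IPD / (2 * d)))

    def accommodation_diopters(d):
        # focal demand on the eye's lens for an object at distance d
        return 1.0 / d

    for d in (0.3, 0.5, 1.0, 2.0, 10.0):
        print(f"object at {d:>4} m: vergence {vergence_deg(d):5.2f} deg, "
              f"eye wants {accommodation_diopters(d):.2f} D, "
              f"display forces {accommodation_diopters(DISPLAY_FOCUS):.2f} D")

The vergence cue keeps changing with the virtual object's distance while the focal cue stays pinned at the display's focal distance - the mismatch this research is trying to reduce.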


Rather than nausea, I think I've noticed an inability to focus on items away from the centre of vision (haven't used VR equipment in about 6 months, might be remembering things wrong); that sounds more like what this will address.


Why are holograms / light field displays not technically possible now? I would think we have bright and dense enough displays, and can shape the microlenses.


Because n^4 is nasty.

Full light field displays have to draw every ray, not just every location; that's two additional dimensions. Resolutions scale badly, and if you make any of the four dimensions lower resolution, you'll get ugly artifacts.

Microlenses aren't the limitation. Pixel density is, both on the physical manufacturing side and in refreshing/drawing.
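
For a sense of scale, a back-of-the-envelope count (Python; the spatial and angular resolutions are made-up round numbers, not anything Oculus has published):

    spatial = 2000 * 2000  # assumed per-eye spatial resolution
    angular = 100 * 100    # assumed angular samples (rays) per spatial position

    rays_per_eye = spatial * angular
    print(rays_per_eye)            # 40,000,000,000 rays for one eye, one frame
    print(rays_per_eye * 90 * 3)   # at 90 Hz with RGB: roughly 1.1e13 values per second

Dropping any one of the four dimensions to a coarser resolution only shaves a constant factor off this, and coarse angular sampling is exactly where the ugly artifacts come from.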


>> Because n^4 is nasty.

I keep thinking there will be a significant reduction in complexity there. The intensity of a pixel in a hologram is essentially an integral over all surfaces visible from that point. So imagine a rather complex formula applied for each surface for each pixel. Then imagine holographic bounding boxes that compress complex geometry into a few holograms of what's inside. This would reduce the n^4 back down, but the resolution required for holograms is still very very high. But we could use fancy GPUs to evaluate the integrals.

Just hand-waving thinking here...


You can compress most Earth light fields really well. That might be what you're intuiting. There's a ton of redundancy due to the mostly opaque nature of reality.

But compressing a light field and projecting one into an eye are totally different things. Your display needs to be capable of displaying all 3n^4 possible intensities at different times. Depending on the scene and where you're looking, you can get away with showing only a smaller handful of those (some megapixels) but the display still needs to be capable of displaying them all.

If your display is just fixed LEDs behind a microlens array then you still need n^4 resolution. Like, a megapixel per pixel. Most of the pixels can be off at any given time, but you'll need them all to display arbitrary scenes.


It's actually worse than that, for a near-eye display: unless you know where the person is focusing, you actually don't know where the redundancy is, so you have to draw the entire 4D raster. (If you know their focus distance, you can probably just draw a 2D image on the retina and be done -- you get that 4D->2D projection.)

Otherwise, the existence of those additional rays is what allows for focus accommodation: as you focus in different planes, it 'shuffles' the light around, to create sharp edges (where similar light rays line up) or blurry foreground/background (similar light rays strike different portions of the retina -- and if any are missing, there is a 'hole' in the blur disk).

Pedantic, perhaps... but these are the stakes.


This is one of the reasons why I'm actually quite bullish on handheld VR/AR. It has none of the optical challenges of a headset. You lose one hand, and give up some immersion, but in exchange you get all of the other benefits of VR/AR without any of the optical challenges and lots more performance headroom.

I think Google is on the right path with Tango-first. Didn't used to think so.


I propose the display is a proper hologram, and hence just a really high resolution coherent display. No, it doesn't exist yet that I'm aware of, but it'd be really high resolution 2D.


Since the eye is circularly symmetrical, isn't it just one additional dimension, you can call it distance or spread angle?

Items at infinity emit parallel rays; items very close to the eye emit rays with a large spread...


The image on the retina is not symmetrical, though. Try to exploit any apparent symmetries and you'll end up smearing/distorting/destroying the image you're trying to convey.


I guess I didn't explain it very well, or then the idea doesn't have merit, but I still think it's only n^3.


Four dimensions to specify a ray: two dimensions for direction, and two dimensions for where it passes through the surface (whether the surface is the display, or the viewer's eye). You could specify a ray with five dimensions (three spatial, two angular), but that's actually overdetermined; any ray with the same direction along the same line is identical, so it's a four-dimensional quantity. (Source: I do this for a living. But google 'plenoptic function' for more technical explanation.)

Those four dimensions are projected into two on the retina, but the exact projection is a function of where the person is focusing. (That's how we understand focus; by changing the lens parameters, we can focus at different depths in the scene.)
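
A minimal sketch of that 4D-to-2D projection, written as the usual shift-and-add synthetic refocus over a two-plane-parameterised light field L(u, v, s, t). The array shapes and the exact shift convention below are assumptions for illustration, not the parent's notation:

    import numpy as np

    def refocus(lightfield, alpha):
        # lightfield: shape (U, V, S, T) -- angular samples (u, v) by spatial samples (s, t)
        # alpha: relative focus depth (1.0 keeps the originally sampled focal plane)
        U, V, S, T = lightfield.shape
        out = np.zeros((S, T))
        for u in range(U):
            for v in range(V):
                # each angular sample is shifted in proportion to its offset from the
                # optical axis, then everything is averaged; changing alpha changes
                # which scene depth ends up aligned (sharp) after the sum
                du = int(round((u - U / 2) * (1 - 1 / alpha)))
                dv = int(round((v - V / 2) * (1 - 1 / alpha)))
                out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
        return out / (U * V)

Changing alpha is the software analogue of the eye changing its lens: the same 4D data collapses to different 2D images depending on where you "focus".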


Yes, but it doesn't matter if the ray is going up or down or left or right, as long as the angle from the normal is the same. This is because the lens in the eye is circular, and the image that results on the retina is the same.

So three dimensions: two for position and one for "amount of off-axisness".


Mostly because of bandwidth and physics. You have to get information into the display in a form that the display can convert to spatial/angular/wavefront information efficiently and with enough fidelity to be believable.

Any data compression method has to be compatible with the underlying display technology so you can bring the signal straight into the modulator. In addition, there's a wire problem: you can only bring so much signal into a light modulator because the signals interface through a surface, not a volume.

For these reasons and others, true holographic displays are simply not competitive at this time with stereoscope-like VR systems, which rely on tried-and-true, market-driven incoherent-light raster display technology.

People have been asking these kinds of "why not" questions about 3D and building various types of 3D display technology in garages and labs for almost two hundred years. With the exception of the discovery of optical holography, it's surprising how much is fundamentally unchanged.


The tech is available, but it's very expensive. Oculus has said that this technology is really a stop-gap that vastly improves focal points for the everyday consumer without breaking the bank. I'm betting this tech will get good mileage over the next 5-15 years as hardware catches up enough to render it unnecessary.



Ctrl+F "inner ear" - no results. To me, that's going to be the keystone to a functional VR experience. Until there's some relatively non-intrusive mechanism to fool the body's systems into playing along, I'm sorry, I don't think image resolution or refresh or FPS will solve the problem. They're all very important, sure, but I think the biology of the conundrum is the most challenging short term.


Shouldn't this be qualified by adding "beyond room-scale?"

Because when VR/IRL movement are the same, there's no need to fool the body's systems.


How do astronauts combat nausea in space? Practice. I personally don't expect a technical solution to this any time soon. We just have to acclimatize ourselves.


Not everyone is fit to be an astronaut. Not everyone is able to eliminate nausea through reasonable amounts of acclimatization.

The target audience for VR is everyone.


I know many people who can't play first-person games because of simulator sickness. That didn't stop the game industry.


To add to this, motion sickness from FPS-on-a-screen is inherently unfixable due to the lack-of-locomotion issue. But roomscale VR deals with that issue. VR done right could very well expand the FPS market.


Everyone? It's always going to be a niche thing. There's lots of people who can't play regular video games because of motion sickness, but the video game industry doesn't suffer.


You should replace 'always' with 'foreseeable future'; it's plain wrong otherwise.


> We just have to acclimatize ourselves

A lot of discomfort for relatively little (at least thus far) gain.


> We just have to acclimatize ourselves.

Aww. But I already had my scalpels out for DIY inner ear surgery. I just finished watching the YouTube howto and everything... :(


There are two solutions to this that don't involve any new tech:

1) Don't move your POV in VR.

2) Train your brain to interpret movement in VR as world movement, not body movement. If you can maintain a sense that you're not actually moving just because your eyes are showing a lot of stuff moving around you, then your vestibular system and vision system will be in accordance and you won't have any motion sickness.

Children who use VR from a young age will have an easier time of this, but adults can learn it too.


That's only really a problem if you're strapped to a driver's/pilot's seat or some other system where the game isn't mimicking your body's movement. The current solution for that is one of those 6 DoF motion simulators, which might have to be good enough.

People with really bad cases of motion sickness have problems playing games on a flat screen let alone in VR. I don't think we'll be able to do anything for them in the near future.


I think this is overstated (though without looking through the prototype, this is only speculation).

During normal, outside-of-headset vision, we focus naturally and quickly on whatever we're looking at. We don't spend time with our eyes consciously defocused on subject matter in our foveal view. So anything that's out of focus will tend to be in our peripheral view.

So this is a peripheral technology. I think everyone's still looking for the killer additional tech that will make VR perfect -- but it's not about one magic tech bullet, it's about ecosystems slowly growing, and content getting better. (The headsets are better than people think.)


"It may even let people who wear corrective lenses comfortably use VR without their glasses."

If just for this, it's a move in the right direction.


Yes! Many people cannot use the VR Headsets unless they wear contacts, and even then it feels quite unnatural.


Won't this also need gaze-tracking to be successful? In their video they described a manually moved camera.

Is this technology compatible with foveal rendering?


No, I think this technique presents differently-focused areas to the eye all at once (sort of like natural light, just at extremely coarse resolution) and eye tracking is not needed.


Sort of strange to describe it as a "discovery" - I'm sure a team of engineers with a variety of fields of expertise spent 1-3 years solving problems that led up to this. A "discovery" would seem to describe something that existed in the aether prior to their work - it seems to diminish the innovation and effort they put in.


Weird that I haven't noticed this yet when using VR. I wonder if I'll notice next time after watching this.


I guess whatever the faults of the paper, I like that they do have a product on the market and are demoing and publishing research. Magic Leap? Not so much.


Honestly, I found the paper critically lacking where they attempted to make reference or comparisons to virtual retinal displays. Saying that a VRD is functionally restricted to moderate FOV in comparison to the 120-degree FOV of the Rift - using only the embodiment of the deformable membrane mirror as reference - is ridiculous on its face.

Even a rough version of the deformable mirror AR VRD described by researchers at UNC Chapel Hill [1] accomplishes 100 degrees FOV with accommodation.

They went further with the Pinlight display, achieving 110 degrees in 2014 [2].

The technical limit, according to our own work, for VRD FOV is H: 200°, V: 140° (combined). So either they're ignoring work in the field intentionally because they don't want to do VRD, or they don't know about it. My guess would be the former.

[1]http://telepresence.web.unc.edu/research/dynamic-focus-augme...

[2]http://www.cs.unc.edu/%7Emaimone/media/pinlights_siggraph_20...

edit: I find this whole thing extremely frustrating. Facebook could throw 2 billion dollars at VRD tech and actually get to a working stable consumer grade system if they wanted to - everything is there for it. Why aren't they?


2 Billion? I'm ignorant on the topic, but maybe the route they are investigating is potentially cheaper?


Doubtful at scale...

I use 2B as an anchor based on the "official" Oculus acquisition.


> Why aren't they?

You can say that about any other multi-billion-dollar company, be it Google, Microsoft, Xerox, Canon, anything.

The short answer is that VR is not their core business.


Zuckerberg spent his entire keynote talking about AR - he effectively told everyone that they are a social AR company. They are investing the most into AR/VR and creating glasses etc...

How is that not what they are planning as their core business?


Might not be. It might just as well be a strategy to discourage others from pursuing the same thing while Facebook works things out eventually.

- Hey Mike this is VP John Smith, until yesterday we were interested in investing round A of $10 million in your VR startup but unfortunately we heard today that Zuckerberg plans to be deep in it, therefore I regret to inform you we will be unable to go with the round. Sorry and we wish you good luck!


I think I have about 20 of those email responses at this point!


There is a market for 5 VR systems, tops.


Because the requirements to do it well makes it costly, which in turn makes it a gimmick.


Why would it be a gimmick if it's costly? Or are you saying the proposed thing is gimmicky?


It won't get traction if it is costly, and without traction no one will bother developing more than demos and proofs of concept for it. And thus, for the mainstream, it won't have any real value.

Niche markets are another thing, but they're not lucrative enough for a big investment.


I think the problem is fun. All it would take for Oculus to win is a World of Warcraft type success that only runs on Oculus hardware. But such gems are rare -- usually one every few years. Almost every other game fades as quickly as it comes.


Well, on top of that ridiculously expensive VR headset you still need a $1500 PC. Even if a World of Warcraft-type success ran on any VR hardware, no one would even know.

Except for the dozen hard-core early adopters. World of Warcraft was only successful because of availability. Same with Minecraft - same with any imaginable success.

VR will either be available to lots of people, or good. Not both, not for a long time.


I mean the whole point of these companies is that they have the cash and clout to subsidize the future. So they could make it as cheap as they want.


I think the point is that the rift is already too expensive and adding more expensive components isn't going to help put more Oculus in people's hands.

Oculus needs to shrink their BoM.


Article headline is misleading. Eliminating nausea is not the primary benefit of this research and isn't even mentioned in the article. It may help a little with nausea in some cases but it won't eliminate it.


Thanks, we've reverted the submitted title “Oculus Research Presents Focal Surface Display. Will Eliminate Nausea in VR” to that of the article.


The intention was not to mislead. The linked paper in the article makes a reference to Vergence-Accommodation Conflict. So, in a sense, it is mentioned in the article.

Link to Paper: https://scontent.oculuscdn.com/v/t64.5771-25/11162698_189852...


No one has mentioned magic leap.


Nausea in VR is already a solved problem, at least for the case where you have a stationary camera. This is for giving people an extra method of depth perception, but one which isn't strictly necessary.


Saying nausea is solved in VR is like a casual observer saying space flight is solved after the first Wright brothers flight.

I say this as a VR enthusiast who is optimistic about the industry.

You can't say it's solved until an HMD can forcefully move your viewport in-HMD without triggering nausea caused by the real body not moving.


To be fair, he did say a stationary camera.


Curious to know why you say it is a 'solved' problem. The paper linked in the article claims to have a better solution.



