Hacker News

So many questions. I can understand light reflecting into the eye from a microdisplay (using the same principle as a car HUD), but are they actually creating opaque imagery as well? How is that physically possible?

Then there's 3D spatial interaction: In my experience with Leap Motion, Kinect, and other competing tech, the level of accuracy still limits interactions to broad gestures. If they've made a big enough leap in this domain to enable precise object manipulation, that's a major achievement on its own.




Maybe it is just a two-layer panel: first layer hiding the background ...

http://gd3.alicdn.com/imgextra/i3/1069821249/TB2UIWkbXXXXXXZ...

... and the second layer glowing to show objects.

http://img1.mydrivers.com/img/20140710/c72872d7ae4242b5965e1...


I'm not sure they're using that second layer -- is there space to put optics behind that layer to focus it into the eye?


The WIRED article seems to imply that this device has outwardly-facing cameras that use Kinect-like technology to track the operator's hands, which is probably how they are able to let you interact with the projections without using some kind of wand or controller.

I could see some "high-precision" gloves being an optional accessory for this that would include some kind of tracking markings to allow even more precise control (maybe for medical applications or something).


Not necessarily; they are likely using the glove-less Handpose technology that Microsoft Research developed.

http://research.microsoft.com/apps/video/default.aspx?id=230...


I would almost imagine gloves like this would be a requirement for precise control. The Kinect had seemingly pretty decent limb sensing from the few minutes I played with it (it was able to accurately model the bones in my fingers moving). However, it had an advantage in that it was a few feet away, viewing you straight-on.

This device will be looking nearly straight down instead, and it seems to me that your limbs & fingers will often occlude what is behind them. I doubt the twin cameras used to sense depth would be far enough apart to always see, for example, your fingers behind your other arm.


If you want to see similar technology that's being used now, check out the Leap Motion being used with the Oculus Rift. They mount it on the front. I've personally only got experience with the Rift itself (I have a DK2), but I've heard good things.

I think Oculus has it right, though, in that any HMD that does positional tracking needs super low latency to feel natural. Should be a little less problematic since the whole world wouldn't lag, but I'd be disappointed if the virtual overlay had perceptible lag after using some of the better experiences on the rift.


As I understand it, the light is not merely being projected or reflected, but dispersed on a coordinate system, so the display actually illuminates where it was transparent before. That would interfere with natural light coming through (the glass is tinted as well), allowing the 'hologram' to obscure the real world. I could be wrong, though.


Unless there is some curious property of light I don't understand (and given my perplexity at radial polarization, there may well be), there's no way that external light coming into the glasses can be diminished by internal light emitted by the glasses.

For instance, if you're looking at a white wall in the real world, there's no way to render a black shape in front of it. You can only add luminance to it, in the same way a video projector can only add luminance to the screen it's projecting an image onto.
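The additive limitation above can be illustrated with a toy model (just a sketch of the physics, not any actual display's optics): the perceived luminance is the real-world background plus whatever the display emits, so no pixel can ever end up darker than the background behind it.

```python
# Toy model of a purely additive see-through display (projector-style).
# Perceived luminance = background + emitted light, clamped at the maximum.
# Emitting nothing ("drawing black") leaves the background unchanged.

def perceive_additive(background, emitted, max_lum=1.0):
    """Per-pixel luminance in [0, 1]: emission can only add light."""
    return [min(b + e, max_lum) for b, e in zip(background, emitted)]

white_wall = [0.9, 0.9, 0.9, 0.9]    # bright real-world background
black_shape = [0.0, 0.0, 0.0, 0.0]   # attempt to render black: emit nothing

print(perceive_additive(white_wall, black_shape))  # → [0.9, 0.9, 0.9, 0.9]
```

The "black shape" is invisible: every output pixel is at least as bright as the wall, which is exactly the projector analogy in the comment.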


If the projection surface in front of the eye is also an LCD, it's possible to block out part of the background at the same time you project something onto it. I don't know what happens in this particular product, but from the Mars demo, it sounds like they can do some display of dark objects:

"The sun shines brightly over the rover, creating short black shadows on the ground beneath its legs."

It's possible that they are just getting that effect by making the Mars surface bright, but they could also be actively blocking light from the shadow regions. We'll have to wait for more details.


Certainly you can only add luminance, but consider that the goggles themselves are tinted, and the brightness of a display an inch from your eye will likely be far higher than the light bouncing off the wall. So while you can't "render" black, you should be able to simulate the darker part of the spectrum using negative space. That's not ideal, of course, but it's something.


LCD panels work by filtering out light emitted by a near-white backlight. If they are bouncing the incoming light around, they could be running it through such a panel to dynamically reduce light by color. They could then selectively add light using the existing backlighting setup. It's at least theoretically possible, and I'm hoping that they've actually accomplished something like it.
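That attenuate-then-emit idea can be sketched as a simple per-pixel model (a hypothetical illustration, not a description of the actual hardware): the LCD layer multiplies incoming light by a transmittance in [0, 1], and an emissive layer then adds light on top, so a pixel can come out darker than the scene behind it.

```python
# Toy model of a hypothetical attenuate-then-emit see-through display:
# perceived = ambient * transmittance + emission, clamped at the maximum.
# Unlike a purely additive display, this can render pixels darker than
# the background by blocking light before (optionally) re-emitting.

def perceive(ambient, transmittance, emission, max_lum=1.0):
    return [min(a * t + e, max_lum)
            for a, t, e in zip(ambient, transmittance, emission)]

white_wall = [0.9, 0.9, 0.9]
mask = [1.0, 0.0, 1.0]    # LCD fully blocks the middle pixel
glow = [0.0, 0.25, 0.0]   # then emits a dim gray in its place

print(perceive(white_wall, mask, glow))  # → [0.9, 0.25, 0.9]
```

The middle pixel reads 0.25 against a 0.9 wall, i.e. a dark virtual object over a bright background, which the additive-only model cannot produce.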


This is correct. That's a big part of what Magic Leap is supposedly working on, being able to black out the background so that objects don't have the ghostly hologram look.


Yet the demo videos show some virtual objects that are darker than their background. That may be vaporware. So far, there seem to be no images online actually taken through the device. Has anyone seen any?

This matters. If it can only brighten things, it can only overlay bright things on top of the real world, which is what Google Glass did. Fine detail won't show up unless the background is very dark or very uniform.

If you look carefully at Microsoft's pictures, the backgrounds are subdued gray, black or brown, and free of glare. The press was forbidden to take pictures of or through the device, and their cameras and phones were confiscated for the demos. Microsoft used custom-built rooms for the demos, giving them total control over the contrast and lighting situation.

It could still work, but it's probably not going to look as good in the real world as it does in the demos.


If they're making it opaque, I'm imagining they're doing it the way I've wanted to build a transparent monitor: with a second pass-through/reflective LCD just after where the microdisplay is.

A panel like the one on the Pebble watch would allow you to selectively let light through from the outside, or make it reflective and show the microdisplay instead.


In the picture of the marscape in the article, you can faintly see an ordinary room overlaid on the marscape (most visible in the upper-left part of the graphic). I took that to possibly mean that the holograms were transparent, not opaque.

Or maybe they're just trying to indicate the unreality of the marscape.


It's marketing: an artistic rendition, a mockup of what it's supposed to look like in 3 years. Just like Project Natal had no lag and super high resolution.

Or do you believe they somehow manage to project full HD per eye with perfect tracking?


It may be set up like this mock-up: http://i.imgur.com/wD9b189.png

The OLED display faces away from the eyes, with the image bouncing back from a secondary surface.


The demo video below specifically shows translucent as well as opaque imagery. Watch when the woman interacting with the real motorcycle extends its height:

http://www.microsoft.com/microsoft-hololens/en-us

I'm skeptical we'll have anything remotely this usable or practical in our hands in the next 2 years.




