I played with the large-size version of Looking Glass at the AR in Action conference this January at MIT. It's really cool, and the coolest part is that the interaction is intuitive. You really can just touch a point on the hologram and have it react at that point.
As others have mentioned, it requires pushing quite a number of simultaneous views of the hologram through a hidpi display, so the effective resolution is not very high and the holograms look a bit fuzzy.
It is right now probably the best out-of-the-box way to interact with holograms, especially in a shared environment. HoloLens can't share holograms by default, and even if the app has implemented sharing, the holograms can't be touched. Meta glasses have some touchability thanks to their depth sensor, but again there's no easy shared way to interact with a hologram.
I think AR like Looking Glass is underrated; they were very smart to use natural interaction instead of gestures or a mouse/wand. That being said, I don't see it competing with AR glasses long term.
If it is stacked screens, I don't see any reason it would work horizontally only. I believe the viewing angle is still very limited. I'd love to use this for Skype chat or something.
If the lenticular lens is made of columns (as on those fake 3D postcards), then only horizontal shift is 3D-able. To display vertical parallax, the lenticular lenses could be hemispheres. However, to encode vertical 3D-ness you would need another 45 images for every 2 degrees off the horizontal, so that as your head bobs up and down it picks up the off-axis images. So that's 45 images x 45 vertical positions = 2025 images per frame.
These lenticular autostereoscopic displays have been around for a long time and never quite took off. There have even been 3D TVs (Philips) using this idea. I am not quite sure what is "new" here, apart from yet again abusing the "holographic" term (hint: it has zero to do with holograms or holography).
The major issues with these are the limited viewing angles and the enormous bandwidth needed both to render the individual points of view and to actually transfer them to the screen. Heck, a lot of computer games have trouble generating stereoscopic (i.e. 2-image) content at the 60 or 90 fps required by VR headsets such as the Rift or Vive these days. And these guys want to push 45 distinct images at 60 fps?
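To put some rough numbers on that (a back-of-the-envelope sketch; the 2560x1600 panel resolution here is purely an assumed figure for illustration):

```python
# Back-of-the-envelope bandwidth for 45 distinct views at 60 fps.
views = 45
fps = 60
width, height = 2560, 1600   # assumed panel resolution, illustration only
bytes_per_pixel = 3          # 24-bit RGB, uncompressed

# Naive upper bound: render every view at full panel resolution.
raw_bytes_per_sec = views * fps * width * height * bytes_per_pixel
print(f"rendered: {raw_bytes_per_sec / 1e9:.1f} GB/s")        # ~33 GB/s

# The physical link only carries one panel's worth of pixels per refresh,
# since the 45 views share the panel's pixels; rendering is the killer.
panel_bytes_per_sec = fps * width * height * bytes_per_pixel
print(f"display link: {panel_bytes_per_sec / 1e9:.2f} GB/s")  # ~0.74 GB/s
```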
Good luck with that, especially for that ridiculous price for a tiny screen.
This isn't quite the same as what you're talking about. The lenticular displays only vary in the x-dimension, but this display seems to work in both dimensions, which would probably create a noticeably better 3D effect.
Yes the problem with light-field displays (and cameras) is that they do need massive bandwidths.
But there are ways to mitigate it somewhat. It's possible to have a gaussian or random distribution of light rays so that you end up with only about 10 or so rays per pixel on average, instead of the 45 here.
But yes, expect to need a massive increase in bandwidth for light-field holographic displays. This includes VR headsets, where you could finally have a display without the limited field of view you get from current VR headsets. A VR headset can focus the light rays towards the range of each eye, which can also cut down on bandwidth.
Which is why I said you need to distribute the rays around randomly in a gaussian distribution. Don't just arrange them in perfect rows and columns - that's the worst array option that guarantees aliasing. You can shift each pixel (or subpixel) lens around slightly in a permanent pattern.
Anti-aliasing techniques are common elsewhere in 3-D graphics, and can be used just as well in light-fields.
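A minimal sketch of the jittered-pattern idea above, with the grid size and jitter scale as arbitrary illustrative choices:

```python
import numpy as np

# Give each lenslet a small, fixed pseudo-random offset instead of laying the
# ray directions out on a perfectly regular grid (the regular grid is what
# aliases).
rng = np.random.default_rng(seed=42)   # fixed seed -> a "permanent pattern"

grid_w, grid_h = 64, 64                # ray samples across the lens array
base_u, base_v = np.meshgrid(
    np.linspace(-1.0, 1.0, grid_w),
    np.linspace(-1.0, 1.0, grid_h),
)

cell = 2.0 / grid_w                    # width of one grid cell
jitter_u = rng.normal(scale=0.3 * cell, size=base_u.shape)
jitter_v = rng.normal(scale=0.3 * cell, size=base_v.shape)

ray_u = base_u + jitter_u              # angular coordinate of each emitted ray
ray_v = base_v + jitter_v

# A renderer that knows this exact pattern shoots exactly these rays, so the
# structured aliasing of a regular grid becomes far less objectionable noise.
```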
That assumes your pixel distribution has much higher potential addressable resolution than the signal you care about... in which case, why not just display the full raster?
Rendering for 2D and display are two different beasts -- but I'll own the fact that I'm not formally trained in this and there may be subtleties -- or even obvious signal processing facts -- that I'm getting wrong. (But if I'm wrong, I'd love to know how, for my own edification.)
> That assumes your pixel distribution has much higher potential addressable resolution than the signal you care about.
It doesn't need that at all. Light fields can be interpolated like anything else, just like the Bayer filter for color on camera sensors or 4:2:2 chroma subsampling on the signal side. But if you're doing 3-D rendering, you can match rays exactly to your distribution on the display, provided the renderer knows the distribution of rays on the display.
Interpolation is always going to reduce quality, but it's better than aliasing, so there's going to be a trade-off analysis that needs to be done. I don't know what the results of that would be, so this is all theoretical.
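For what it's worth, the naive version of that interpolation would look something like this sketch (the array shapes and cross-fade rule are my assumptions, and this is deliberately the simplistic approach rather than a proper 4D light-field reconstruction):

```python
import numpy as np

def interpolate_view(views, view_angles, target_angle):
    """views: (N, H, W, 3) rendered images; view_angles: sorted 1-D array."""
    idx = np.clip(np.searchsorted(view_angles, target_angle),
                  1, len(view_angles) - 1)
    a0, a1 = view_angles[idx - 1], view_angles[idx]
    t = (target_angle - a0) / (a1 - a0)
    # Plain cross-fade between the two nearest views: cheap, but anything off
    # the focal plane ends up ghosted/doubled rather than properly shifted.
    return (1.0 - t) * views[idx - 1] + t * views[idx]
```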
With all due respect, interpolating light fields is far less trivial than you make it out to be. It's a 4D field, and naive interpolation leads to loss of detail, and often edge doubling (itself a form of aliasing).
Furthermore, if you're interpolating rays, you're necessarily not doing what you originally proposed, which is to only light up a (random or pseudorandom or evenly distributed) subset of the pixel display elements, presumably to save on rendering cycles.
Let me just say, more generally, that intuition trained on 2D doesn't apply directly to light fields.
Compression is needed, and it's probably not that difficult: given the limited viewing angle, not much extra information is rendered compared to a normal display. You'd probably need only about twice the bandwidth if compression is done properly.
"I think most people don’t want this 1984 vision of the future, where everyone is geared up 16 hours a day."
I assume the author means the book '1984' by George Orwell and I come to the conclusion that the author has never read the book and does not know what it is about. Not every dystopian story is 1984. A far more logical reference would have been to 'Ready Player One' by Ernest Cline which is in fact about a dystopia in which people _are_ 'geared up' all the time.
I recommend both books: the first is a work of genius and is, sadly, very relevant today, and the second because it is very entertaining and might offer insights into the development of VR in the near future.
It's not a hologram, of course. It's a stack of flat displays.
That's been tried before, in many ways. The first try was a vibrating mirror.[1] There's a flat rotating mirror system from FakeSpace.[2] It's not bad; you can walk around it. Move vertically, though, and the illusion breaks down. There's a scheme with gas ionized by intersecting laser beams. That's very low-rez, but truly volumetric.
Eventually, someone may come up with a real hologram system with decent resolution. A research group at MIT built one, but it was very low resolution and single-color. It's not impossible. But this isn't it.
I'm pretty impressed by what FOVI3D [1] has been doing w/ their light field displays. Here's a recent interview from SID Display Week this year [2] that's refreshing because the CTO (in the interview) goes into some of the details of the challenges they face and isn't unrealistic about how hard it'll be to overcome them.
There's also multi-planar rear-projection screens by Lightspace[1], they essentially have a stack of liquid crystal diffusers (20 or so) that can switch between transparent or diffusing state.
Holography is a very particular set of 3D image rendering techniques. This volumetric display is not holographic. To me that's a frustration. Microsoft's hijacking of the word holography is also frustrating. Such abuse does not serve the product or its inventors.
Truly holographic displays will emerge once we can control light interference in the display.
Volumetric display isn't as appealing as holographic when it comes to marketing. And masses often don't care about details, they just want to be thrilled that each day technology brings us closer to Star Wars-like holograms.
Not the poster above, but I will try a short explanation.
You see an object, because light goes from its surface (either emitted or reflected ambient light) to your eyes. You see it in 3 dimensions, because the light differs by the viewing angle. Holography is a technology of recording the light "emitted" by the object. The light has to be recorded in direction and intensity. This is done via interference between the object light (waves) and a reference light wave, which usually is a planar wave of light.
This interference creates an interference pattern, which can be recorded on film. The trick is that if you develop the film to get the black-and-white pattern, you can shine the reference wave onto the film and it interacts with the interference pattern such that the object wave is reconstructed. A hologram is thus an exact recording of the light emitted by the object. This is something the display tries to emulate by offering 45 different images, but it's not quite the same. As the interference pattern is just a greyscale image, one could use an ultra-high-resolution LCD display to synthesize it - there have been demonstrators of that, but I am not aware of a large holographic display so far.
From the way you described it, it doesn't sound that hard to me.
- Create interference pattern
- Record interference pattern
- Shine ref light onto pattern to recover emitted light
If done perfectly, would this yield a convincing hologram (what I might try to describe as the "visual sense impression of a real 3D object being present")?
What are the major limitations in the current technology? You said it might be possible with an ultra-high-resolution screen; does that mean there is hologram tech out there? I have never seen any demonstrated.
Is there some sort of information processing problem that software could solve on the interference pattern? Or is this more a physics problem -- maybe we do not have the materials that can do the steps required?
The catch is that the patterns have features on the scale of the wavelength of light. You cannot take plain black-and-white film for this; you would need extremely high resolution film material. Agfa actually produced special film for holograms for a while, high contrast and very high resolution, but with a sensitivity like ISO 10. The required resolution is far higher than normal LCD screens offer, and that would be the main impediment. The math for calculating the interference patterns is almost trivial, but the scale is massive. Doing it in real time would still require quite a high-powered setup, as you would have to calculate hundreds of rays per "pixel".
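If it helps, here is a minimal sketch of that calculation for a single object point and a tilted planar reference wave; all the dimensions are made-up illustrative values, and a real scene is a sum over many such points:

```python
import numpy as np

wavelength = 633e-9                     # red HeNe laser, metres
k = 2 * np.pi / wavelength

# Sample a 2 mm x 2 mm patch of "film" at ~1 micron spacing -- far finer than
# any ordinary LCD pixel pitch, which is the whole problem.
size = 2e-3
n = 2000
xs = np.linspace(-size / 2, size / 2, n)
X, Y = np.meshgrid(xs, xs)

# Spherical wave from an object point 50 mm behind the recording plane,
# scaled to roughly unit amplitude at the plane.
px, py, pz = 0.0, 0.0, 50e-3
r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
object_wave = np.exp(1j * k * r) * (pz / r)

# Planar reference wave arriving at a 10-degree tilt.
reference_wave = np.exp(1j * k * X * np.sin(np.deg2rad(10.0)))

# Recorded intensity; this greyscale fringe pattern (fringes a few microns
# apart here) is what the film or display has to reproduce.
pattern = np.abs(object_wave + reference_wave) ** 2
```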
I've played around with some of Looking Glass's earlier prototypes and was impressed with the effect. It's one of those technologies like VR - very different in-person than seeing it in a video.
I've wanted to watch a Broadway play in my living room that was indistinguishable from being in the theater for close to thirty years. I'm excited because this is the closest technology that I've seen that could make this possible.
But I'm saddened if this is really the Apple II version of this technology, because if it takes another forty years I probably won't live to see it. I always imagined that there would be sensors on the floor and ceiling rather than watching it in a glass box, but if that's how it has to be I'm OK with it.
In 40 years since Apple II, technology has doubled roughly 26 Moore's, or moresies.
The same shift nowadays should take only a year and a half (40 / 26), which is about how long it would take if you bought 64 Looking Glasses and built a DIY array, and used ML to construct a 3D video from a 2D one, a la https://en.wikipedia.org/wiki/3D_reconstruction_from_multipl...
"The Looking Glass generates 45 distinct views of a three-dimensional scene"
Now that GPUs can reliably generate 60 FPS, this is the next step to push that technology. Because you'll need 45x60FPS for the same quality. And then you'll push the 45 number higher.
(yes, I know the 60 isn't visible and has to do with control input).
I think it's a little more complex than that. In addition to rasterizing a large number of pixels, the entire vertex pipeline has to run 45 times to generate 45 projections of the scene's geometry. You're correct, though, that the rest of what happens in a frame (physics, animations & other state updates) does not have to run 45x.
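A rough sketch of that split, assuming a 40-degree view cone and a unit camera distance purely for illustration:

```python
import numpy as np

# Build the 45 per-view camera transforms; everything else in the frame
# (physics, animation, state) is computed once.
NUM_VIEWS = 45
VIEW_CONE_DEG = 40.0
CAMERA_DISTANCE = 1.0

def view_matrices(num_views=NUM_VIEWS):
    """One 4x4 look-at view matrix per view, fanned across the view cone."""
    matrices = []
    for i in range(num_views):
        t = i / (num_views - 1) - 0.5                 # -0.5 .. +0.5
        angle = np.deg2rad(t * VIEW_CONE_DEG)
        # Camera slides along an arc while still looking at the origin.
        eye = CAMERA_DISTANCE * np.array([np.sin(angle), 0.0, np.cos(angle)])
        forward = -eye / np.linalg.norm(eye)
        right = np.cross(forward, [0.0, 1.0, 0.0])
        right = right / np.linalg.norm(right)
        up = np.cross(right, forward)
        m = np.eye(4)
        m[0, :3], m[1, :3], m[2, :3] = right, up, -forward
        m[:3, 3] = -m[:3, :3] @ eye
        matrices.append(m)
    return matrices

# The vertex/raster pipeline then runs once per matrix: 45 passes per frame.
```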
This is an application where ray tracing for primary rays should shine. Instead of having to project the scene 45 times, you only need 45 sets of ray bundles, which is really efficient. The acceleration structures are shared between views. With a few pixel-reordering hacks you can essentially generate all viewpoints as a single high-resolution frame.
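Roughly like this sketch, where the per-view resolution and view cone are assumed values:

```python
import numpy as np

# Build one flat bundle of primary rays covering every pixel of every view,
# then trace them all against the same acceleration structure.
VIEWS, W, H = 45, 256, 256
VIEW_CONE_DEG = 40.0

def primary_rays():
    """Return (origins, directions), each of shape (VIEWS * H * W, 3)."""
    xs = np.linspace(-0.5, 0.5, W)
    ys = np.linspace(-0.5, 0.5, H)
    px, py = np.meshgrid(xs, ys)
    # Shared virtual image plane at z = 0; cameras orbit at radius 1.
    targets = np.stack([px, py, np.zeros_like(px)], axis=-1).reshape(-1, 3)

    origins, directions = [], []
    for i in range(VIEWS):
        t = i / (VIEWS - 1) - 0.5
        angle = np.deg2rad(t * VIEW_CONE_DEG)
        eye = np.array([np.sin(angle), 0.0, np.cos(angle)])
        d = targets - eye
        d = d / np.linalg.norm(d, axis=-1, keepdims=True)
        origins.append(np.broadcast_to(eye, targets.shape))
        directions.append(d)

    # To the tracer this is just one huge frame of rays; mapping the results
    # back to per-view images is only a pixel reshuffle.
    return np.concatenate(origins), np.concatenate(directions)
```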
You're right. This is essentially meaningless: "Now that GPUs can reliably generate 60 FPS ...."
That depends entirely on what you're rendering. You could reliably hit over 60 FPS for decades—for some content. And now you can still render at .000000001 FPS for other content.
Saying "GPUs can generate 60 FPS" is presenting the situation as if framerate were a function of hardware only, whereas the reality of the situation is that it depends at least as much on the software.
Yeah, this is totally cart-before-the-horse. Cards aren't built to target 60fps specifically. They're built to perform as many operations as possible. Then content creators push the card as hard as they can while attempting to maintain 60fps. The graphics engine authors out there are always clamouring for more ops, with active plans for how to use them. So any extra memory or ops the GPU makers provide will almost immediately be used up. Further complicating things, display technology is always advancing, adding more pixels to render.
I remember the moment when Crytek released a real-time raytraced demo of one of their games running on the best hardware available at the time. It felt like the hardware was finally capable and now it would be a slow march to the end of raster graphics. Then 4K displays came along, totally exploded the number of pixels to render, and that was pretty much the end of that talk, at least for a couple of decades.
I should have said something different and not even mentioned a specific framerate. What I meant to say is that GPUs nowadays can handle most rendering with ease, but the product in this link will need a lot more.
For one, prerendered 3D video could go as detailed as the transfer pipe & display would allow.
For another, the worst case is that each view gets only 1/45th of the rendering budget, but I would think that might improve as 3D programmers figure out optimizations for multi-angle view rendering.
You’d only render angles from which anyone was looking though? So with a single observer it’s only 2 angles (one per eye) just like VR. This of course assumes perfect tracking of the eyes of any observer
This is a light field display, which doesn't require any tracking of the users' eyes, which allows for a simpler setup and multi-user experiences but at the cost of needing to render lots of viewpoints.
My first inclination was to dismiss it as a gimmick, and really it is a gimmick, but it is so damn cool that I want one, especially if the APIs are relatively good.
Even if it's a bit of a gimmick in its current state, it's a great step towards something truly useful. Technology like this could be amazing in fields like medicine and education.
If you were smart you could encode multiple views into a single pixel to get a higher resolution, but then I figure that would limit your colour depth.
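A toy sketch of that trade-off (not how the Looking Glass actually encodes its views): pack three greyscale views into the R, G and B channels of one image, gaining views per pixel but giving up colour.

```python
import numpy as np

def pack_views(view_r, view_g, view_b):
    """Each input: (H, W) greyscale image in [0, 1]. Returns (H, W, 3) uint8."""
    # Three views ride in one RGB pixel; the cost is that colour is gone.
    packed = np.stack([view_r, view_g, view_b], axis=-1)
    return (np.clip(packed, 0.0, 1.0) * 255).astype(np.uint8)
```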
This is only really valuable for multi-user scenarios. Just going with head tracking would likely be much easier to introduce, as you don't need specialized hardware (although it helps to improve quality). I remember this from a years-and-years-old demo, made on the PlayStation I think; anyhow, here are the top results on this tech from Google:
I can immediately imagine how several doctors could use such a display for e.g. a tomogram or stereo-microscope output during a surgery, or something like that.
For us the DVI bandwidth was the limiting factor to deliver 40-50 views at reasonable framerates (besides raw GPU computing power), so our display actually had 8(!) DVI inputs. That also gave us a natural interface to add distributed rendering, supporting up to 8 GPUs for rendering. In most cases though, one monster PC with 3 GPUs and 5 DVIs was enough to produce interactive framerates.
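For a sense of scale, a quick back-of-the-envelope (single-link DVI tops out at a 165 MHz pixel clock; the view count, resolution and framerate below are illustrative assumptions rather than our actual numbers):

```python
# Rough numbers behind the DVI bottleneck.
link_pixels_per_sec = 165e6          # single-link DVI pixel clock ceiling
views = 45                           # assumed view count
view_w, view_h = 640, 480            # assumed per-view resolution
fps = 30                             # assumed target framerate

pixels_per_sec = views * view_w * view_h * fps      # ~415 million pixels/s
links_needed = pixels_per_sec / link_pixels_per_sec
print(f"~{links_needed:.1f} single-link DVI connections needed")   # ~2.5
```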
Sounds like a cool idea but if you use the first name that comes to mind you’re bound to not be the first one who thought of it. So there already are a ton of things called Looking Glass.
This is similar to what I imagined when I first read the initial press release for Magic Leap (years ago now), albeit on a smaller scale. This looks far more promising for the entertainment industry than VR/AR goggles. My only nit is that they didn't use the dancing baby (from ~1998/99) for the demo video.
Pretty cool, and probably will be useful in certain industries, but the title is extremely clickbait-y. The movie holograms that are listed don't live inside a glass box.