Site won't load for me but I read about this elsewhere. While this first gen product will surely be expensive and somewhat limited (capturing a cubic meter of light field) it's incredibly interesting to me.
I've been messing with VR stuff for the past year or so and I still think that the "killer app" in 5-10 years won't be fancier video games but more like immersive "movies" or VR telepresence. We're at that stage of the hype curve where expectations are high as gen-one products come out, and these will surely be underwhelming to many early users. But once depth cameras and light field cameras are developed and improved (along with the ergonomics of head-mounted displays and the speed of processors), I think we'll start to see more in the way of non-game applications.
Just from personal experience, playing video games in VR is cool and all but when people can watch a movie from the perspective of someone inside the scene or meet up with people in a live-scanned remote environment, I think they'll be hooked.
I'm in a wheelchair. Having someone take a future-Lytro-scanner into a building and send me the result would let me answer the "is this building accessible" question much more easily. I was hoping for this kind of thing with Google's Project Tango but it didn't eventuate.
Easy, match up light field cameras with light field displays. Think polarized 3D screens, except a billion times better - they can project the light as if the objects literally are just behind the screen.
Add eye tracking and put the screens across every surface and you can make the views follow your field of vision.
Or you use AR glasses that essentially project images in front of you.
AR = augmented reality. You still see your own world and your own spoon and bowl. You can definitely have the projected view anchored in space above the physical things you interact with.
I can hear the complaints already: "Ouch! Dropbox Sync just kicked in and lagged my AR while I was eating dinner. Now I've got hot ramen all over me... fml."
> Just from personal experience, playing video games in VR is cool and all but when people can watch a movie from the perspective of someone inside the scene or meet up with people in a live-scanned remote environment, I think they'll be hooked.
Exactly, I definitely want to feel this in the first fps movie.
Hot pursuit, Steven Seagal's fights, gunfights... It should be freaking immersive.
Well, no. That would be pretty awful in my experience. First person in VR is already dodgy if you're watching/playing from a chair and your avatar is standing or walking. Still, the only thing worse than a mismatch between your position and the avatar's position is a first-person viewpoint where you don't control the camera or movement.
Watching a movie from the first-person perspective of an actor would be one of the barfiest things you could do in VR. Ideally you're observing things from a vantage point but everything is in 3d with full 360-degree spherical range of motion. You can lean around or closer to look at something.
Now, even a mostly fixed viewpoint can be disorienting when you don't have a "body" to look down at (you feel like you're floating and disembodied) but it's probably second to the motion issues. Not sure if people will develop conventions that become generally accepted (like some sort of placeholder/amorphous body for reference) but overall, think more "fly on the wall" than "I'm Duke Nukem!"
What is going to be cool is stuff like standing right on or slightly above the field in the middle of a sports event, or right on stage at a concert next to Mick Jagger and being able to move around a bit.
Unfortunately there will need to be a globe roughly the size of the range of motion somewhere to capture the light rays... at least for now until future physics are invented.
If you dropped one of these in place of the wire cam over {insert field sport here}, millions of dollars.
You wouldn't get more than limited head tracking (or live motion, unless you turned off tracking and turned it into a virtual window), but being able to have presence 10' above the QB in American football for each snap?
$$$s where you start counting the zeros instead of the digit in front.
Imagine every roller-coaster video clip you've seen being filmed with this thing, every 360° video on YouTube, putting these on drones, etc...
Oh, and every high security facility in the world would love these as they're capable of mapping out 3D positioning, enabling very precise face detection, etc...
And imagine live concerts sent to be displayed on VR headsets!
Also, in many cases you can likely use 4-6 in a grid to capture the entire scene.
Remember: desync'd head motion w/ virtual camera perspective motion = violent nausea.
So limit the applicable use cases to where that isn't going to happen.
The excitement about this is that it makes that no longer a concern for sufficiently small values of head motion. Read: the motion that occurs when the rest of your body is not moving.
One other thing to consider is that of trauma/PTSD. Things that we find exhilarating in more traditional media may in fact be too stressful/terrifying in VR. Horror experiences in particular seem to be proving this out, but I could see the same for the type of extreme violence that currently seems fun or exciting onscreen.
Example: the movie The Road was quite true to the book, but much more viscerally horrifying. A VR "you're there" version would indeed be significantly ... less fun.
I think the opposite actually. First person perspective of someone else will be the least exciting way to do a movie. I think it will be more like a really, really awesome guided tour of the movie's sets as the plot progresses. You're there, doing your own thing, examining what is most interesting to you, as the plot unfolds around you.
I hope for Lytro's sake that they release real open-source playback tools. AFAICT, if you have a first-generation Lytro camera right now, you're pretty much stuck viewing the pictures you take in a Flash app, and I'd be very hesitant to buy a camera of any sort that gives no real assurance that the picture format will still be usable in ten (or twenty or fifty) years.
Everyone has access to a JPEG decoder. Lytro, not so much.
Lytro switched away from Flash to a JPEG/JavaScript player in 2013, then to a WebGL player in late 2014. There were some open source player controls floating around last year but I can't recall the name. There were also some open source tools to extract the JPEG stack from the .LFP file as well, which you could use to build a player of your own. I worked at Lytro from 2012-2015 and we'd see occasional updates from those tools.
Is it really a JPEG stack? I had assumed there was more magic (i.e. math and clever compression) than that. Or is it a JPEG stack with extra fanciness?
I'd love to read up on how this stuff actually works. I know how to calculate the theoretical information capacity in a light field (hint: very very dense) but basically nothing about how people manage to munge it into something compact and useful.
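To put "very very dense" in rough numbers, here's an illustrative back-of-envelope calculation; the resolutions below are made-up assumptions for the sake of the arithmetic, not anything Lytro has published:

```python
# Back-of-envelope size of one uncompressed light field "frame".
# All resolutions here are illustrative assumptions, not Lytro specs.
spatial = 1080 * 1080      # spatial samples per view (s, t)
angular = 64 * 64          # directional samples per spatial point (u, v)
bytes_per_px = 3           # 8-bit RGB
fps = 30

frame_bytes = spatial * angular * bytes_per_px
per_second = frame_bytes * fps
print(f"{frame_bytes / 1e9:.1f} GB per frame, {per_second / 1e12:.2f} TB/s raw")
```

Even at these modest numbers you're around 14 GB per uncompressed frame, which is why practical systems lean so hard on the heavy redundancy between neighboring views when compressing.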
Well, the raw picture (light field) from the camera looks like something from an insect's eye - a hexagonal grid of tiny pictures saved into one giant jpeg [1]. The Lytro software processes that data into a stack of jpegs of reconstructed views at varying focal lengths when you first import the image.
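For anyone curious how the refocused stack can be computed, the textbook operation is "shift-and-sum": resample the microlens data into a grid of sub-aperture views, shift each view in proportion to its offset from the aperture center, and average. A minimal sketch (the array layout here is an assumption for illustration, not Lytro's actual format or pipeline):

```python
import numpy as np

def refocus(subviews, alpha):
    """Shift-and-sum refocusing over a grid of sub-aperture views.

    subviews: array of shape (U, V, H, W), one grayscale image per
              viewpoint on the aperture (illustrative layout).
    alpha:    relative focal depth; 0 leaves the views unshifted.
    """
    U, V, H, W = subviews.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view proportionally to its offset from the
            # center of the aperture, then average all views.
            du = int(round(alpha * (u - (U - 1) / 2)))
            dv = int(round(alpha * (v - (V - 1) / 2)))
            out += np.roll(np.roll(subviews[u, v], du, axis=0), dv, axis=1)
    return out / (U * V)
```

Sweeping alpha through a range of values is essentially how you'd generate a focal stack like the one described above.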
They were founded in 2006, their first generation cameras were being unloaded on Woot last year, and there was a flash sale on their second generation yesterday. It'd be great if they finally recognized that a proprietary format was blocking their success, but...
> It'd be great if they finally recognized that a proprietary format was blocking their success, but...
FWIW, I also think that Lytro has been seriously off the mark in how they market the output of their cameras: as weird black boxes for end-viewers to play with. IMO, the idea that an audience of image viewers wants to fiddle with the light-field knobs themselves (e.g. post-shot focus) actively hurts Lytro's products.
The holy grail here is a light field camera that obsoletes the need for focus the way that digital cameras have obsoleted the need for color filters used in B&W film photography[1]. Unfortunately, Lytro's first product was well before its time. Their tech didn't have adequate still-image resolution compared to even a halfway-decent phone camera, certainly not enough to make the focus vs quality tradeoff make much sense vs conventional cameras.
Lytro's approach is worth comparing to Apple's Live Photos in the iPhone 6s. Live Photos clearly aren't for every shooting situation, but AFAICT, they're becoming quite popular for what I'll dub "affective photography": moments of kids, pets, friends, etc. We'll see how well this feature lasts past its novelty phase, but it's got a far better hook into what motivates end-users than Lytro has ever had.
[1] A quick primer: colored filters were used with black and white film to change the relative contrast of different objects in the scene, e.g. sky vs skin tones vs grass, etc. A skillful photographer learned to look at the colors in a scene and pick a filter that would produce the desired B&W rendering. Converting color digital images to B&W allows these rendering decisions to be pushed into post, with more creative freedom than was ever possible with film (or the odd, rare B&W digital sensor).
This is pretty exciting - it looks like Lytro finally found a good use for light-field photography. Trying to shoehorn the concept into a camera product was never going to take off - as admirably as the company tried, refocusing an image in post ultimately proved to be a novelty that the market was never going to consider worth the trade-offs in resolution and convenience.
This has the look of a perfect pivot: capturing light fields could turn out to be the key to VR video, and Lytro is poised to build the ARRI Alexa of the industry.
This is the first time I have ever heard of "light-field photography" and I am at a loss as to why this is exciting or a critical technology for VR. I don't get it. Why is having a grid of slight perspective shifts better than just having two perspectives, one for each eye?
This looks and sounds incredibly cool, but I have no real idea what it does.
Could someone with domain knowledge give me a bit of a high level explanation?
I have inferred that it's something you use while shooting footage to capture how light is present in the room at all of the different possible angles. Is this then used to be able to generate new scenes that VR headsets can walk around and interact with? Or is this more for being able to add computer generated images into the room and light it properly?
It's more the latter; it doesn't actually capture an image the way a camera would. Instead it captures and records every single light ray and reconstructs the environment based upon that.
That would then allow you to overlay computer generated images onto basically a heightmap that has 100% accuracy to the real world.
Actually, he was right with his first guess. This isn't about capturing film with a heightmap like RGB-D cameras do; it's about capturing all light that goes through a volume of space from all angles. If you can fully capture all light that enters a volume of space, you can accurately recreate a view from any point in that space. So, in VR I can move my head around, both by turning it and by moving forward, backward and side to side, and accurately see any view.
This does also allow adding CG elements, but its real strength is being able to capture enough of the light coming through that you can almost 'teleport' to where the video was recorded.
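The standard mental model here is the two-plane ("light slab") parameterization from light field rendering: every ray is indexed by where it crosses two parallel planes, so synthesizing the view from a new head position is just looking up the rays that pass through the new eye point. A toy sketch of the ray indexing (the plane positions and coordinate conventions are illustrative assumptions):

```python
def ray_to_uvst(eye, pixel_dir, z_u=0.0, z_s=1.0):
    """Index a ray by its crossings of two parallel planes at z=z_u and z=z_s.

    Returns the (u, v, s, t) coordinates used to look the ray up in a
    two-plane ("light slab") light field parameterization.
    """
    ex, ey, ez = eye
    dx, dy, dz = pixel_dir
    # Parameter along the ray at which it hits each plane.
    tu = (z_u - ez) / dz
    ts = (z_s - ez) / dz
    u, v = ex + tu * dx, ey + tu * dy
    s, t = ex + ts * dx, ey + ts * dy
    return u, v, s, t
```

Head tracking then amounts to recomputing (u, v, s, t) for every screen pixel each frame and sampling the stored radiance; the capture volume is what bounds the set of eye positions for which those rays actually exist.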
The editor informs me that HN has crushed his site, though. Should be back up soon, I hope. (Good argument for mirroring my own articles on my otherwise-defunct blog, I suppose.)
uploadvr_will, you've been marked as [dead], probably due to spam filtering on new accounts. Most users won't be able to see your posts, but you should be able to contact the admins to get it fixed.
I think DanBC inadvertently gave the wrong (first) link for the original announcement of this feature. It was https://news.ycombinator.com/item?id=10298512. The link Dan gave was when we announced some other stuff, that you also should know about. :)
This is very cool technology. The challenge it highlights is that there is no opportunity to have lighting or grips or special effects "out of frame" because the entire visible space is "in-frame". All those shots of untouched wilderness, or of actors pretending to drive or getting yanked around on wire harnesses, become much much harder since you'll need to create a believable 3D representation of whatever traditionally would not be visible.
Where does the camera man sit and how does he/she move the camera?
This basically forces you into stable shots only which are far less compelling. Ring type VR cameras allow the cameraman to sit in the middle of the ring while moving the scene for far more interesting results.
You do not want a moving camera for VR cinema, especially not if you are using a lightfield camera. If camera moves, the user is struck with the nauseating illusion that space is moving around them.
Put the camera on a dolly or other moving platform. We've been doing that with movie cameras for quite a while. Now add remote control, which is probably also common in the film industry, for example to control those swooping crane shots.
Really cool. I guess none of the current display tech (aside from Magic Leap) is able to display light fields properly though? I mean natural DOF depending on where your eyes are focused.
That hardware adds variable focal length to each pixel. But, even without that, simply being able to move your head around inside the captured scene with the current tech is 90% of the effect.