Lytro announces light field VR video camera (uploadvr.com)
156 points by ryandamm on Nov 5, 2015 | 72 comments



Site won't load for me, but I read about this elsewhere. While this first-gen product will surely be expensive and somewhat limited (capturing a cubic meter of light field), it's incredibly interesting to me.

I've been messing with VR stuff for the past year or so, and I still think that the "killer app" in 5-10 years won't be fancier video games but more like immersive "movies" or VR telepresence. We're at that stage of the hype curve where expectations are high as gen-one products come out. While these will surely be underwhelming to many early users, I think that once depth cameras and light field cameras are developed and improved (along with the ergonomics of head-mounted displays and the speed of processors), we'll start to see more in the way of non-game applications.

Just from personal experience, playing video games in VR is cool and all but when people can watch a movie from the perspective of someone inside the scene or meet up with people in a live-scanned remote environment, I think they'll be hooked.


I'm in a wheelchair. Having someone take a future-Lytro-scanner into a building and send me the result would let me answer the "is this building accessible" question much more easily. I was hoping for this kind of thing with Google's Project Tango but it didn't eventuate.


Working hard to get the site back up and running as we speak.

EDIT: Site is live now!


I like to watch movies with friends; how do we do that in VR?

I like to watch movies while eating dinner; how do I do that in VR?


Easy: match up light field cameras with light field displays. Think polarized 3D screens, except a billion times better; they can project the light as if the objects were literally just behind the screen.

Add eye tracking and put the screens across every surface and you can make the views follow your field of vision.

Or you use AR glasses that essentially project images in front of you.


How do I shovel food into my maw when I'm wearing VR goggles? What happens when my spoon misses the bowl?


AR = augmented reality. You still see your own world and your own spoon and bowl. You can definitely have the projected view anchored in space above the physical things you interact with.


I can hear the complaints already: "Ouch! Dropbox Sync just kicked in and lagged my AR while I was eating dinner. Now I've got hot ramen all over me... fml."


There's transparent-screen AR, like the HoloLens and the Meta glasses from http://www.spaceglasses.com


> Just from personal experience, playing video games in VR is cool and all but when people can watch a movie from the perspective of someone inside the scene or meet up with people in a live-scanned remote environment, I think they'll be hooked.

Exactly. I definitely want to experience this in the first FPS movie.

Hot pursuit, Steven Seagal's fights, gunfights... it should be freaking immersive.


Well, no. That would be pretty awful in my experience. First person in VR is already dodgy if you're watching/playing from a chair and your avatar is standing or walking. Still, the only thing worse than a mismatch between your position and the avatar's position is a first-person viewpoint where you don't control the camera or movement.

Watching a movie from the first-person perspective of an actor would be one of the barfiest things you could do in VR. Ideally you're observing things from a vantage point, but everything is in 3D with a full 360-degree spherical range of motion. You can lean around, or lean in closer, to look at something.

Now, even a mostly fixed viewpoint can be disorienting when you don't have a "body" to look down at (you feel like you're floating and disembodied) but it's probably second to the motion issues. Not sure if people will develop conventions that become generally accepted (like some sort of placeholder/amorphous body for reference) but overall, think more "fly on the wall" than "I'm Duke Nukem!"


Exactly.

What is going to be cool is stuff like standing right on or slightly above the field in the middle of a sports event, or right on stage at a concert next to Mick Jagger and being able to move around a bit.

Unfortunately, there will need to be a capture globe roughly the size of your range of motion somewhere to record the light rays... at least for now, until future physics are invented.


If you dropped one of these in place of the wire cam over {insert field sport here}: millions of dollars.

You wouldn't have more than limited head tracking (or live motion without turning off tracking and turning it into a virtual window), but being able to have presence from 10' above the QB in American football for each snap?

$$$s where you start counting the zeros instead of the digit in front.


Imagine every roller-coaster video clip you've seen being filmed with this thing, every 360° video on YouTube, putting these on drones, etc...

Oh, and every high security facility in the world would love these as they're capable of mapping out 3D positioning, enabling very precise face detection, etc...

And imagine live concerts sent to be displayed on VR headsets!

Also, in many cases you can likely use 4-6 in a grid to capture the entire scene.


Remember: desync'd head motion w/ virtual camera perspective motion = violent nausea.

So limit the applicable use cases to where that isn't going to happen.

The excitement about this is that it makes that no longer a concern for sufficiently small values of head motion. Read: the motion that occurs when the rest of your body is not moving.


Yeah. Everyone in the whole world will simultaneously be able to have better than the current best seats in the house.

Courtside? The 50-yard line? Right by the mid tower in Dota 2?


Won't there be an issue with the ball from {sport} hitting the camera?


Not if they do it at the current height. I know the NFL already has a wire cam: https://en.m.wikipedia.org/wiki/Skycam#

And I'd imagine a fair amount of the current height is not due to interference, but rather getting the best perspective.


One other thing to consider is that of trauma/PTSD. Things that we find exhilarating in more traditional media may in fact be too stressful/terrifying in VR. Horror experiences in particular seem to be proving this out, but I could see the same for the type of extreme violence that currently seems fun or exciting onscreen.


Example: the movie The Road was quite true to the book, but much more viscerally horrifying. A VR "you're there" version would indeed be significantly... less fun.


I think the opposite actually. First person perspective of someone else will be the least exciting way to do a movie. I think it will be more like a really, really awesome guided tour of the movie's sets as the plot progresses. You're there, doing your own thing, examining what is most interesting to you, as the plot unfolds around you.


I hope for Lytro's sake that they release real open-source playback tools. AFAICT, if you have a first-generation Lytro camera right now, you're pretty much stuck viewing the pictures you take in a Flash app, and I'd be very hesitant to buy a camera of any sort that gives no real assurance that the picture format will still be usable in ten (or twenty or fifty) years.

Everyone has access to a JPEG decoder. Lytro, not so much.


Lytro switched away from Flash to a JPEG/JavaScript player in 2013, then to a WebGL player in late 2014. There were some open source player controls floating around last year but I can't recall the name. There were also some open source tools to extract the JPEG stack from the .LFP file as well, which you could use to build a player of your own. I worked at Lytro from 2012-2015 and we'd see occasional updates from those tools.


Huh, guess I'm out of date here.

Is it really a JPEG stack? I had assumed there was more magic (i.e. math and clever compression) than that. Or is it a JPEG stack with extra fanciness?

I'd love to read up on how this stuff actually works. I know how to calculate the theoretical information capacity in a light field (hint: very, very dense) but basically nothing about how people manage to munge it into something compact and useful.


At its core, the viewer works by just averaging the JPEG stack at varying offsets as described in this student project: https://inst.eecs.berkeley.edu/~cs194-26/fa14/upload/files/p...

I haven't looked into how they make everything efficient, but the source code for their Flash viewer is here: http://lightfield.stanford.edu/aperture.html
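
For a flavor of how simple the core idea is, here's a minimal numpy sketch of that shift-and-add scheme (not Lytro's actual code; it assumes you already have the sub-aperture views decoded into arrays, and it uses integer shifts for brevity):

    import numpy as np

    def refocus(subviews, shift_scale):
        # subviews: dict mapping aperture grid coords (u, v) to HxWx3
        # float arrays, each a view from a slightly different spot on
        # the lens. shift_scale sets pixels of shift per unit of
        # aperture offset; sweeping it moves the synthetic focal plane.
        us = [u for u, v in subviews]
        vs = [v for u, v in subviews]
        uc, vc = np.mean(us), np.mean(vs)
        acc = None
        for (u, v), img in subviews.items():
            dy = int(round((u - uc) * shift_scale))
            dx = int(round((v - vc) * shift_scale))
            shifted = np.roll(img, (dy, dx), axis=(0, 1))
            acc = shifted if acc is None else acc + shifted
        return acc / len(subviews)

Objects at the depth where the shifted views line up come out sharp; everything else averages into blur.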


Well, the raw picture (light field) from the camera looks like something from an insect's eye: a hexagonal grid of tiny pictures saved into one giant JPEG [1]. The Lytro software processes that data into a stack of JPEGs of reconstructed views at varying focus depths when you first import the image.

[1] see for example http://lightfield-forum.com/2012/07/lytro-hack-how-to-extrac...
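
If you pretend the microlens grid is rectangular rather than hexagonal (the real decode has to resample first), slicing that raw image into sub-aperture views is just strided indexing. A toy sketch under that simplifying assumption:

    import numpy as np

    def to_subviews(raw, lenslet_px):
        # raw: H x W plenoptic image, H and W multiples of lenslet_px,
        # assuming (unrealistically) a square grid of lenslet_px-pixel
        # microlenses. Pixel (u, v) under every lenslet, gathered
        # across the whole sensor, forms one view through one part of
        # the aperture.
        views = {}
        for u in range(lenslet_px):
            for v in range(lenslet_px):
                views[(u, v)] = raw[u::lenslet_px, v::lenslet_px]
        return views

Those per-aperture views are exactly what a shift-and-add refocuser consumes.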


The Light Field Toolbox by Donald Dansereau lets you view and modify Lytro light fields and is open source; I've used it before.

http://www.mathworks.com/matlabcentral/fileexchange/49683-li...


Hopefully it's just a timing issue and not a deliberate play for control.


They were founded in 2006, their first-generation cameras were being unloaded on Woot last year, and there was a flash sale on their second generation yesterday. It'd be great if they finally recognized that a proprietary format was blocking their success, but...


> It'd be great if they finally recognized that a proprietary format was blocking their success, but...

FWIW, I also think that Lytro has been seriously off the mark in how they market the output of their cameras: as weird black boxes for end-viewers to play with. IMO, the idea that an audience of image viewers wants to fiddle with the light-field knobs themselves (e.g. post-shot focus) actively hurts Lytro's products.

The holy grail here is a light field camera that obsoletes the need for focus the way digital cameras have obsoleted the color filters used in B&W film photography [1]. Unfortunately, Lytro's first product was well before its time: their tech didn't have adequate still-image resolution compared to even a halfway-decent phone camera, certainly not enough to make the focus-vs-quality tradeoff worthwhile against conventional cameras.

Lytro's approach is worth comparing to Apple's Live Photos in the iPhone 6s. Live Photos clearly aren't for every shooting situation, but AFAICT, they're becoming quite popular for what I'll dub "affective photography": moments of kids, pets, friends, etc. We'll see how well this feature lasts past its novelty phase, but it's got a far better hook into what motivates end-users than Lytro has ever had.

[1] A quick primer: colored filters were used with black and white film to change the relative contrast of different objects in the scene, e.g. sky vs skin tones vs grass, etc. A skillful photographer learned to look at the colors in a scene and pick a filter that would produce the desired B&W rendering. Converting color digital images to B&W allows these rendering decisions to be pushed into post, with more creative freedom than was ever possible with film (or the odd, rare B&W digital sensor).
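
In digital terms that rendering decision is just a weighted channel mix, something like this numpy sketch (the weights are illustrative, not any standard):

    import numpy as np

    def filtered_bw(rgb, weights):
        # rgb: H x W x 3 float array in [0, 1]. weights: per-channel
        # (r, g, b) mix, normalized to sum to 1, standing in for the
        # physical filter that was once screwed onto the lens.
        w = np.asarray(weights, dtype=float)
        return rgb @ (w / w.sum())

    img = np.random.rand(480, 640, 3)                    # stand-in photo
    dark_sky = filtered_bw(img, (0.8, 0.15, 0.05))       # "red filter" look
    bright_foliage = filtered_bw(img, (0.2, 0.7, 0.1))   # "green filter" look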


It'd be great if they put some resources into this, I agree - and it would have helped to open up their platform.


This is pretty exciting: it looks like Lytro finally found a good use for light-field photography. Trying to shoehorn the concept into a consumer camera was never going to take off; as admirably as the company tried, refocusing an image in post ultimately proved a novelty that the market was never going to consider worth the trade-offs in resolution and convenience.

This has the look of a perfect pivot: capturing light fields could turn out to be the key to VR video, and Lytro is poised to build the ARRI Alexa of that industry.


This is the first time I have ever heard of "light-field photography", and I am at a loss as to why this is exciting or a critical technology for VR. I don't get it. Why is having a grid of slight perspective shifts better than just having two perspectives, one for each eye?


The viewer's head isn't stationary, so you have to have a perspective for each possible eye location.


There seem to be a lot of questions about what a light field is. Lytro just put up a great blog post that gives a good introduction: http://blog.lytro.com/post/132599659620/what-is-light-field



Site is giving me a 503.

Video here https://vimeo.com/144034085


Bullshit ends and content starts at 1:10.


This looks and sounds incredibly cool, but I have no real idea what it does.

Could someone with domain knowledge give me a bit of a high level explanation?

I have inferred that it's something you use while shooting footage to capture how light is present in the room at all of the different possible angles. Is this then used to be able to generate new scenes that VR headsets can walk around and interact with? Or is this more for being able to add computer generated images into the room and light it properly?


It's more the latter; it doesn't capture an image the way a camera would. Instead it captures and records every single light ray and reconstructs the environment based upon that.

That would then allow you to overlay computer-generated images onto what is basically a heightmap with 100% accuracy to the real world.

At least that's what I got from the video.


Actually, he was right with his first guess. This isn't about capturing film with a heightmap like RGBD cameras do; it's about capturing all the light that goes through a volume of space, from all angles. If you can fully capture all the light that enters a volume of space, you can accurately recreate the view from any point in that space. So in VR I can move my head around, both by turning it and by moving forward, backward, and side to side, and accurately see every view.

This does also allow adding CG elements, but its real strength is being able to capture enough of the light coming through that you can almost "teleport" to where the video was recorded.
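
To make that concrete, here's a toy "flatland" sketch (one spatial dimension, nearest-neighbor lookup, invented coordinates, so an illustration of the principle rather than anyone's actual renderer) of how a captured two-plane light field answers "what do I see from here?" for an arbitrary eye position:

    import numpy as np

    def render_eye_view(lf, eye_x, eye_dist, plane_gap):
        # lf[u, s]: radiance of the ray crossing the camera plane at
        # coordinate u (z = 0) and the scene plane at s (z = plane_gap).
        # The eye sits at (eye_x, z = -eye_dist). For each scene point
        # s, trace the eye->s ray back to the camera plane and grab the
        # nearest captured sample.
        nu, ns = lf.shape
        out = np.empty(ns)
        for s in range(ns):
            u = eye_x + (s - eye_x) * eye_dist / (eye_dist + plane_gap)
            out[s] = lf[int(round(np.clip(u, 0, nu - 1))), s]
        return out

Moving eye_x re-reads the same scene points through different captured rays, which is exactly the head-motion parallax described above.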


Seems roughly analogous to the holographic universe idea where all the information in a space is captured on the surface....


You're spot on. Holograms, as in the images like on a credit card, do actually record light fields.


If you want a dense, technical explanation, I wrote about this hypothetically back in April, for the same site:

http://uploadvr.com/what-the-bleep-is-lytro-doing-in-vr/

The editor informs me that HN has crushed his site, though. Should be back up soon, I hope. (Good argument for mirroring my own articles on my otherwise-defunct blog, I suppose.)


It crushed it for a bit, but now it is back and strong


uploadvr_will, you've been marked as [dead], probably due to spam filtering on new accounts. Most users won't be able to see your posts, but you should be able to contact the admins to get it fixed.


If you click the timestamp you should be taken to a page that has a "vouch" link. So non-mods can now vouch for dead posts.

Here's the announcement: https://news.ycombinator.com/item?id=10223645

And here's a follow up: https://news.ycombinator.com/item?id=10478886


Neat, I missed that! Thanks for the heads up.


I think DanBC inadvertently gave the wrong (first) link for the original announcement of this feature. It was https://news.ycombinator.com/item?id=10298512. The link Dan gave was when we announced some other stuff, that you also should know about. :)


This is very cool technology. The challenge it highlights is that there is no opportunity to have lighting or grips or special effects "out of frame", because the entire visible space is "in frame". All those shots of untouched wilderness, or of actors pretending to drive or getting yanked around on wire harnesses, become much, much harder, since you'll need to create a believable 3D representation of whatever traditionally would not be visible.


Where does the cameraman sit, and how does he or she move the camera?

This basically forces you into static shots only, which are far less compelling. Ring-type VR cameras let the cameraman sit in the middle of the ring while moving through the scene, for far more interesting results.

EX: http://www.nytimes.com/newsgraphics/2015/nytvr/


You do not want a moving camera for VR cinema, especially not if you are using a light field camera. If the camera moves, the user is struck with the nauseating illusion that space is moving around them.


Have you got an image of the NYT camera rig? AFAIK their cameraman gets out of the way too, and the rig sits on a tripod.

http://www.nytimes.com/2015/11/08/magazine/virtual-reality-a...


Put the camera on a dolly or other moving platform; we've been doing that with movie cameras for quite a while. Now add remote control, which is probably also common in the film industry, for example to control those swooping crane shots.


Why couldn't he wear the sphere like a helmet? (At least in principle)


If this opens up the possibility of (easily) isolating objects by depth, as in deep compositing, then that alone would be worth the price of admission.


Really cool. I guess none of the current display tech (aside from Magic Leap) is able to display light fields properly, though? I mean natural depth of field depending on where your eyes are focused.


Nvidia had a prototype a while back http://lightfield-forum.com/light-field-camera-prototypes/nv...

That hardware adds variable focal length to each pixel. But, even without that, simply being able to move your head around inside the captured scene with the current tech is 90% of the effect.


Can you only move your head "within" the bounds of the sphere?


You can move beyond it, but the view will either fade to black or show up at much, much lower resolution.


thanks!


According to them, you have about 1 cubic metre to move around in.


thanks!


So that's why their older cameras are selling at a heavy discount on Amazon. I thought something was coming around the corner.


Here is a "handmade" implementation of this concept made with normal cameras.

https://home.otoy.com/otoy-demonstrates-first-ever-light-fie...

Edit: it's actually mentioned in the article, which I couldn't see at the time...


That's cool. Do you know if they have a version I could try out in the Oculus? It just looks like a YouTube video on that site.


"demo video" is defined in small print as artistic impression and conceptual rendering. Dont get too hyped.


Hmm, I'll believe this when I see it (and not the ridiculously faked demos in this video).


I want one.


Lytro is and always will be novelty shenanigans.

Until they open up their file standards for real and get some open source project going, they will continue to be ignored by every pro and enthusiast.


Yes, no pro works with RED or ARRI cameras due to their closed file formats and lack of open source projects.


Aren't proprietary RAW formats pretty much standard for cameras?

The difference at this point is that I can use the RED format with industry-standard software.



