And the thing about technology is, it keeps moving forward.
I was remarking the other day about some neighbors who caught the folks who broke into their house: a person across the street had a 1080p video camera that caught the burglars going over the side fence (and, fortunately, facing the street).
And there's my own personal experience of "Imagine you could play a game where the graphics card would realistically render the battle from any angle in real time! No way, too much compute needed," where I completely missed how much multi-core GPUs would grow.
What this research says is that when you have pictures of people from high-resolution, high-dynamic-range cameras, you will be able to pull the faces of nearby people out of corneal reflections. That won't happen today, obviously, and it might not happen five years from now, but don't count it out as "never."
It seems there would be a point at which you reach maximum information density with camera sensors. Not due to limitations of technology, but the nature of light itself and the optics.
Optics is nowhere near my specialty, but it does seem to me that you can only capture so much light with a lens, no matter how optically perfect it is, before diffraction causes you to lose information. Of course, bigger / more numerous lenses & sensors would alleviate this, but then that's probably not going to be stealthy. So, I don't see how improved technology (a sudden breakthrough in how light works notwithstanding) will fix this.
Please, could anybody in optics weigh in on this, perhaps with some math? I'm very much curious what the upper limit would be for a typical piece of 3" glass.
Diffraction limits are most visible for small apertures (high f-numbers). For surveillance cameras, I would expect* sensor size to be the bigger issue, because as long as sensor size is fixed, signal to noise will be limited by photon noise (the noise created by the randomness of photons themselves), no matter how good the sensor is.
Realistically, if this is a goal, then manufacturers will just increase the sensor size until the goal is met. This isn't a cell phone, so the sensor size constraints are pretty forgiving for this platform.
* Note: I am no optics expert, this is just what I've picked up on my own.
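That said, the standard diffraction arithmetic for the 3" glass question is easy to sketch. Here's a back-of-envelope using the Rayleigh criterion; the 550 nm green light and 10 m subject distance are just illustrative assumptions, not numbers from the paper:

    # Back-of-envelope diffraction limit for an ideal lens (Rayleigh criterion).
    # Assumed illustrative numbers: 550 nm green light, a 3" (76.2 mm) aperture,
    # and a subject 10 m away.

    wavelength = 550e-9   # meters
    aperture = 0.0762     # meters, "a typical piece of 3-inch glass"
    distance = 10.0       # meters to the subject

    theta = 1.22 * wavelength / aperture   # smallest resolvable angle, radians
    feature = theta * distance             # smallest resolvable detail, meters

    print(f"angular resolution: {theta:.2e} rad")
    print(f"resolvable detail at {distance:.0f} m: {feature * 1e3:.2f} mm")

That works out to roughly 0.09 mm at 10 m for a perfect 3-inch lens, which sounds good until you remember the reflected scene on a cornea is itself only on the order of a millimetre across.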
It's incredibly tiny. As I understand it they etch individual pixels onto it, each of which can only receive light from a specific angle. As you make it larger and larger you can absorb more and more light at higher and higher resolutions, without dealing with lenses.
Although even lenses can do insane things like magnify individual cells to be human visible, or take pictures of distant galaxies. I don't know why everyone is so skeptical of merely photographing someone's eye.
Holograms are recorded right down to the spatial resolution of light: the interference patterns between light-as-a-wave arriving from two different angles are what gets laid down. So you could do a lot better spatially than what we do today.
Typical cell spacing is between 3 and 10 micrometers right now IIRC, and red light is around 650 nm, so about a factor of 5 to 15 or thereabouts.
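For the record, the ratio works out like this:

    # Quick ratio check of the numbers above: cell pitch vs. red-light wavelength.
    for pitch_um in (3.0, 10.0):
        print(f"{pitch_um:.0f} um pitch / 0.65 um red light = {pitch_um / 0.65:.1f}x")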
The biggest enemy is not incident light quantity but thermal noise; as cell size shrinks, that becomes a real problem (you can alleviate it with cooling).
> It seems there would be a point at which you reach maximum information density with camera sensors. Not due to limitations of technology, but the nature of light itself and the optics.
That is true--with a single camera.
At some point, we will have so many cameras pointed at a subject that you can exceed the limits of any individual camera's optics.
You can also try to integrate over time. Complex? Definitely. Not an exact science, since you have to estimate the motion of both the eyeball and the subject you're trying to identify? Yes (in most cases). Feasible? In some cases: probably.
And that, too, will get better with better hardware. At first, you may only be able to detect hair color and rough hair length in cases where the person being identified has a nice silhouette against an evenly lit sky, but that already is something.
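To make the integrate-over-time idea concrete, here's a toy numpy sketch. It assumes the frames are already perfectly motion-compensated (which, as said above, is the hard part) and just shows residual noise falling roughly as 1/sqrt(N):

    # Toy demonstration of temporal integration: averaging N aligned noisy
    # frames drives residual noise down roughly as 1/sqrt(N). Perfect motion
    # compensation is assumed -- in reality that is the hard part.

    import numpy as np

    rng = np.random.default_rng(0)
    truth = rng.uniform(size=(64, 64))   # stand-in "scene"

    def noisy_frame(sigma=0.5):
        return truth + rng.normal(0.0, sigma, size=truth.shape)

    for n in (1, 4, 16, 64):
        stack = np.mean([noisy_frame() for _ in range(n)], axis=0)
        rmse = np.sqrt(np.mean((stack - truth) ** 2))
        print(f"{n:3d} frames -> residual noise {rmse:.3f}")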
There is also the diffraction limit[0] which, until materials with negative refractive indices for visible light are developed, imposes a maximum on the available resolution. Of course, that's for a single capture source... Photosynth + Blade Runner will likely get us a bit further.
The discussion about physical limits and technological difficulties is interesting, but not ultimately relevant for the surveillance question. Even if this technique becomes feasible for use in the real world, it'll still be more expensive than just putting up a second camera pointing the other way.
And there is "super-resolution" which combines multiple low-res pictures (e.g. frames of a video) into a single high resolution image: http://vimeo.com/6608238
Intersperse those on that wall with tiny pods of directional LEDs. Think "lenticular 3D" but with hemispheres instead of half-cylinders.
Remotely connect two of these walls. Result is a virtual 2-way window, with full natural 3D viewing without headgear. Could revolutionize teleconferencing: participant(s) are "just" on the other side of a glass wall...with the "other side" any distance away. Bandwidth required may be staggering relative to current video, yet is currently manageable.
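A rough back-of-envelope on that bandwidth claim, with every number an assumption of mine (a 2 m x 1 m wall, one viewpoint per 5 cm pod, each a compressed 1080p30 stream at ~5 Mbit/s):

    # Back-of-envelope bandwidth for the camera/LED wall. All numbers are
    # illustrative assumptions, not a spec.

    pods = (200 // 5) * (100 // 5)          # 40 x 20 = 800 viewpoints
    mbit_per_view = 5                       # assumed compressed 1080p30 stream
    total_gbit = pods * mbit_per_view / 1000
    print(f"{pods} viewpoints -> ~{total_gbit:.0f} Gbit/s")

A few Gbit/s is huge next to a single video call, but it fits on one 10 GbE link, which is what I mean by "currently manageable."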
The point is that the information is clearly there, and with current technology it is possible to extract it under ideal circumstances. That's not to say it's feasible today or ever will be, but it's not hard to imagine sensors becoming advanced enough to capture the required light without using a special lens or artificially illuminating the bystanders.
Wrong: current image sensors already have around 50% quantum efficiency. [1] That's one f-stop from the theoretical maximum, while this experiment is pushing around 10 f-stops more light than a top-of-the-line mobile phone cam or security cam will gather.
The pace of technology is still limited by physics: if they take out the 2 kW monster flash, then the lens diameter needs to grow to several meters just to maintain the same performance at a distance of 1 meter (!).
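The arithmetic behind "several meters", for anyone checking: each f-stop is a factor of two in light, and gathered light scales with aperture area, so diameter grows by sqrt(2) per stop. The 10 cm baseline lens diameter is my own assumption for illustration:

    import math

    # Each f-stop is a 2x factor in light; gathered light scales with aperture
    # area, so diameter scales with sqrt(2) per stop.

    stops_short = 10                              # the ~10-stop gap quoted above
    baseline_diameter = 0.10                      # meters, assumed lens diameter
    light_factor = 2 ** stops_short               # 1024x more light needed
    diameter = baseline_diameter * math.sqrt(light_factor)
    print(f"{light_factor}x light -> {diameter:.1f} m diameter lens")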
You're missing the point. This is not a paper on image acquisition, it is a paper on image processing. Because they began from scratch, they used very favorable image conditions. They are not pushing the idea that their capture format is representative of an application.
We have no idea how hard it would be to recover an image using only 5-50% of the light; no one has written that paper. The same goes for resolution. The paper doesn't claim to address those questions, so it seems silly to critique it for not doing so.
You can just gather more light from a larger area, then. We have microscopes that can see individual cells and telescopes that can see distant galaxies. There is nothing impossible about zooming in on someone's eye.
The resolution and light gathering requirements should be pretty easy to analyze on paper to answer the question "will this ever be usable in practice"? The fact that this isn't done could suggest that the answer really is no, so the authors omitted it.
I'm sceptical that it will ever be even close to usable, assuming sensor tech won't improve much and we don't reach some breakthrough in optics, i.e. bending light without huge, heavy chunks of glass.
This experiment with huge flash guns and short distances is essentially like identifying the moon in an eye reflection and dreaming of doing deep space imaging the same way. The physics just doesn't allow it.
This is a proof-of-concept. Sure, the conditions are ideal, but this is the first time anyone has made this work at all. Of course they're not going to start with a $20 digicam and a dimly-lit room.
You'll notice that scientific merit doesn't show up in there. The official policy is that if the methods and analysis are sound, then it will be accepted regardless of how irrelevant the actual study is. This is well known in academia and has resulted in a generally negative opinion of the journal among publishing researchers.
edit: I'd also like to point out that this isn't the case for all PLOS publications.
Yeah, I touched a sensitive nerve. It seems "the future" is interpreted by some as "Moore's law applied to everything," with gross disregard for physics: no thermal, optical, or energy limits, etc.
The year 2000 has passed, and we're all still waiting for our flying cars.
The reason we don't have flying cars is economics, not physics.
Also, people tend to totally overestimate the limits of possibility. We haven't explored a lot of things that are possible at our current level of technology (again, mostly because of economics). Moreover, our image-processing algorithms are very crude; we're nowhere near using the information encoded in images as efficiently as a theoretical Bayesian superintelligence would. A lot of things thought impossible become possible when you start throwing more and more compute at them. You can't break the laws of physics, but those laws are quite lenient.
As annoying as that is, I'm still comforted by the reminder of the (plausible) Blade Runner scene and the thought of how much I like that film - and its never-ending relevance.
I had always thought it was some sort of technology that could figure out what was there from how the light reflecting off the hidden object bounced off the visible objects in the scanned image.
I still don't know what happens in that scene, which is strange for a movie with such heavy-handed exposition, even in the version without narration.
GET IT - SHE'S A REPLICANT! SO IS HE AND HER AND HIM! OH, THE OWL IS FAKE. DID I MENTION THE OWL YET? IT'S FAKE!
Yet a weird six-minute sequence with a photo editor just goes unexplained. He magically sees something we don't, using technology that isn't remotely explained.
I always thought the point was that he spots a ... tattoo? Of a snake? That reminds him of the club? Or the dancer... I'm going to have to watch it again aren't I?
Yes, in the three-minute scene there were multiple indications in the picture: Chinatown (the food, the written signs), the girl's outfit (a dancer's), and an identifying tattoo, which together would point to only a few dance clubs in the city.
There was also the connection to the artificial snake scale found in the bathtub, which happened to be rare and made by only a few possible engineers. Engineers who work on robotics. One Chinatown engineer happened to be employed by the people he was looking for, who were robots in need of an engineer...
Not a far fetched connection by Hollywood's standards.
I get the point, but it's not obvious how he sees things in that photo by what looks like peering behind a door or around a corner. I guess he does some kind of perspective shift that shouldn't be possible with a plain-jane 2D photo. Maybe the idea is that futuristic cameras capture multiple angles and his tool lets him move between them?
Tell HN: When submitting scientific research, it would be very helpful to consider that some people do not readily understand the abstract or the importance of the findings. A brief description in layman's terms would be appreciated.
I don't know why this was downvoted. For submissions like this, it seems relevant that people are saying they'd appreciate an ELI5-style summary; that's not a hard request to justify, and I can't see how it would be a problem if someone provided one.
I wonder if this concept (corneal-reflection photography) could be used for an eye-tracking interface for computers. Could you match the segment of the corneal reflection above the pupil with what's on the screen to figure out where, and at what, the user is looking?
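Commercial eye trackers already do something close to this, though they usually track the vector from the pupil centre to a known infrared glint rather than matching screen content inside the reflection. Here's a minimal sketch of the calibration/mapping step; all the numbers are made up, and detecting the pupil and glint is assumed to happen elsewhere:

    # Sketch of the calibration step in pupil-centre / corneal-reflection (PCCR)
    # eye tracking. A detector (not shown) is assumed to give the vector from
    # pupil centre to corneal glint per frame; we fit a polynomial map from that
    # vector to screen coordinates using points the user fixated during
    # calibration. All data below is fabricated for illustration.

    import numpy as np

    def design_matrix(v):
        """Quadratic polynomial features of the pupil-to-glint vector (x, y)."""
        x, y = v[:, 0], v[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    # Nine calibration fixations: measured vectors (pixels) and the known
    # on-screen targets (pixels) the user looked at.
    glint_vectors = np.array([[-20, -12], [0, -12], [20, -12],
                              [-20,   0], [0,   0], [20,   0],
                              [-20,  12], [0,  12], [20,  12]], dtype=float)
    screen_points = np.array([[100, 100], [960, 100], [1820, 100],
                              [100, 540], [960, 540], [1820, 540],
                              [100, 980], [960, 980], [1820, 980]], dtype=float)

    A = design_matrix(glint_vectors)
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)   # 6x2 mapping

    def gaze_point(vector):
        """Map one pupil-to-glint vector to an estimated on-screen point."""
        return design_matrix(np.atleast_2d(vector)) @ coeffs

    print(gaze_point([10.0, 6.0]))   # estimated (x, y) on the screen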
It's good to understand what "resolution" actually means when considering these types of problems. Suppose you have 2 dots close together. The ability of your camera to "see" (resolve) 2 dots as opposed to a single blob is a function of the sensor and the lens.
All the sensor resolution in the world is not going to let you resolve something if the lens has already blurred the image.
While sensors have become relatively cheap, optics have not. I think physics alone makes it impossible to have a camera-phone lens that resolves anywhere near what these XX-megapixel image sensors could theoretically resolve.
Good news: at least we can still trust that normal surveillance cameras won't have the kind of resolution to perform this feat.
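A quick sanity check on that claim (f/2.0 and a 1.0 µm pixel pitch are my assumptions, roughly phone-camera territory):

    # Airy disk diameter (~2.44 * wavelength * f-number) vs. sensor pixel pitch.
    # f/2.0 and 1.0 um pitch are assumed, ballpark phone-camera numbers.

    wavelength_um = 0.55        # green light
    f_number = 2.0              # assumed phone lens
    pixel_pitch_um = 1.0        # assumed small-sensor pixel

    airy_um = 2.44 * wavelength_um * f_number
    print(f"Airy disk ~{airy_um:.2f} um across "
          f"-> diffraction blur spans ~{airy_um / pixel_pitch_um:.1f} pixels")

If the diffraction blur already spans two to three pixels, piling on more megapixels mostly just oversamples the blur.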