Reflected hidden faces in photographs revealed in pupil (kurzweilai.net)
138 points by ca98am79 on Dec 27, 2013 | 112 comments



>In addition, “in accordance with Hendy’s Law (a derivative of Moore’s Law), pixel count per dollar for digital cameras has been doubling approximately every twelve months. This trajectory implies that mobile phones could soon carry >39 megapixel cameras routinely.”

Ah, Moore's Law. It explains everything. Double every twelve months, eh? Hmm, quick math: Facebook has been around for 9 years... oh yeah, that explains why Facebook lets you upload full-resolution 60MP photos. /s

>What do your Instagram and Facebook photos reveal?

Nothing. They're about 100 times too small for this technique. And they probably always will be.
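
For the curious, that factor is easy to sanity-check. A throwaway sketch; the ~612x612 px size Instagram served at the time and the study's 39 MP camera are the assumed inputs:

    # Rough check of the "100 times too small" claim. Assumes Instagram
    # served ~612x612 px images in 2013 and the study used a 39 MP camera.
    study_pixels = 39e6
    shared_pixels = 612 * 612                        # ~0.37 MP
    print(f"factor: {study_pixels / shared_pixels:.0f}x")   # ~104x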


One thing I think always gets lost in discussions of increasing digicam pixel counts is that past a certain point you gain no real useful information from those pixels, because you've crossed the threshold at which vibration prevents resolving detail at that level at all.

Sure, more pixels usually mean slightly more information, and there are obviously techniques (stability enhancement, faster shutter speeds, etc.) that shift this critical point. But for the average person carrying a camera, we crossed the critical point long ago and now just waste a lot of SSD space.


Vibration is not as significant an issue as diffraction. Modern 18 MP APS-C sensors already hit their diffraction limit at around f/11. Increased sensor density will push this down to 'reasonable' f-stops, with f/8 being the real cut-off point.


Could you please explain what diffraction is in simple terms, or point me to a source where it's explained? I have a rough grasp of what diffraction is, but I don't understand the role it plays here.


The smaller the aperture, the more the incoming light rays are diffracted. Once the 'circle of confusion', or Airy disc, exceeds the size of a single pixel on the sensor, you begin to lose lens sharpness. Imagine two strips of paper, one green, one blue, directly adjacent, and photons from either side of the adjacent edge passing through a lens that slightly diffracts them.

This results in an obvious merging of the blue/green strips: the more the diffraction, the more they merge, and the less sharp the image becomes.

Current 18-megapixel APS-C sensors have pixels small enough that an aperture of around f/11 is the minimum size before this effect begins. As sensors get denser and denser, the required aperture gets larger and larger. f/8 is considered a 'normal' aperture, and once sensors hit that density the payoff from increased resolution is significantly smaller and can adversely impact image quality.

This is why science missions use 2 MP CCDs whose characteristics they know, rather than some 40-megapixel phone sensor.

edit: I've just realised I've typed all this out with the wrong idea. You wanted a simple explanation of diffraction. Imagine a water tank with waves being generated from a source at one end. Put a wall with a narrow hole in the middle of the tank. If the wavelength of the waves is significantly less than the width of the hole, they will for the most part pass through unaltered. As the hole gets smaller, more significant changes to the waves occur. They begin to 'spread out' or 'diffract'. An intuitive explanation is difficult but this is a practical one that makes sense to people.
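
To put rough numbers on the f/11 claim, here's a back-of-the-envelope sketch. The sensor dimensions, the green-light wavelength, and the "Airy disc vs. pixel pitch" comparison are all my assumptions; real cutoffs vary with the demosaic and the sharpness standard you apply:

    # Compare the Airy disc diameter (first minimum: 2.44 * lambda * N)
    # against the pixel pitch of a typical 18 MP APS-C sensor.
    WAVELENGTH_UM = 0.55                   # green light, microns
    SENSOR_WIDTH_MM = 22.3                 # typical APS-C width
    PIXELS_ACROSS = 5184                   # 5184 x 3456 ~= 18 MP

    pixel_pitch_um = SENSOR_WIDTH_MM * 1000 / PIXELS_ACROSS   # ~4.3 um

    for f_number in (4, 5.6, 8, 11, 16):
        airy_um = 2.44 * WAVELENGTH_UM * f_number
        print(f"f/{f_number}: Airy disc {airy_um:.1f} um, "
              f"{airy_um / pixel_pitch_um:.1f} pixel widths")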


Science missions use 2 MP CCDs mostly because of the sensitivity. The larger the individual sensor element ("pixel"), the more photons it can absorb, and thus the more electrical potential it can generate relative to the noise level.

There are also technological barriers to creating large-megapixel CCDs, as opposed to CMOS. Large CCDs are usually created by "stitching" multiple sensors together, which increases their price as the square of the size.

You can shoot a full-frame (35mm) camera at ISO 25600 and get decent results, but if you crank signal amplification on an iPhone up to 25600 you'll get nothing but random colored dots :-)


> Science missions use 2mp CCDs mostly because of the sensitivity

It would be foolish to suggest there was a single motivator for the choice of a sensor. In reality it's a combination of our answers: a large sensor allows lower gain and better noise performance, enables smaller apertures at acceptable sharpness, takes fewer resources to process and transmit, etc.

You're not wrong in mentioning it though. Although FF at 25600 is going to be piss poor even if it's Nikon / Sony.

edit: corrected stupid misspelling


You've got very high standards in image quality :-)

Nikon D600 at 25600: http://www.imaging-resource.com/PRODS/nikon-d600/nikon-d600G...

Not even close to "piss poor" in my book.


That's about the noise level and sharpness I have on my EOS 40D with ISO 1600. I think I'm going to cry in a corner a little.

That being said, you don't go that high unless you're very desperate, and the sensor itself only goes up to 6400; the rest is software.


Sample image if needed: http://fc07.deviantart.net/fs71/f/2010/032/f/e/City_Lights_b...

That was f/32. The rings around the street lamps are diffraction artifacts.


Thank you for both explanations, that was very informative.


Simple layman's explanation (from a layman): lenses spread light out a bit as the light passes through, so the red wavelengths hit the sensor in a slightly different spot than the blue wavelengths. You need more expensive lenses to reduce this effect, but you're not going to get those in consumer-grade mass-market 1/8"-across lenses.

As a result, higher pixel densities will cause light from a single point to be spread across multiple pixels, with different wavelengths going to different pixels. Software can probably reconstruct the image somewhat, but I'm sure you'll hit a limit where there is too much interference to get the image any sharper no matter how many pixels you squeeze in.


> Simple layman's explanation (from a layman): lenses spread light out a bit as the light passes through, so the red wavelengths hit the sensor in a slightly different spot than the blue wavelengths.

What you've described is chromatic aberration.

Diffraction refers to the spreading of waves around an obstacle -- such as the aperture blades in your camera. No lens need be involved -- you can get diffraction with a pinhole camera.



There is no simple explanation; or rather, the simple explanation is that your photos will be slightly blurry, from interference patterns due to the mysterious world of quantum mechanics. If you want a better explanation, ask a photon to make up its mind about whether it's a wave or a particle.


It's the photon shot noise.

The higher your resolution compared to your aperture, the fewer photons you get per pixel.

Sensor size doesn't matter at all.


More advanced materials with lower noise characteristics can theoretically produce perfectly reasonable images from single photon impacts.

Diffraction cannot be solved as easily.


Photon shot noise (http://en.wikipedia.org/wiki/Shot_noise#Optics) is as fundamental as diffraction - it's a property of light, not a property of the sensor.

If your sensor counts a mean of 100 photons per pixel, then you'll see shot noise with standard deviation of 10 photons in each of those, for a signal-to-noise ratio of 10. If you quadruple your pixel size and now measure 400 photons per pixel, then your SNR goes up to 20.

This is why bigger sensors (more captured photons for same light level and exposure time) are fundamentally better at image capture.
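
The square-root relationship described above is easy to tabulate; a minimal sketch, nothing more:

    import math

    # Poisson shot noise: std dev = sqrt(mean), so SNR = mean / sqrt(mean)
    # = sqrt(mean). Quadrupling the photons per pixel doubles the SNR.
    for photons in (100, 400, 1600):
        print(f"{photons} photons/pixel -> SNR {math.sqrt(photons):.0f}")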


Ok, this is getting pretty convoluted. I didn't mean to imply that photon noise was not a real thing, just that diffraction is a more direct concern. Photon noise can be amortised by a number of different processes which aren't applicable to the case I was trying to illustrate. You can't amortise diffraction errors with longer exposure or better photon capture. Pixel size is a primary concern regardless of its noise characteristics.


Think about shooting a shotgun at a target. If you have a thousand pellets, the distribution of pellets at the target will be quite smooth. Now shoot through a narrow aperture that takes out 99% of the pellets, leaving you ten. That distribution cannot be smooth. The sampling frequency (pixel size) at the target does not matter much; it is the aperture size that does.

I realize that is a very flawed analogy but it's the quickest I could come up with.

Here's more: http://en.wikipedia.org/wiki/Shot_noise#Optics


To get noise that low you not only need advanced materials, you also have to cool the sensor considerably. To register single photon impacts you'd need to cool down to 0 K, I guess :-)


Not at all; just look at the differences between the current sensors on the market. There's a wide range of performance due to engineering restrictions. Only the pixel size and aperture size are really relevant to diffraction, so it occupies a different class of problems.


Even if you had 100% efficient sensors, photon shot noise would still be an issue. You are sampling the photons emitted by the target. Once you have reasonably efficient sensors, it's a baseline you can't go under.


I'm not sure about that minimum, but I do know noise (dark current signal non-uniformity for those in the know) increases considerably with temperature. So yes it matters.


FWIW, the rods in your own eye's retina are able to register single photon impacts.


This doesn't put a limit on the resolution, though. It just means you need better optics, or larger sensors. The problem is that these cameras generally have inferior optics with a resolution much worse than what the pixel size would indicate.


You're mistaken; the optics are irrelevant. It is the existence of an aperture through which the rays must pass that causes diffraction. You are correct that resolution does not matter so much as density, but practical density limits have already been hit and sensors are still quite large.


Well, with a bit more exposure and a motion sensor, can't you get rid of the vibration effect to some extent? I have a camera with a 24x zoom that can take relatively clear pics despite my hands shaking like crazy (when zoomed in 24x, at least).


> Ah Moore's Law. It explains everything.

He never said that Moore's Law explains everything. He simply said that Hendy's Law explains why phones will soon have 39-megapixel cameras. Since the Nokia Lumia 1020 is available today with a 41-megapixel sensor, I do not find this far-fetched.

Facebook and Instagram will depend not on Moore's Law, but on availability of bandwidth and cost of storage. To argue against Moore's Law for Facebook is to argue against a strawman. The article did not make that connection -- you did.

In 2005, would you have predicted that Youtube would never carry 1080p videos? I mean, back then, Blu-ray and HD-DVD were still battling it out for dominance of the next-generation disc market.


>He never said that Moore's Law explains everything.

I was referring to the fact that this is on kurzweilai.net, and Kurzweil tends to think it does explain everything: how we're going to beat the Turing test in 16 years, how he's going to resurrect his dead father, etc.

From the subtitle to the last paragraph, the article clearly implies that this technique will soon be applied to social networking sites. And it seems to use Moore's law to handwave the details.


> And they probably always will be.

Famous last words when predicting non-advance of technology.


Yup, you're right. I'm lacking imagination.

Please help me understand a future where social networking sites will maintain and display ~6200 x 6200 px photos of your selfies. Outside of genetically engineering new eyeballs, I'm struggling here.


I was at an Apple store yesterday, just behind a teen who was puzzled why her phone was out of storage space. It turned out she had 9 gigabytes of texts stored, dominated by full-resolution pictures. So yeah, as we move to 10-40 MP imagers on phones, people are casually and regularly sending around enough data to start including a lot more visual information than they expect. Social media sites may take a while to catch up, but "texting" is already hitting that resolution.


FB doesn't need to upgrade to extra high-resolution in the news feeds -- a waste of bandwidth and time. They just need to make original uploads available via the API.

I think it's safe to assume that FB has tried to keep the original version of every photo ever uploaded. A few years ago, YouTube retroactively upgraded uploaded videos to HD. It's also safe to assume that they're going to continue upgrading/evolving/extending their most important feature, photos, and try to make it the #1 place to organize, store, and share all your photos.

Facebook then becomes an API upgrade away from exposing the original versions of uploaded photos. The photos will also come with all of your friends' faces conveniently tagged, allowing an app with the right permissions to extract high-resolution faces/pupils/reflections. None of this needs to happen in the news stream.


I'm not sure, but I think FB mobile does some sort of downgrade before the photo is uploaded to the server. I've had pictures upload to Facebook from my phone in a very short period of time, shorter than would be required for a full res upload.

So they may not have full resolution versions from the mobile, which is likely their primary source of pictures.


Yes, I think this is correct. They're still prioritizing performance first right now. But when the time is right, they'll probably upgrade that capability as well. Either pushing the full resolution file up, or syncing the full resolution later when you're on wifi.

For a while, the Java upload tool was knocking down resolution before upload on desktop, too. That was a few years ago, I think. 2008-09?


Ubiquitous fiber networks and a single order of magnitude increase in the capacity of storage media would make it "free." As for why: bigger is better?

I'm not saying you're wrong. But look at it this way: you are predicting that the resolution of stored photos won't increase by another power of ten. If you are right, this would be the first time in tech history that such a prediction would be correct.


While I am skeptical that every technology (storage capacity, storage speeds, network speeds, consumer devices) progresses in lockstep and indefinitely, I'm actually objecting more to "bigger is better." It seems that isn't necessarily true, especially when it's a photo of your face.

Also, just a nitpick: I think the relevant factor is ~100, the difference between the resolution currently maintained by most websites and the one proposed for this technique to start becoming feasible.

[Ok, 20 may be right. I didn't know they allowed 2000x1000]


Facebook will store 2000x1???, and the technique proposed here requires 6000x6???. That's a factor of about 20, not 100.

Personally, I will enjoy seeing high res pictures of other people on my 4k monitor when I get one.

Again, I'm not saying I'm 100% sure that you're wrong, but you're making a bold and unprecedented prediction.


Wasn't all that long ago that 320x200 images were considered astoundingly good and sufficient. The "zoom image" scene in Blade Runner was wishful thinking; now we're discussing its imminent reality.


It also wasn't long ago that we thought we would have flying cars and supersonic airliners. Physics is a cruel master.


I thought this was a discussion about Facebook serving higher resolution digital photos - why is physics relevant here?


I think the point is, if smartphone cameras eventually get to say 40MP then people will start uploading at that resolution even if there's absolutely no benefit from it. The image may then be scaled down to be shown on the page, but FB can keep the original.

It would be cool to make group pictures, though.


Isn't one of Nokia/Microsoft's phones, currently on the market, at 41 megapixels?


It is, and the PureView 808 (Symbian) had that before it. But even though the full-resolution shot is saved for post-processing, you usually get a 5 or 8 MP shot as your actual final image.


Just a few days ago, somebody posted a link on HN for a 4K 24" monitor. Right now, I'm typing on a computer hooked up to two monitors with 2560x1440 resolution.

10 years ago, I never would've guessed we'd be up to 120" TVs with 4k resolution. I was pretty happy with my 42", which was a big step up from what my parents had used before.

Right now, those examples I mentioned are new and expensive, but it won't be another 10 years before they come down in price and Sony and Samsung will be looking to sell us on another resolution.

Meanwhile, having computers hooked up to your TV is also becoming increasingly popular. My dad hates looking at a tiny computer screen, and would much rather look at photos on a TV.

So you've got high-resolution TVs and low-resolution photos. That means the photos are either displayed too small or blown up enough to show their pixelation. That won't fly forever.


As a hobbyist photographer, I can tell you the problem with this will be optics. The discussion here (and the article itself) seems to treat sensor pixel density as the primary limiting factor on resolution at this point - and it's not. [1] is an article with a good and thorough discussion of sensors out-resolving lenses.

So, your limiting factor is glass, and manufacturing precision on that end is moving forward much slower than on the sensor.

[1] - http://www.luminous-landscape.com/tutorials/resolution.shtml


> it won't be another 10 years before they come down in price

I wager we'll see 4k tablets under $1k within 2 years, and TVs will have to follow suit (drop in price, increase in quality).


Did you ever click on a photo in facebook and wish you could zoom in and see a larger size (or even the original) in order to see more detail of a specific part? I sure have. The future where they maintain huge images is the future where enough people want to be able to do that and the cost of supporting it is small enough. It really isn't too hard to imagine.


You're also assuming the field of view (FOV) stays the same - which it won't. Think about if every picture you took were a panorama - I don't think 18000 x 2135 px (to keep the same number of pixels) is an unreasonable resolution for a 180° panorama.

Now imagine that lenses for single-shot panoramas are standard on all cameras and cellphones.


Haha, yes, your lack of imagination in this regard is tempting to jump all over, but I point you at exhibit A: the past.

I used to think my 50GB disk was massive. Now I have more RAM than that, and use it all!

Photos will increase in size because of many factors, possibly including multiple focus points, or more color information, or 3d or a whole suite of yet to be thought up reasons.

I am very sure social networking sites will store files larger than 6.2k x 6.2k, what with 4K monitors and video, and cameras producing larger and larger images (think medium format; even a piddly little smartphone now has 41MP!)


http://www.kickstarter.com/projects/95342015/360o-for-gopro-...

This saves a large picture with spatial info; on your computer you then view a smaller part of the image. 6200x6200 images are not too much.

Add everything that has to do with 3D. Life is 3D, not 2D.


Higher DPI displays for life-size photos?


Okay, I'll do the math. If we look at a chart, we find that most human heads fit within 6.5 inches wide and 9.4 inches tall [1]. That fits nicely within the common 8x10 inch standard photograph size, so assuming we display these images at the maximum resolution human eyes can discern [2] (somewhere under 300 pixels per inch), displaying a human head life-sized at "visually flawless" resolution only requires an image of 2400x3000 pixels.

I can see the appeal of life-sized photos on social networking sites in the future. Group photos could benefit from being larger than that to include multiple people at life-size, but I'm unsure of how a resolution greater than 2400x3000 would be useful for a profile picture at this point.

[1] http://upload.wikimedia.org/wikipedia/commons/6/61/HeadAnthr...

[2] http://prometheus.med.utah.edu/~bwjones/2010/06/apple-retina...
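
The arithmetic above, spelled out as a sketch using the figures from [1] and [2]:

    # Life-size head at roughly the limit of human acuity (~300 ppi, per [2]).
    PRINT_W_IN, PRINT_H_IN = 8, 10   # standard print size that fits a head [1]
    PPI = 300
    w, h = PRINT_W_IN * PPI, PRINT_H_IN * PPI
    print(f"{w} x {h} px = {w * h / 1e6:.1f} MP")   # 2400 x 3000 = 7.2 MP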


Two heads side by side on a 30" 300 dpi monitor should be enough to drive need for this tech, then.


Another thought:

A few people are pointing out use cases where a 39MP photo might make sense down the line:

-Group photos

-Head to toe shots

-Panoramas

Let me just point out that this research was done on "high-resolution passport-style photographs" - that is, a close-up head shot with a normal FOV. If you fit your whole body, your friends, and everything else in the photo, you're going to need hundreds to thousands of megapixels for this to work. At the end of the day, it looks like you still need >39MP just for the face.
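
A rough sketch of that scaling; the fraction of the frame a face occupies in each shot type is purely my guess:

    # If ~39 MP suffices when the face fills the frame (passport-style),
    # the requirement grows with the inverse square of the face's share
    # of the frame width. The fractions here are illustrative guesses.
    FACE_FILLS_FRAME_MP = 39
    for shot, face_fraction in [("head shot", 1.0),
                                ("full body", 0.25),
                                ("group photo", 0.1)]:
        print(f"{shot}: ~{FACE_FILLS_FRAME_MP / face_fraction ** 2:.0f} MP")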


It isn't like there aren't photo sharing sites where you can upload the full res version, like Flickr. Flickr offers a free TB of space.


As someone who has taken a lot of pictures, the key to photography is not pixels, but light. And so I had to read the paper:

"The room was flash illuminated by two Bowens DX1000 lamps with dish reflectors, positioned side by side approximately 80 cm behind the camera, and directed upwards to exclude catch light"

Yes, that's the kind of lighting conditions that are routinely present at crime scenes.


Then there are what seems to be the standard-issue security cameras, where it's hard to even tell what the main subject is wearing.


Am I the only one who thought, "so what?", after reading this? A picture of sufficient detail taken under optimal conditions can yield reflected images off of the pupil. Big deal. It has always been the case that in a high resolution image you had more of a chance to see reflected images.

Not that this isn't interesting, but their hypothetical (hostage situation) would have been a much more intriguing test case. But, since those pictures/videos are rarely taken with super high resolution cameras with good lighting, it's unlikely this technique would be helpful.

Edit: Just to be clear, I think anything new in the form of research and development is generally good. Maybe this work will be built upon to create something great. I just don't like the implied promise that pieces like this make. It oversells what technology can actually do and is, I think, misleading.



Combine with content aware fill and you get http://www.youtube.com/watch?v=6i3NWKbBaaU


I always thought that CSI was having a laugh with scenes like this: http://www.youtube.com/watch?v=53ilsuJOTG8



Haha! "Uncrop" amazing! I should really watch Red Dwarf.


In a photograph pasted into a Word document and then cropped within it, you often can uncrop, because the crop is non-destructive.


Heh - good point! And then there's the "redacted" bits in government PDF that turned out to be nothing more than "Kennedy assassination carried out as requested by Agent F. Castro" followed by "rectangle(colour = black, position = last 9 characters of line)". All you need to do is edit the PDF and you get all the documents you could want. I love it when faith in technology fails in informative ways...


Did you watch the video? He uncropped a picture which had been printed as a cropped photo.


Not trying to legitimize what is obviously satire, but you could possibly uncrop a printed photo, if the original photo was online, and if it was indexed by Google. Reverse image search win.


This is probably the best/most relevant CSI example: http://www.youtube.com/watch?v=3uoM5kfZIQ0


The relevant mocking image - "number plate enhance" http://imgur.com/uhvWKS8


Well, there is a tiny bit of a difference in image detail between CCTV and the 39 MP camera used here.

So CSI still has no excuse.


Maybe the whole CSI franchise is secretly near-future science fiction?


Zoom... Enhance!


This work is not entirely new. Here is another recent article, where it is explained how input to a smartphone/tablet can be reconstructed by analysing reflections of the screen in various nearby surfaces, or the user's pupil.

http://dl.acm.org/citation.cfm?id=2516709


How cool. Careful, though, Neal Stephenson is probably lurking here collecting items like this for his next book.


In maybe 20 years or so this sort of shit is going to be scary.

Imagine what happens when we have several orders of magnitude more computing power available. A lot of things aren't possible today because they are too computationally intensive, but computing power is just getting cheaper and more abundant. Imagine what happens when it's feasible to process basically all data on the entire internet. Every photo and video. Every tweet, vine, and wall post. Every porn video and macro image.

It'll be possible to find the identity of anyone in any picture. Moreover, every picture or video that someone has been in will be easy to correlate. Now scale that up to every activity. Anyone with enough computing power will basically be able to create a dossier on anyone, or on everyone.

The implications are concerning.


Why 20 years?

I'd assume that most people have been facetagged in enough photos on Facebook that they can be found in any photo any random person uploads there, or to a few other cloud services, which any reasonably determined organization can scrape. People photograph a lot.

And then the CC cameras as well.

Though there are easier ways to spy on people, you can get all relevant info from their phone apps and social media services anyway.


20 years is just a SWAG for when computational resources will be so abundant that the scariest level of analysis will be routine. Today it's simply not practical to determine automatically everyone appearing in a given random photo or video. Facebook vastly simplifies the problem (by narrowing the search scope tremendously), but there's a lot of content elsewhere.

For example, consider identifying people by the way they walk, the clothes they tend to wear, their body build, etc. That sort of thing makes witness protection efforts practically useless, for example. The sheer amount and detail of information that could become available will enable things that we can't even remotely imagine today, but ultimately the biggest change will be loss of anonymity at every level and loss of privacy.


There are hard limits in terms of energy/thermal efficiency to computation. I don't think you will be able to process all the information on the internet - even assuming that people use ridiculously high resolution images regularly.


Consider that all of the video on the internet has been processed multiple times already, in order to encode it and decode it upon viewing. The hardware to do so is incredibly distributed, but nevertheless it has already been manufactured and put to use. The current YouTube infrastructure already encodes uploaded video at more than 6,000x real-time. With significantly more powerful computers, a similar investment in hardware could not just encode but also analyze and correlate video at thousands of times real-time. What becomes possible 10, 20, or 50 years into the future?

If you can analyze video at, say, 10,000x real-time you can crunch a billion hours of video in a decade. Imagine the value of the meta-data you could get from such an analysis. You would have incredibly fine-grained details on people's lives, especially as video use becomes more common. Being able to see hidden details in reflections is immaterial compared to what will be possible. With enough computational resources it'll be possible to create dossiers on everyone everywhere. You'd know where they live, who their friends are, what they do for fun, what they own, what they wear, what they read. Using writing analysis you could find out how politically active they are, what their opinions are, and so on. Given that abundant individual liberty is still a fairly rare, and constantly endangered, condition on Earth it's frightening to imagine tools that could give such information in the hands of unscrupulous folks.


Billion's a tricky number, because the old (long-scale) billion and the new (short-scale) billion are a few orders of magnitude apart. Going with the thousand-million definition and dividing by ten to get a year:

100,000,000 / (24 * 365) = 11,415

313 million people in the US. Computing clusters of similar capability required to observe them all:

313,000,000 / (100,000,000 / (24 * 365)) = 27,418
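
The same arithmetic as a sketch, using the parent's billion-hours-per-decade figure:

    # A billion hours of video per decade = 100 million hours per year.
    # One person filmed continuously is 24 * 365 = 8,760 hours/year.
    hours_per_year = 100_000_000
    hours_per_person = 24 * 365
    people_per_cluster = hours_per_year / hours_per_person    # ~11,415
    clusters_for_us = 313_000_000 / people_per_cluster        # ~27,418
    print(int(people_per_cluster), int(clusters_for_us))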

Granted, better or worse depending on how much you want to look at of their lives. If people only film important events is that better or worse for you? If they film important events how does that shift the balance of evidence from what the video contains as compared to the mere fact that they filmed the event?

I think... I fear that in many ways what you're talking about - video processing aside - is already here. If I put my evil hat on for a minute:

Okay, we want to work out what someone buys, where she lives, what she wears, who she knows, where she goes...

Where's that information already written down?

• If she went into a store her phone probably pinged the wifi, whether or not she connected:

http://www.tomsguide.com/us/Wi-Fi-tracking-Nordstrom-Cisco-R...

• ANPR systems are already widely deployed, so that's anywhere she went in a car:

http://www.projectcensored.org/police-tracking-license-plate...

• Credit card companies keep records of your purchases, obviously.

• There, probably, are going to be pictures of her on facebook.

• Facebook can be easily mined for relationship details - most people don't set their pages to hide their friends lists, despite the fact they should.

• Your IP address and browser fingerprint are trivially trackable.

https://panopticlick.eff.org/

For comparison you need 32.7 bits to pick someone out of the world population. Doesn't make you uniquely trackable of itself but puts you in a fairly small group.
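
Where that 32.7-bit figure comes from, as a one-liner:

    import math
    # Bits needed to single out one person among ~7 billion:
    print(math.log2(7e9))   # ~32.7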

... And the topic and language someone chooses to write about online... well, I agree but I don't think you need particularly powerful computers to do that - and given that I suspect there's more powerful evidence out there I'd expect that only to be deployed against individuals of interest.

#

I don't think the big threat to privacy is more powerful computers. Though they certainly don't help, I think the damage in that sense was done with the first ANPR system. I think the big threat is networked databases that already exist and take much less effort to extract relatively strong evidence out of.

I've also got some reservations as to whether what you're talking about is comparable in complexity to just encoding the video for upload. I agree that that is processing, but the data is already structured in a known way and the transformations performed on it are relatively simple. This question, to my mind, is what really makes Moore's law the important thing here. I know that YouTube does try to check for copyrighted content - but equally, they're not particularly good at it even there.

Unfortunately I can't really visualise exactly what you're talking about doing to the data - face recognition would presumably be part of it, as would voice-to-text, and then you'd want to check those against databases of stored signatures. Possibly you could make the task vastly less complex by using other data around it to limit your database queries, but still, that's not a simple thing. Against individuals of interest I could totally see it being done, but doing it to everyone seems very wasteful - I'm not sure how the small amount of evidence I think you'd get would justify the costs, unless those costs became very low.

If we're looking at something like a 20-order-of-magnitude increase in computing power over the next fifty years or so (and similar increases in access speed and so on), then all those concerns just sort of vanish into the sea of massive power. ^_^


I expect big things in the coming decades coming out of computer vision & image processing. I wouldn't be surprised if algorithms to 'zoom & enhance' images reach the levels we laugh about in movies today.

I'm not well read on the topic so I might be talking out of my ear, but this is the sort of thing I'm anticipating getting more and more powerful over time: http://users.soe.ucsc.edu/~milanfar/talks/milanfar.pdf (jump to page 23 if you want to see demos of the algorithm)

http://www.scriptol.com/programming/graphic-algorithms.php

edit: grammar


Noise and lack of resolution are different beasts. That paper has nothing to do with "zoom & enhance", which will never reach the level we see in movies, since the information is just not there.


Yet noise reduction and resolution enhancement are both fundamentally about taking whatever information you do have and using it to interpolate/infer information that has been lost (or was never captured).

Especially in certain domains, such as written letters (e.g., license plates) or faces, that have a well-known structure with variation along known dimensions, I wouldn't be surprised to see algorithms that can probabilistically infer what information would originally have been there. Of course you'd never reconstruct an entire face from a single pixel, but how far can we go? I dunno - but my hunch is pretty far.


And of course if we look at recent history we see where that technique goes terribly wrong (Xerox document archival compression).


That's not entirely true. Images are highly redundant and have a lot of structure to them. It might be possible to reconstruct a decent amount of the image from that. Obviously a lot of the information and all the small details would be missing, but you could make it more recognizable.


> I wouldn't be surprised if algorithms to 'zoom & enhance' images don't reach the levels we laugh about in movies today.

wouldn't or would ? don't or do ?


The cameras used have 39 megapixels. I wonder how well it works with Nokia's PureView cameras - or the lower quality lenses on iPhones & Androids?


> Although Jenkins did the study with a high-resolution (39 megapixels) Hasselblad camera,...

Medium format cameras have a much larger sensor area and probably better overall quality compared to a regular DSLR ($20,000 or even $50,000 for a medium format camera vs. $2,000). Even a quick Google seemed to indicate photographers prefer the larger sensor size even if it has fewer pixels.

The article mentions "face images retrieved from eye reflections need not be of high quality in order to be identifiable" but even so there must be a massive difference in picture quality compared to a Hasselblad medium format camera sensor.

Still interesting, though; it's as if the goofy CSI or Blade Runner "Enhance!" has come true.


And now I'm wondering whether you could create an app that detected faces reflected in images and scrambled them... though with eyes, that might destroy the way the original face looks in some creepy way. Of course you'd have to strip all the EXIF data as well, including any internal thumbnails (which would be of terrible quality anyway)...
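
For the EXIF-stripping half, a minimal sketch using Pillow (the face-detection/scrambling half is the hard part and is left out). Re-encoding only the pixel data into a fresh image drops the metadata blocks, including any embedded thumbnail:

    from PIL import Image

    def strip_metadata(src_path, dst_path):
        # Copy only the raw pixels into a new image; EXIF tags and any
        # embedded thumbnail are not carried over to the output file.
        original = Image.open(src_path)
        clean = Image.new(original.mode, original.size)
        clean.putdata(list(original.getdata()))
        clean.save(dst_path)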


> It would be interesting to see what hidden information is buried in law-enforcement (and other) photo archives — some of which could even help exculpate innocent persons.

Yeah, it would be interesting. But would you hold your breath for it? In an age where kids "shoot themselves" while being handcuffed and whatnot, I for one wouldn't.


Given that the Internet has millions or billions of photos on it, yet the authors chose not to do even a simple random sample, I'm guessing this is more about setting everything up exactly right to create the answer they want.

This is not how one normally does science.


EXIF-data is a much more realistic threat.

EDIT: Oh, and camera-characterising noise signatures.


Do you have any resources on fingerprinting cameras based on noise? Seems hard to do outside a controlled environment (as noise is so heavily influenced by environmental factors).


This reminds me of dual photography [1], in which you trade many projectors and one camera for one projector and many cameras using time reversal symmetry.

The video [2] gives a good explanation, and the revelation of the playing card is what strikes me as similar to revealing reflected hidden faces.

[1] http://graphics.stanford.edu/papers/dual_photography/

[2] https://www.youtube.com/watch?v=p5_tpq5ejFQ


The hard part would be identifying the person in the photo if you didn't already have a list of suspects. Facial recognition isn't that great, but perhaps we could make it only partially automated (i.e., the human points out where the nose is and such), then use some kind of Bayesian probability to narrow it down (20% of the population has this facial feature; calculate the probability distribution of different faces/facial features producing the image being analyzed; etc.)


Wouldn't it be interesting if we were able to capture so much information in a photograph that you could examine the pupils of the people that appear in pupils, etc.?


Shades of "Blade Runner".

And, I'm now waiting for someone to release a "Privacy Eye" Photoshop filter (a la the ubiquitous "red eye" filter).


Remember when they called it "exponential growth" instead of "a derivative of Moore’s Law"?


This is a long-time favorite effect of anime studios: http://tvtropes.org/pmwiki/pmwiki.php/Main/ReflectiveEyes


It's especially effective when the dreamy music is cued and the light colors get all blurry.


This is hardly new. I've been looking at the background of people's reflected pupils in posters (think of the big posters at hair salons and stores) for many years.


Wow! I had no idea this could be done. The implications of this technique for forensics work are intriguing.

Is this a new idea, or have people been working on this for a while?


The idea of examining a subject's pupil has been around for years. I've used a similar technique to reverse engineer lighting setups for portraits. See my friend Mike's portrait of one of his children:

http://www.flickr.com/photos/oatmeal2000/8545400550/sizes/o/

You can clearly see that he used one umbrella to the right of the camera. With higher-resolution images and better focus you can sometimes determine the brand of lighting used, but only if it has a very distinctive shape.


This 2004 paper got a lot of attention in the computer vision community doing essentially the same thing about a decade ago:

http://www.cs.columbia.edu/CAVE/projects/world_eye/

"The World in an Eye," K. Nishino and S.K. Nayar, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vol.I, pp.444-451, Jun, 2004.


There was a related paper on using the eyes as a means of capturing a full environment map which could be used directly for relighting a scene (e.g., for 3d rendered scenes or for compositing digital objects into scenes):

http://www.cs.columbia.edu/CAVE/projects/eyes_relight/


Well, it happens in many crime movies; it's just that our megapixels weren't enough for such uses. Now, with high-resolution images, it's relatively easier to do. I'm not surprised by it; it's just another use of the tech.


That picture of Obama is 18x25, not 16x20.


and the reflection is off the cornea, not the pupil, but who's paying attention to details...


This works on things like Christmas ornaments as well...




