Imaging Without Lenses (americanscientist.org)
154 points by Gatsky on Feb 5, 2018 | 25 comments



As I understand it, one of the biggest costs of optical telescopes is the mirror, which needs to be meticulously ground to a paraboloid. Ever since I was a child, I've wondered whether we could use a semi-cylindrical mirror (which is much easier to fabricate) to focus light onto a linear sensor array and achieve the remaining focus computationally. If that were possible, we might be able to build much larger, cheaper versions of the James Webb Space Telescope.

I discovered five years back that cylindrical reflectors aren't viable on purely optical grounds: https://physics.stackexchange.com/questions/29853/are-cylind...


Interestingly, vacuum-backed mylar has been used for light 'DIY parabolic mirror' duties. Here's one made out of a trashcan lid:

https://www.youtube.com/watch?v=_8sd9UgjXLE

I'm sure durability would be an issue, the vacuum would probably leak over time without some significant effort, and it probably wouldn't be very stable under environmental changes, but it's a cool lateral solution.


I can't quite visualize the problem. As long as you make the focal length of the first cylinder equal to the distance to the second cylinder + the distance from there to the sensor, and make the focal length of the second cylinder just the distance to the sensor, then both dimensions should converge at the right point?


I thought so too. But apparently that doesn't give you a _point focus_, rather a square focus, which doesn't work.


Ok, I get it. A circular mirror focuses all directions equally, while two cylindrical ones focus only exactly left-right and exactly up-down correctly, leaving every diagonal out of focus to varying extents.


I think what you describe is similar to the notion of a light-field/plenoptic camera: https://en.wikipedia.org/wiki/Light-field_camera


I'm curious whether you could DIY the mask used in, for example, the FlatCam they mention; it looks like a really interesting area.

Apparently for the mask they use "a custom-made chrome-on-quartz photomask that consists of a fused quartz plate". I was wondering if you could make a very poor equivalent by just printing onto acetate.

It vaguely reminds me of an old-fashioned technique such as:

https://en.wikipedia.org/wiki/Zone_plate
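
For what it's worth, the classic zone-plate pattern is easy to generate yourself. A minimal sketch (not the FlatCam mask; the wavelength, focal length, and plate size here are all placeholder choices) that writes out a binary Fresnel zone plate you could try printing onto acetate:

    import numpy as np
    from PIL import Image

    wavelength = 550e-9   # design wavelength (green light), metres
    focal_len = 2.0       # desired focal length, metres
    size_px = 2000        # output image width/height in pixels
    width_m = 0.02        # physical width of the printed mask, metres

    # Radial coordinate of every pixel, centred on the plate.
    half = width_m / 2
    coords = np.linspace(-half, half, size_px)
    xx, yy = np.meshgrid(coords, coords)
    r2 = xx**2 + yy**2

    # Zone boundaries sit at r_n = sqrt(n * lambda * f), so the zone
    # index is floor(r^2 / (lambda * f)); even zones are transparent,
    # odd zones opaque.
    zone = np.floor(r2 / (wavelength * focal_len)).astype(int)
    mask = np.where(zone % 2 == 0, 255, 0).astype(np.uint8)

    Image.fromarray(mask).save("zone_plate.png")

One caveat: the outermost zone width is roughly lambda*f/(2*r_max), about 55 microns with these numbers, which is around the limit of what a consumer printer can hold on acetate.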


You can actually just place a random diffuser directly on a sensor -- this is work from some colleagues at UC Berkeley which I (surprisingly) didn't see mentioned in this article https://www.medgadget.com/2018/01/diffusercam-lensless-3d-im...

DiffuserCam was previously discussed on HN but I couldn't find the link.


Cheers, I'll have to look more at that. It's neat the source is available too!


This article is the first instance I've heard of placing the mask directly on the sensor. I'd always had the impression that the coded aperture was substituted for the original lens aperture. And the images I've found online seem to indicate that the aperture is macro-scale, something that could be imaged on lith film or hard-dot film.

I'd really like to play around with these techniques, but I don't know where to start. The only deconvolver I could find used Mathematica, so I'm not sure how useful making a coded aperture would be if I couldn't see the results.


You can do deconvolution using ImageJ, but getting a model of the PSF for an arbitrary aperture is the tricky part. https://imagej.net/Deconvolution
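
If ImageJ isn't your thing, Wiener deconvolution is only a few lines of numpy. A minimal sketch, assuming `blurred` is your greyscale capture and `psf` a measured point spread function (both placeholder names), with the noise constant as a knob to tune:

    import numpy as np

    def wiener_deconvolve(blurred, psf, noise_power=1e-3):
        """Estimate the sharp image from a blurred capture and its PSF."""
        # Zero-pad the PSF to the image size and roll its centre to
        # (0, 0) so the convolution phases line up.
        psf_padded = np.zeros_like(blurred, dtype=float)
        ph, pw = psf.shape
        psf_padded[:ph, :pw] = psf / psf.sum()   # normalise to unit sum
        psf_padded = np.roll(psf_padded, (-(ph // 2), -(pw // 2)),
                             axis=(0, 1))

        H = np.fft.fft2(psf_padded)
        G = np.fft.fft2(blurred)
        # Wiener filter: H* G / (|H|^2 + k), with k standing in for the
        # noise-to-signal power ratio.
        F = np.conj(H) * G / (np.abs(H) ** 2 + noise_power)
        return np.real(np.fft.ifft2(F))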


Would it be as simple as imaging a point with the system and using that as the source for the PSF?


More or less; the FlatCam paper has a section on the calibration method (finding the PSF).
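
As a very rough sketch of what that calibration step might look like in practice (the file name, crop size, and border-based background estimate are all placeholder choices, and it assumes the spot isn't near the image edge):

    import numpy as np
    from PIL import Image

    raw = np.asarray(Image.open("point_source_capture.png").convert("L"),
                     dtype=float)

    # Estimate the background level from the image border and subtract it.
    border = np.concatenate([raw[0], raw[-1], raw[:, 0], raw[:, -1]])
    clean = np.clip(raw - np.median(border), 0, None)

    # Crop a window around the brightest spot and normalise to unit sum.
    cy, cx = np.unravel_index(np.argmax(clean), clean.shape)
    half = 64
    psf = clean[cy - half:cy + half, cx - half:cx + half]
    psf /= psf.sum()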


In this article, they talk about encoded blur.

http://web.media.mit.edu/~raskar/Mask/

About halfway down the page you can see that the out-of-focus point source looks identical to the coded aperture.

While not lensless photography, this seems like a good starting point for those interested in these techniques, especially if I can load the PSF into ImageJ.
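
For intuition, here's a tiny simulation of that effect: blur a single point source with a binary mask and the result is a copy of the mask. The pattern below is a random stand-in, not Raskar's optimised code:

    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(0)
    mask = rng.integers(0, 2, size=(13, 13)).astype(float)  # stand-in aperture
    mask /= mask.sum()

    scene = np.zeros((256, 256))
    scene[128, 128] = 1.0        # a single out-of-focus point source

    blurred = fftconvolve(scene, mask, mode="same")
    # `blurred` is now a copy of the mask centred on the point, which is
    # why the blur stays invertible instead of smearing detail away.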


The title makes it sound almost like we are just ditching lenses. It makes much more sense to say that we have managed to build much more sophisticated sensors.

The part about "computational imaging" sits awkwardly with me. In a very real sense, lenses are themselves computation systems: they are crafted to manipulate their inputs in a controlled way. It's just that we can now do more sophisticated computation in other parts of the systems we build.


Just yesterday someone was telling me about using diffraction for imaging. He also mentioned the work of Laura Waller, who can apparently get 3D images of a scene by putting translucent-but-not-transparent tape directly over a lens-free sensor. The idea seems to be that each point source of light produces its own caustic pattern.
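
If that's right, the sensor reading is just a weighted sum of per-source patterns, which makes recovery a linear inverse problem. A toy illustration, with random patterns standing in for real caustics:

    import numpy as np

    rng = np.random.default_rng(1)
    n_pixels, n_sources = 400, 10

    # One column per candidate source position/depth: its caustic pattern.
    A = rng.random((n_pixels, n_sources))

    # A scene with two lit sources and the resulting (noisy) sensor reading.
    x_true = np.zeros(n_sources)
    x_true[2], x_true[7] = 1.0, 0.5
    capture = A @ x_true + 0.01 * rng.standard_normal(n_pixels)

    # Recover the source intensities by least squares.
    x_est, *_ = np.linalg.lstsq(A, capture, rcond=None)
    print(np.round(x_est, 2))   # ~[0 0 1 0 0 0 0 0.5 0 0]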


A talk was given at the last CCC about free-electron lasers, and in it the speaker discusses using diffraction to image chemical reactions. It's a great watch.

https://media.ccc.de/v/34c3-8832-free_electron_lasers


Why does this sound like the real version of "Zoom. Enhance."


I used to dismiss those obvious Hollywood cliches, but the more I looked into the space professionally, the less silly they sound. Images-as-data are far richer than images-as-images, it turns out, and clever algorithms can extract a lot more information from them than you would naively expect.

So yeah, you're right!


They are silly, though. You can't create data out of thin air, and this article even notes multiple times that more complexity/detail requires more data to be ingested at shooting time.


Yes, though you can do better than is commonly claimed:

You can make use of known properties to improve the picture/extraction as you aren't taking pictures of random data.

You can improve resolution with multiple frames; having video gives you a lot more info.

You can extract information even at very low resolution in limited cases, number plates for example. A very limited set of possible inputs (one font, a known character set, limited combinations) means you can find the closest match more easily (see the sketch below).

Of course some TV things are silly, but sometimes they're also far worse than we can actually do.
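
Picking up the number-plate example: with a known font and a small candidate set, matching boils down to comparing the low-res crop against downsampled renderings of each candidate. A toy sketch, where `render` and `downsample` are hypothetical helpers you'd supply (draw a string in the plate font; shrink it to the crop's resolution):

    import numpy as np

    def best_match(lowres_crop, candidates, render, downsample):
        """Return the candidate whose downsampled rendering is closest."""
        best, best_err = None, np.inf
        for text in candidates:
            template = downsample(render(text))  # same resolution as crop
            err = np.sum((template - lowres_crop) ** 2)
            if err < best_err:
                best, best_err = text, err
        return best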


Any hints on where and how to start with this images-as-data idea?

I had the impression that images are data, hehe


Okay, enhance some more. Okay, zoom in on the license plate and enhance some more.

... and always wait for macho boss to come in and give those instructions because the hackers would never think to do that


An extremely interesting read, but a very long article; a 10-minute read at minimum.


I believe astronomers must have very special techniques for data visualization, since almost all their data nowadays is non-visual, yet they produce stunning images. Sure, some images are made just to be stunning, but they probably have techniques for gathering insight too. These could be used in other high-dimensionality scenarios.

Do you know some good references about astronomical data visualization?



