Lab Notebook: Single Pixel Camera (gperco.com)
167 points by mmastrac on Oct 20, 2014 | 45 comments



This vividly illustrates the evolutionary argument about how even a very primitive eye is a huge survival advantage.


Yes. Your brain can figure out how to make use of pretty much any kind of sensor. One big example I can think of is humans learning to use echolocation, similar to what bats do: http://en.wikipedia.org/wiki/Human_echolocation

Another interesting one is this bracelet which vibrates depending on which direction the wearer is facing. http://sensebridge.net/projects/northpaw/ Research showed that people developed a natural feel for the sense over time.

I bet there are other examples, and this may become an important area of human-technology evolution in the future!


Check out BrainPort and Orpyx


If NorthPaw was half the size (and didn't make me look like an escapee) I'd buy one in an instant.


The digital camera on the Viking Mars lander didn't use a 2d sensor either... it's amazing what some people can do with engineering constraints! http://en.wikipedia.org/wiki/Viking_program#Camera.2FImaging...


Interesting introduction to the Radon transform. Before it went that direction, I was feeling certain it would involve a long tube with the sensor at one end, and an accelerometer to kind of "paint" the scene by hand, or possibly servos to scan it. That is a fairly stereotypical old deep-space-probe way of doing things.
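
If you want to play with that reconstruction yourself, scikit-image ships the Radon transform and filtered back projection. A minimal sketch (assuming numpy and a recent scikit-image; the built-in test image is just a stand-in for a real scan):

    import numpy as np
    from skimage.data import camera
    from skimage.transform import radon, iradon

    image = camera().astype(float)

    # Forward Radon transform: line integrals of the image over many angles,
    # analogous to the arm sweeping across the scene at each rotation.
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(image, theta=angles)

    # Filtered back projection inverts it (the same math CT scanners use).
    reconstruction = iradon(sinogram, theta=angles, filter_name='ramp')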


(I'm the author) That's actually how this project started: it was basically a pinhole camera mounted on a pan/tilt mount.


I've never used this in any practical way, but would any of the techniques from Compressed Sensing help at all? http://en.wikipedia.org/wiki/Compressed_sensing


I'm not the author, but it seems like you're certainly right. Here's an example from the wiki page about how a single-pixel camera utilized notions of compressed sensing: http://en.wikipedia.org/wiki/Compressed_sensing#Photography

I guess applying compressed sensing techniques would mean making assumptions about your image which allow you to fill in some of the blanks created by the incomplete angle and position coverage (i.e. the fact that the system is underdetermined). If you could somehow assume a certain redundancy, you could probably make guesses about what goes in the spots that didn't get well-covered.
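
As a toy illustration of that idea (made-up dimensions, using scikit-learn's Lasso for the L1 step; not the article's setup):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 8   # unknowns, measurements, nonzeros

    x = np.zeros(n)        # "redundant" ground truth: mostly zeros
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

    A = rng.normal(size=(m, n)) / np.sqrt(m)   # 80 equations, 256 unknowns
    y = A @ x

    # L1 regularization picks the sparse solution out of the infinitely
    # many that fit the underdetermined system.
    x_hat = Lasso(alpha=1e-3, max_iter=100000).fit(A, y).coef_
    print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # small if sparsity holds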


I doubt that compressed sensing will be very effective for this problem. Many of the guarantees for compressive signal recovery require incoherence between measurements, meaning each measurement is sufficiently different from the others. In this case, each measurement is highly correlated with the others.

The Rice single pixel camera (discussed on the wiki page) effectively multiplies the image by a random mask before it is sensed by the photodiode. This is how they control the incoherence property.
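
A toy illustration of the difference (my own numbers, not the Rice setup): treat each measurement as a row vector over the scene's pixels and compare how similar neighboring rows are.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100  # pixels in a 1-D toy scene

    # Arm-style measurements: a moving occluder blanks a contiguous strip,
    # so adjacent measurement vectors differ in only a couple of entries.
    arm = np.ones((n - 10, n))
    for i in range(n - 10):
        arm[i, i:i + 10] = 0.0

    # Rice-style measurements: independent random 0/1 masks.
    masks = rng.integers(0, 2, size=(n - 10, n)).astype(float)

    def neighbor_similarity(M):
        a, b = M[:-1], M[1:]
        cos = (a * b).sum(axis=1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
        return cos.mean()

    print(neighbor_similarity(arm))    # ~0.99: rows nearly identical
    print(neighbor_similarity(masks))  # ~0.5 for 0/1 masks (near 0 for +/-1)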


You might be technically correct in the limited sense that this isn't 'compressive' sensing, but it's an absolutely classical example of sparse reconstruction. You can resample that Radon-transformed image randomly and reconstruct it as a sparse Fourier image. There is MATLAB code out there if you Google around a bit.


Compressive sensing using the Radon transform associated with radar has an extensive literature. Not sure about the OP's imager. The optimization techniques are applicable to most sparse signal reconstruction transforms. Formally, you would look to show the Restricted Isometry Property. http://users.ece.gatech.edu/~justin/ECE-8823a-Spring-2011/Re...


It's a dense sensing of the Radon transform, hence it's not compressive. But you can resample that in lots of ways and satisfy the RIP. Reconstruction of images from sparse Radon transforms is one of the early examples that helped shape the field.


Sure; admittedly, confusing "sparse reconstruction" with "compressed sensing" is a pet peeve of mine.


With 50 GHz Wi-Fi arriving, the same technique could be applied to visualize Wi-Fi router signals: an antenna as the sensor, and a metal plate serving as the arm.


Incredibly inventive and well executed. Seeing your writeup go from this wacky hand-waving theory to a built camera to actual results is awfully rewarding.


A good first step to any potentially crazy idea is to simulate it on a computer. If it doesn't work there, either your simulation is crap or the idea is bad. If it does work, either your simulation is crap or the idea might work in the real world.

Cool writing style, too!


I thought it was going to talk about implementing Compressed Sensing (Compressed Imaging). If you haven't heard about it, you should look it up. It's pretty neat.


I have spent the last year making exactly this: a single pixel camera that relies on compressed sensing.

The motivation behind this project is that a professor I am working with had snow melting on his roof. He looked into thermal cameras but found them prohibitively expensive at $4k-$40k. So, we decided to build a cheap camera that only used a single thermal sensor and take the picture as fast as possible.

This camera relies on the fact that large regions of many real-world images are a single color (e.g., the sky is mostly blue). It looks for the areas that contain the most detail, the edges, through something called the wavelet transform.
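
For anyone curious what that sparsity looks like, here's a rough sketch with PyWavelets (illustrative only, not our actual code): keep the largest few percent of wavelet coefficients, zero the rest, and the image survives mostly intact.

    import numpy as np
    import pywt
    from skimage.data import camera

    image = camera().astype(float)

    # 2-D wavelet decomposition: smooth regions give near-zero coefficients,
    # edges give the large ones.
    coeffs = pywt.wavedec2(image, 'db4', level=4)
    arr, slices = pywt.coeffs_to_array(coeffs)

    # Keep only the largest 5% of coefficients by magnitude.
    threshold = np.quantile(np.abs(arr), 0.95)
    arr[np.abs(arr) < threshold] = 0.0

    approx = pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), 'db4')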


This project sounds very interesting! I'd also love to hear more about any details you make available.

Thanks for the reference to the wavelet transform; it is nice to consider a concrete context where this operation can be employed.


The trouble with single pixel thermal imaging is inertia: thermopiles have thermal mass, so you can't take 1000 readings per second; you can't even take 100 readings per second. So while a one pixel camera in the visible spectrum can work thanks to high speed acquisition, a thermal one will be beyond useless.

Not to mention the wave of extremely cheap thermal imagers we got this year: FLIR One, Seek Thermal, all in the ~$200 price range.


I'm inclined to believe the FLIR One does not use "true" infrared. There's a portion of the visible spectrum that is nearly IR, and it's possible to rely on some IR reflectivity phenomena. I looked at some other FLIR thermal cameras and found them to be much more expensive: thousands of dollars.

However, there's no doubt a small mobile platform is very usable. The primary goal behind this work was not to make a product: it was to provide a proof of concept. We were interested in the image processing and showing that some heavy lifting could be done on a small mobile platform (the Raspberry Pi).

You are correct about the acquisition rate. However, the acquisition rate was limited more by the motors that move the sensor into place.


The FLIR One is a proper uncooled microbolometer; FLIR finally decided to innovate, use modern manufacturing processes/methods, and build something cheap to make.

Btw, their $4000 E8 thermal camera has the VERY SAME components inside (from the 320x240 bolometer down to the firmware) as the $950 E4; the only difference is a digitally signed config file (which used to be hackable).

Your platform might have been limited by motor speed, but once you reach 10-20 reads per second you will hit a brick wall of thermopile inertia, and that will be the end.


Did you make a webpage about the project? I'd like to learn more about your progress.


I do not have a webpage; it's not public yet. This work will be published in February, and I'll probably write a corresponding blog[1] post similar to OP's. I'll try to distill the mathematics down while telling a story.

[1]:http://scottsievert.com


Looking forward to it! I'll subscribe on newsblur.


This is a similar project built on top of compressive sensing:

http://dsp.rice.edu/cscamera


We founded InView Technology with a few of the professors heading up Compressive Sensing research at Rice. We make Short Wave Infrared cameras with these techniques utilizing DLP chips. We also sell research platforms.

Our website is a bit thin, but we have experience with imaging at XGA and 1080p. The spatial-temporal dimensions of the data with this sensor are quite different from pinhole FPA imagers. You can trade off spatial resolution and temporal resolution with more freedom than readout ICs allow. You can also skip the image and go straight to information measurements for pattern/event matching.


Somewhat similar to the early work on televisions:

http://en.wikipedia.org/wiki/Mechanical_television

One of our undergrad elec courses had a lab where you built one of these. It's kind of a trip.


From the images of the kit, it doesn't look like there's a lens hood or anything preventing stray light coming in at odd angles from hitting the sensor and washing out the image with the brightest object in the scene.

I wonder if adding a small black tube or similar optics to the sensor would reduce light bleed dramatically.


The sensitivity of the sensor vs angle was pretty close to cos(angle), so it was not very sensitive to light coming in from extreme angles. But the whole process relies on light from everywhere in the scene hitting the sensor all at once, except for the light being blocked by the arm.
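
Roughly, one sensor sample looks like this (a simplified sketch with made-up geometry, not the actual code):

    import numpy as np

    def reading(scene, arm_mask, incidence):
        """One sample: the whole scene contributes at once, weighted by
        ~cos(incidence angle), minus whatever the arm occludes."""
        return (scene * (1.0 - arm_mask) * np.cos(incidence)).sum()

    # Toy usage: 64x64 scene, arm blocking one column.
    scene = np.random.rand(64, 64)
    arm_mask = np.zeros_like(scene)
    arm_mask[:, 20] = 1.0                      # 1 where the arm blocks light
    gx, gy = np.meshgrid(np.linspace(-0.5, 0.5, 64),
                         np.linspace(-0.5, 0.5, 64))
    incidence = np.arctan(np.hypot(gx, gy))    # per-pixel angle, radians
    print(reading(scene, arm_mask, incidence))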


Recognized the results of FBP (filtered back projection) from working on PET scanners and got far too excited. Very cool stuff!

I saw in the article that you used the mean to correct for brightness and clouds. Was scaling to correct for overlap not needed because you used the full sweep across the frame?


That looks like a lot of fun; I'm going to have to try it. I was thinking he would do it the other way (move a hole around), but I can see this has its benefits as well.


This reminds me of a type of eye that occurs in plankton; it consists of a row of photosensitive cells that moves left and right rapidly.


Here's an entirely different approach:

Take an old LCD screen (small). Take the back out of it. Put a (grayscale) single-pixel sensor behind it.

Then repeatedly display random hash on the screen and take a measurement of the light intensity. My intuitive guess would be that you'd get the most info with half the pixels on fully and half of them off fully, randomly chosen, but I can't say why.

You should be able to get much the same result as this, without any moving parts. Although you'd have to use an entirely different transformation to get there: it'd reduce to a (massively) underdetermined system of linear equations to solve, with noise added in to boot. Have fun with that.
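
A sketch of what that system would look like (toy sizes, hypothetical sparse scene; real images would need sparsity in a wavelet or gradient basis instead):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    h = w = 16
    n = h * w          # one unknown per LCD pixel
    m = n // 2         # deliberately too few measurements

    # Each row is one random half-on/half-off pattern shown on the screen.
    A = (rng.random((m, n)) < 0.5).astype(float)

    scene = np.zeros(n)                          # toy scene: a few bright spots
    scene[rng.choice(n, 10, replace=False)] = 1.0

    y = A @ scene + rng.normal(0.0, 0.01, m)     # sensor readings, plus noise

    # Massively underdetermined, so regularize; L1 recovers sparse scenes.
    x_hat = Lasso(alpha=1e-3, max_iter=100000).fit(A, y).coef_.reshape(h, w)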


I'm curious about a couple of details of the delivered picture versus the reference: how long did the single pixel camera take to expose, and how much vibration was introduced by its arm's motion?

I suspect some of the sharpness difference between the two is related to different exposure times and vibration while imaging, so it's probably doing even better at its job than it seems.


It'd be interesting to see what 12 or 24 frames of an animation look like, something like this: http://upload.wikimedia.org/wikipedia/commons/4/4a/Muybridge...


http://dsp.rice.edu/research/compressive-sensing/single-pixe... From 2007. Details on a different and more refined approach that I thought might be worth comparing.


Excellent job; I like the thought process and the outcome. I think it's worth noting that the results you're getting are great on their own merits. As with a hand-made pinhole or slit camera, the break from "reality" could be seen as an artifact of the method.


Very nice!

Cameras based on a single pixel, or a single-row array of pixels, have uses at wavelengths where two-dimensional pixel arrays are impractical.


Have any specific applications off the top of your head? Sounds interesting!


Take snow melting on your roof: it means a heat leak and wasted cash. You want to buy a thermal camera, but they're expensive at $4k to $40k. A single pixel camera would be fine for you; you wouldn't mind waiting a couple of minutes for it to move the sensor around.


One application is where the "pixel" is a complete IR spectrometer that can perform a chemical analysis.


THz is the big one right now.


Very cool.




