That talks about putting a finger right on the camera. You don't actually have to go anywhere near the camera to measure heart rate. See the iPhone app Cardiio [1] which gets your heart rate by looking at your face.
The technique behind this, Eulerian Video Magnification, is pretty cool. There is a description and a very nifty video at [2].
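To get a feel for the core idea, here is a rough per-pixel sketch (assuming NumPy; the real method does this temporal band-passing inside a spatial Laplacian/Gaussian pyramid rather than on raw pixels, and the function name `magnify` and all parameter values are mine):

```python
import numpy as np

def magnify(trace, fs, lo_hz, hi_hz, alpha):
    """Band-pass one pixel's intensity over time and add the
    amplified band back -- the essence of Eulerian magnification,
    minus the spatial pyramid the real method uses."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    # Zero everything outside the band of interest, then invert.
    filtered = np.fft.irfft(np.where(band, spec, 0), n=len(trace))
    return trace + alpha * filtered

# A faint 60-bpm (1 Hz) flicker on a bright pixel, sampled at 30 fps...
t = np.arange(180) / 30.0
trace = 100 + 0.1 * np.sin(2 * np.pi * 1.0 * t)
# ...becomes an obvious ~1.1-amplitude oscillation after magnification.
out = magnify(trace, fs=30.0, lo_hz=0.8, hi_hz=2.0, alpha=10.0)
```

The invisible 0.1-amplitude ripple comes out roughly eleven times larger, which is why the pulse becomes visible in the magnified video.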
Great article. I just cannot contain my need to nitpick (I have a background in ultrasound signal processing).
It's the Hann window, not the Hanning window. There is a separate window function called the Hamming window, which is why some mistakenly add the 'ing' to Hann as well.
Second, "In summary, with a 6-second window, we get a tolerable 6-second startup delay that gives a fair time accuracy of 6 seconds and a fair frequency accuracy of 5 bpm (half the FFT resolution)" is a bit incorrect. You actually get far more accuracy when determining the frequency of a single signal. What the window length determines is the limit at which you can separate two signals: with that window, two signals 5 bpm apart or closer would blend together and look like a single signal. Two people with similar heart rates in the same video stream would thus be inseparable at that window length. But you can determine the heart rate of a single individual with far more accuracy.
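To make the distinction concrete, here is a hedged NumPy sketch (the 30 fps frame rate and the heart rates are invented for illustration): with a 6-second window the FFT bin width is 10 bpm, yet parabolic interpolation around the peak pins a lone 72-bpm tone down much more precisely, while 70- and 75-bpm tones merge into one peak.

```python
import numpy as np

fs = 30.0                 # assumed camera frame rate (Hz)
n = int(fs * 6.0)         # 6-second window -> bin width fs/n = 10 bpm
t = np.arange(n) / fs
win = np.hanning(n)       # NumPy's (misleadingly named) Hann window

def peak_bpm(x):
    """Dominant frequency in bpm, refined by parabolic interpolation
    of the log-magnitude spectrum around the strongest bin."""
    spec = np.abs(np.fft.rfft(x * win))
    k = np.argmax(spec[1:]) + 1                 # skip DC
    a, b, c = np.log(spec[k - 1:k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)     # sub-bin offset
    return (k + delta) * fs / n * 60.0

# One 72-bpm signal: estimated well within the 10-bpm bin width.
single = np.sin(2 * np.pi * (72 / 60.0) * t)
print(peak_bpm(single))      # close to 72

# Two signals 5 bpm apart (70 and 75): one merged peak, not two.
pair = (np.sin(2 * np.pi * (70 / 60.0) * t)
        + np.sin(2 * np.pi * (75 / 60.0) * t))
print(peak_bpm(pair))        # a single peak somewhere in between
```

So the 5 bpm figure is a resolution (separation) limit, not an estimation accuracy limit for a single subject.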
I've been toying with a similar setup. Most of the time, you can actually get a really good estimate of the heart rate by doing simple-minded peak detection on the raw averaged brightness signal (e.g. using the algorithm here: http://www.billauer.co.il/peakdet.html). However, the iPhone camera can't be prevented from periodically adjusting its exposure parameters (as far as I've been able to tell), and I haven't been able to stop that from screwing things up once or twice a minute. In other projects I've had some success using the FFT and windowing functions of Apple's Accelerate.framework, so I may now see if the Matlab code shown here can be translated to run in real-time on the phone.
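For reference, here's a hedged pure-Python rendering of the delta-based detector linked above (following Billauer's peakdet logic; the synthetic signal and 30 fps frame rate below are made up). A sample only counts as a peak once the signal has retreated from it by at least `delta`, which keeps small ripples from registering:

```python
import math

def peakdet(v, delta):
    """Return (maxima, minima) as lists of (index, value) pairs.
    A maximum is confirmed only after the signal drops by `delta`
    below it, and a minimum only after it rises by `delta` above."""
    maxima, minima = [], []
    mn, mx = float('inf'), float('-inf')
    mn_pos = mx_pos = None
    looking_for_max = True
    for i, x in enumerate(v):
        if x > mx:
            mx, mx_pos = x, i
        if x < mn:
            mn, mn_pos = x, i
        if looking_for_max and x < mx - delta:
            maxima.append((mx_pos, mx))
            mn, mn_pos = x, i
            looking_for_max = False
        elif not looking_for_max and x > mn + delta:
            minima.append((mn_pos, mn))
            mx, mx_pos = x, i
            looking_for_max = True
    return maxima, minima

# Synthetic 72-bpm (1.2 Hz) "brightness" signal at 30 fps:
fs = 30.0
signal = [math.sin(2 * math.pi * 1.2 * i / fs) for i in range(180)]
peaks, _ = peakdet(signal, 0.5)
gaps = [b[0] - a[0] for a, b in zip(peaks, peaks[1:])]
bpm = 60.0 * fs / (sum(gaps) / len(gaps))   # mean peak spacing -> rate
print(round(bpm))                            # 72
```

On a clean signal the mean peak spacing recovers the rate exactly; the exposure jumps described above would show up as spurious or missed peaks in `peaks`.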
You should be able to lock the exposure on the iPhone camera by setting the capture device's exposure mode to AVCaptureExposureModeLocked (you can do the same for the white balance as well). But I might be wrong! It's been a while since I did camera work with the SDK.
I've seen and used apps using this tech (or similar). Ideally, this could be used to evaluate the need for emergency services automatically. For instance, if someone collapsed while holding the phone, the sensors could alert EMS. I don't know, maybe wishful thinking.
Basically a smartphone-sized polygraph, or a software version of Dr. Lightman ("Lie to Me"): heart rate, pupil dilation/contraction, pattern recognition on facial muscle twitching (lips, eyelids, ...). Could something like this speed up, for example, TSA lines at airports? Automated "checkout" lines like in grocery stores: face the camera, put your palm on the touch pad, "Are you a terrorist with a bomb? Choose Yes or No" (reminds me of the gun vending machine scene from South Park, if I remember correctly :)
I was just being sarcastic. Crossing immigration while nervous today already triggers interesting reactions from border officers. If/when they start using this kind of tech, I doubt they will be more understanding and patient.
Sorry about that. It seems the post ended up on some social networks today and I'm getting many more concurrent visits than usual. My hosting provider thought I was being hacked and blocked access for a few minutes, but it's up again. I just talked to them and they increased the connection limit, so things should improve shortly. Sorry again; I hope you enjoy the article, and subscribe to the RSS feed if you're interested (http://ignaciomellado.es/blog/feed). You can also reach me on Twitter @uavster.
[1] http://www.cardiio.com
[2] http://people.csail.mit.edu/mrub/vidmag/