Show HN: PiShot – capture high-speed strobe images using a Raspberry Pi camera (github.com/revalo)
124 points by revalo on May 13, 2019 | 30 comments



Fantastic! It fascinates me how you reverse engineered a cheap camera to make it fit for purpose and shared your work. It could have applications in low-budget computer vision projects. I am researching global shutter cameras now, thinking Sony IMX264; quite expensive in an enclosure from Flir or Lucid. Do you know what sort of FPS you could record? Dynamic range? QE, etc.?


Thank you! That's super cool :) The fastest strobe we used was a 500 ns strobe, which is a 1/(2e6)th-of-a-second exposure. The Pi v1 cameras use the OmniVision OV5647, with a dynamic range of 67 dB @ 8x gain.

I'm pretty curious what you've discovered or worked on so far. Do you have any interesting CV ideas you'd like to see?


Do you know how many bits of depth the raw frames from the Pi Camera V1 and V2 have?

Do we know the "gamma" curves, i.e. the numerical intensity value as a function of the light collected (proportional to the number of photons)?

67 dB = 6.7 bels => brightest-over-darkest intensity ratio = 10^6.7 ≈ 5,000,000.

That's ln(10^6.7)/ln(2) ≈ 22.26 bits, which is unheard of for the camera sensors I have looked up, so I assume the reported values are "gamma"-compressed before digitization.
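
For concreteness, that arithmetic as a quick Python sketch (it assumes the 10*log10 power-ratio reading of dB; under the 20*log10 amplitude convention that sensor datasheets often use, 67 dB would be only ~11.1 bits):

    import math

    db = 67.0                   # reported dynamic range
    ratio = 10 ** (db / 10)     # 10*log10 power-ratio convention
    bits = math.log2(ratio)     # equivalent linear bit depth

    print(f"intensity ratio: {ratio:,.0f}")  # ~5,011,872
    print(f"linear bits:     {bits:.2f}")    # ~22.26

    # Under the 20*log10 convention instead: 10**(67/20) ~= 2239:1, ~11.1 bits.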

Would you have any interest in collaborating on characterizing the exact compression function for the V1 and V2? The relevant literature seems to be Steve Mann's comparametric equations: https://en.wikipedia.org/wiki/Comparametric_equation

(I am interested in high dynamic range and high monochromatic or color bit depth for an experiment, which will progress much faster if I can start out with a higher-dynamic-range, higher-bit-depth sensor. I will need to oversample to observe a phenomenon, and every bit of increased depth one sensor has over another means the experiments can run four times as fast...)
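
For reference, a minimal sketch of the comparagram idea with synthetic data (everything here is hypothetical, just to show the shape of the measurement; it is not code from the PiShot repo):

    import numpy as np

    def comparagram(img_a, img_b, levels=256):
        """Joint histogram of two images of the same static scene taken at
        exposures t and k*t (Mann's comparagram). For a linear sensor the
        ridge follows b = k*a; deviations from that reveal the response f."""
        h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=levels,
                                 range=[[0, levels], [0, levels]])
        return h

    # Synthetic demo: a gamma-compressed sensor viewed at exposures t and 2t.
    rng = np.random.default_rng(0)
    q = rng.uniform(0.0, 0.5, (480, 640))              # latent exposure
    f = lambda x: 255 * np.clip(x, 0, 1) ** (1 / 2.2)  # hypothetical response
    h = comparagram(f(q), f(2 * q))
    # For this pure power law the ridge is still a line, but with slope
    # 2**(1/2.2) ~= 1.37 instead of 2, which already pins down gamma; a
    # real response curve with a toe or shoulder bends away from a line.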


I am very new to the space.

I'd probably like to have a go at replicating some of the Blue River and Bilberry ideas, but on a budget: sensing weeds or crop issues on the move, which requires a global shutter. Uniform light throughout the day/night is another issue I am thinking through.

I'm working through fastai when I get a chance, but my biggest issue is finding domain-specific datasets.


Is it actually taking 2 million pictures per second, or does the title just mean that the global exposure takes 1/2,000,000 of a second?

Really cool results, although that bullet catcher setup is slightly horrifying.


Yep, that's the exposure time.

LOL, yeah, it looks pretty sketchy. It is a proper bullet trap though, in fact the very same one that Harold Edgerton used for his famous bullet-through-apple image :)


I wasn't aware there are $5 Pi cameras out there. The repo doesn't say which cameras are used. Can someone point me to them?


We bought a bunch of OV5647 camera modules from AliExpress. A quick search on the site yields a couple of listings at about $3.26 per camera; here is one of them: https://bit.ly/2W0Xui6


This is a great project but it's not clear to me how the HN title ("2M FPS with $5 Raspberry Pi Cameras") is related.


To take a picture of a bullet, we take an exposure of 1/2,000,000th of a second. This is super challenging with cheap $5 cameras like the Raspberry Pi camera because of rolling shutter, timing, syncing, etc., and is traditionally done with more expensive DSLRs.

But yeah, I think you're right that the title doesn't quite fit. Suggestions for a better one?


We replaced the title above with a representative description from the article.


Thanks. I wish it happened more often, not just in this particularly outrageous case.


We do it more than most people realize. If you notice a case that needs attention, you're always welcome to let us know. hn@ycombinator.com is the best way because then we're sure to see it. People email us about these all the time.


Thank you so much!


How about: "Operate Raspberry Pi v1 camera in global exposure mode (instead of rolling shutter)"

It's a great project! We used to do this with modified Canon 5Ds at a company I worked for, and it was very expensive to set up.


Lots of words, but it doesn't zero in on exactly how it captures 2 million frames per second.


We use a strobe light instead of a mechanical shutter with a flash duration of 500ns.
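
To put 500 ns in perspective, a rough back-of-the-envelope (the bullet speeds here are assumed typical values, not measurements from the project):

    flash = 500e-9               # strobe duration, seconds
    for v in (300.0, 1000.0):    # assumed bullet speeds, m/s
        blur_mm = v * flash * 1e3
        print(f"{v:6.0f} m/s -> {blur_mm:.2f} mm travelled during the flash")
    # 0.15-0.50 mm of motion: small enough that the bullet looks frozen.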


How does it work from a code perspective inside the RPi?


We reverse engineered the camera module's I2C protocol and figured out how to ask it to energize all pixel lines at once instead of going line by line.

The heart of it is just sending a few bytes down the I2C bus, but reverse engineering the register set was the challenging part as the datasheets are given only to OEMs.
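
To give a flavor of the mechanism (the register address below is made up, not a real OV5647 register; the actual sequences live in the repo), a write is just three bytes on the bus, since OmniVision sensors use 16-bit register addresses. In Python with smbus2, something like:

    from smbus2 import SMBus, i2c_msg

    SENSOR_ADDR = 0x36  # 7-bit I2C address commonly reported for the OV5647

    def write_reg(bus, reg, value):
        """Write one byte to a 16-bit register address on the sensor."""
        bus.i2c_rdwr(i2c_msg.write(SENSOR_ADDR,
                                   [reg >> 8, reg & 0xFF, value]))

    # Hypothetical example -- 0x1234 is NOT an actual OV5647 register.
    with SMBus(0) as bus:  # the camera sensor sits on its own I2C bus
        write_reg(bus, 0x1234, 0x01)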


Very neat! Was this also tested to see if it works on the Camera Module V2, by any chance?


We spent the first month of the project on the v2 modules. These are based on the Sony IMX219. Unlike the v1 cameras, the datasheets are publicly available and even then, after much hacking around we couldn't get all the lines to expose at the same time :'/


I would also like to know this.


Is it actually capturing 2 million frames per second?

What is the resolution/byte size of each frame versus the bandwidth of the I2C bus? Is there room on the bus for all that data?


It is not. From what I gather, it's capturing a single exposure of a scene lit via a strobe to give a virtual 1/2,000,000 s shutter speed.

Still super cool. The meat of what's novel here isn't the image assembly, though, but rather the reverse engineering of the Pi cams into a global shutter mode. I'm definitely interested in the tradeoffs around global vs. rolling shutter: are there image artifacts or bandwidth issues, or what? Neat work, gang!


How often can you get a readout? If it can go as often as 24fps, I can make my digital Super 8 camera with global shutter. :)


This is seriously cool. What strobe are you using?


As cool as this project is, though, I don't see (and I reckon this could be my fault) how this is different from "let's take a bunch of high-speed synchronized pictures with a strobe".

"bullet time" was, at least, animated. I also understand that it is very possible that you can do that (just that are not showing it).


My bad, maybe I should take more writing classes.

I guess what's different here is we're not using a $300 DSLR. We're using a $5 raspberry pi camera.

If we did "let's take a bunch of high-speed synchronized pictures with a strobe" on the cheap cameras, we'd see a single row of pixels' worth of image, if we're lucky, because of rolling shutter (rough numbers below).

What we've done here is hack together some software to get a global shutter on these cheap cameras. Now that we can take pretty high-speed pictures for $5, we scaled it up to 16 cameras and took 16 angles of a bullet going through an apple, all at once. The "bullet time" examples in the repo are not computer graphics; they are 16 individual images played one after another.
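
Rough numbers for the rolling shutter point above (the line time is an assumed order of magnitude, not a measured value):

    line_time = 20e-6  # assumed row-to-row readout offset, seconds
    exposure = line_time           # per-row exposure, set as short as possible
    flash = 500e-9                 # strobe duration

    rows_open = exposure / line_time   # rows integrating at any one instant
    print(f"rows exposing at any instant: ~{rows_open:.0f}")
    print(f"flash length in line times:   {flash / line_time:.3f}")
    # The flash is far shorter than one line time, so it lands at a single
    # instant of the rolling readout; only the row(s) open at that instant
    # record any light -- "a single row of pixels, if we're lucky".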


This project is great! It's really exciting work, and I love that you open sourced it!

I think you're getting so many confused people because of the domain expertise you picked up playing with strobes for the class.

Most people are assuming you're synchronizing a bunch of cameras at very low latency and high speeds with a long lived light, but it appears you're using a dark room with a strong, short strobe as a shutter instead.

The real hack (if I understand correctly) is using your global shutter to capture a consistent point in time.

Maybe adding a non-global-shutter image for comparison would help convey the value of that hack? Maybe a video too? In infrared?

These two changes could really help you quickly communicate the work you're doing and get more people interested.

Either way, great work, excited to see what else you make!


Hi, sorry, I didn't want to come across as obnoxious; I was on my way out and wrote in a hurry. I ended up sounding like an idiot anyway.

I think that your project is an incredible one. Any project that goes "We did this with $5 instead of $5000" is totally worth it, from whatever point of view you want to see it.

There's an interesting and IMO relevant trend I see in technology these days, where coupling open technologies with smart people like you ends up making what's considered "extreme high tech" available to "normal" people.

I just wanted to point out that maybe you're not showing off the full capabilities of what you did.

I take it that if the strobe light is fast enough, you could get the different cameras to fetch frames from different flashes, thus creating a "real" bullet time sequence?



