A Guide to Recording 660FPS Video on a $6 Raspberry Pi Camera (robertelder.org)
445 points by robertelder on Aug 6, 2019 | 100 comments



Hi Everyone,

I do have it already linked in the article, but I should point out that the work I did is based upon the work of (at least) several other key individuals:

A lot of the low-level sensor interaction code is found in:

https://github.com/Hermann-SW/fork-raspiraw

Also, check out his web site with lots of other high speed camera hacking stuff:

https://stamm-wilbrandt.de/en/Raspberry_camera.html

The original fork of raspiraw is (I think) found here:

https://github.com/6by9/raspiraw

And the code for converting RAW images into standard formats uses dcraw:

https://en.wikipedia.org/wiki/Dcraw

I didn't personally write much code to put this together. I would say that my contribution here was mainly to pull together some impressive work done by others that wasn't getting much attention and market it well. Hopefully some of you get ideas and push this even further.


Thanks for posting your work! I’d like to see if I can get much more (if any) out of a Pi 4 which has more memory bandwidth and up to 2-4x more RAM than the 3 B+ it seems you tested with! Have you tried a 4 yet?


I haven't yet. It was actually quite a bit of work to get these videos recorded and document/test the process. I suspect there is tons of low-hanging fruit left in terms of improvements that could be made. Hopefully the steps I documented will be reproducible enough that others can make improvements and create even more impressive demos.


Thanks Robert for the YouTube videos and the blog post; hopefully many more people will start playing with high-framerate video capture because of them.

But 660fps is not the end for the v1 camera. I added a section "kFPS videos from kEPS multiple exposure frames" to my GitHub "v1 camera global external shutter" repo: https://github.com/Hermann-SW/Raspberry_v1_camera_global_ext...

In that section a 3000fps and a 9000fps animation are shown (and the tools needed to create them)!

The repo introduction now contains a comparison of a $20 DIY high-speed flash with a commercially available one (and with that very costly one you cannot produce kEPS multiple exposures or kFPS-framerate videos): https://github.com/Hermann-SW/Raspberry_v1_camera_global_ext...


Thank you for putting together such an inspiring and interesting project!


This is amazing. These framerates are 1/10th of what a cursory google search says I can get out of a top of the line high-speed camera, but for probably 1/100th the price. You could make so many awesome instructional videos with a tool like this.


Which brings me to this Gedankenexperiment: if you buy/build 10 of the Raspberry Pi based high-speed cameras, run them in parallel, and can make sure each camera sees the same scene (we don't consider details here for the time being), is it possible to merge the videos in order to reach the frame rate of a "top of the line high-speed camera"? I could imagine two scenarios:

  1. Start recording with a time delay. Probably requires
     excellent clock synchronisation, a real-time kernel, and
     reliable/comparable hardware (gate timings differ at
     small timescales). If this works, this would provide a
     high-quality, equidistant framerate.

  or

  2. Start recording randomly, synchronize by time or
     with a video event, and merge the frames (a rough
     merge sketch follows below). This will result in a
     stochastic distribution of frame intervals. The
     usefulness of such a movie depends on what it is
     supposed to be used for (thinking of scientific usage).
     It will probably not give nice slow-mo movies. But
     here comes the catch: with interpolation, the shortest
     reachable frame interval dictates the quality of this
     "superposition camera". How many cheap cameras do we
     need to match, on average, the top-of-the-line high-speed
     camera?
I leave the questions open for friends of Gedankenexperiments ;-)
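
A minimal sketch of scenario 2, purely for illustration: assume each Pi dumps raw frames plus a simple two-column CSV of frame index and timestamp in microseconds (a simplification, not necessarily raspiraw's exact tstamps.csv layout); the merge across cameras is then just a sort by timestamp:

  # merge_frames.py -- hedged sketch: interleave frames from N cameras by timestamp.
  # Assumes each camera directory holds frames out.0001.raw, out.0002.raw, ... plus a
  # CSV "tstamps.csv" with lines "frame_index,timestamp_us" (a simplification, not
  # necessarily raspiraw's exact tstamps.csv layout).
  import csv, os, sys

  def load_camera(cam_dir):
      frames = []
      with open(os.path.join(cam_dir, "tstamps.csv")) as f:
          for idx, ts in csv.reader(f):
              path = os.path.join(cam_dir, "out.%04d.raw" % int(idx))
              frames.append((int(ts), path))
      return frames

  def merge(cam_dirs):
      # Scenario 2: no hardware phase control, just sort everything by capture time.
      all_frames = []
      for d in cam_dirs:
          all_frames.extend(load_camera(d))
      all_frames.sort()          # stochastic, non-equidistant frame spacing
      return all_frames

  if __name__ == "__main__":
      for ts, path in merge(sys.argv[1:]):
          print(ts, path)

Scenario 1 would instead need the cameras' clocks (or start times) phase-shifted in hardware, so that N cameras at 660fps interleave to roughly N x 660fps with equidistant frames.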


The raspberry pi camera is rolling shutter, which adds some complications. To properly simulate a high framerate camera, you would want to build each frame from sections of several images that are being simultaneously exposed. Doing so without leaving artifacts at the borders would require some sort of really fancy interpolation, but might be possible.


That's a good question. I'm wondering, though, if the shutter speed on commercial high-speed cameras is faster, but I could be wrong.

"A Phantom camera he uses goes up to a maximum of 1000 fps in 4K — an exposure time of 1/2000 sec with a 180° shutter speed — and the image starts looking pretty dark on the screen."

I'm just trying to find what shutter speed you can get with the Pi camera.

Edit: Just having a little play:

./raspiraw -eus 500 -md 7 -t 1000 -ts tstamps.csv -hd0 hd0.32k -h 64 --vinc 1F --fps 660 -r "380A,0040;3802,78;3806,0603" -sr 1 -o /dev/shm/out.%04d.raw 2>/dev/null

500 microsecond exposure (1/2000 seconds) seems to run.


> I'm just trying to find what shutter speed you can get with the Pi camera.

You can reduce the shutter time nearly arbitrarily, as long as you increase light intensity. I placed a fast-rotating propeller just above a 5000lm LED and did two 9us flashes. See this image for how bright the two blades of the propeller look at both captured locations (the rest is dark; the image was taken inside a closed cardboard box, and without the flash everything was just black): https://raw.githubusercontent.com/Hermann-SW/Raspberry_v1_ca...


Pi cameras are clocked externally (EMI made them include a 25MHz clock generator on the camera module itself), so in theory you could connect them all to a single clock distribution source with appropriately shifted phases.


How do you disable the onboard clock, or is it automatic?


You'd likely have to de-solder the one on the board and maybe do a few other small modifications to get it to work properly. I imagine you'd want to use a completely different kind of source and then use some op-amps to both shift the waveform and strengthen the signal, so that the load of each camera doesn't cause issues.


No analog at those frequencies; you would want a dedicated multi-phase clock buffer chip, something like a couple of CY7B9945Vs.


I think you could probably just parallel connect the quartz clock terminals on the cameras.

As long as your leads didn't have too much capacitance, it ought to just work.

You can do the phase shifting in software by (for example) restarting the camera hundreds of times until, by chance, it starts a frame with the phase offset you want, and then just running on from there.


Won't two clocks interfere with each other?


As long as they're tuned close-ish to each other, they will synchronize and oscillate in lockstep.

I'm talking the actual quartz crystal here, not the output of some digital clocking chip (which you're right, wouldn't work).


It's already external, on the back of each camera.


But it's only selectively enabled, no?


I like this. It reminds me of an idea I had about making an ultra-hd projector by running consumer projectors in a grid with careful alignment and cooling. Your idea is similar, but over time instead of space. Both are just dimensions so I don't see why it couldn't work, so long as you had the timing circuit.


This isn't just a thought experiment; this is how some of the most influential slow motion to date has been shot, from the famous bullet dodge in the Matrix to the first 1-billion-fps camera. (Okay, yes, technically the 1-billion one is the same camera run many times, but it's the same synchronization problem.)


I think that would be possible; it's just a problem of adjusting all the cameras to get overlapping FOVs.

There was an MIT student project group that took global external shutter frames with 16 v1 cameras(!) and created an animated .gif from them! "Ultra high-speed imaging, capturing matrix style bullet time": https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=2404...


Have some fun: http://graphics.stanford.edu/projects/array/

Reproducing something like this using cheap sensors would require some hardware acumen, but isn't unfeasible. Most industrial cameras, even cheap ones, have external triggers/clocks with nanosecond latency.


A group of MIT students did synchronize 16 Pis with 16 cameras just over GPIOs to shoot at exactly the same point in time. And they built in a microsecond delay for adjusting exactly when the bullet has passed through the apple and leaves: https://news.ycombinator.com/item?id=20635132


Edgerton would be proud


True, and he started with 1/100,000-second exposure time hummingbird images 83 years ago!

The MIT group used a flash with 0.5us exposure time. I asked for details but have no answer yet; I assume that it is costly (the Vela One costs £940) and one-shot, with no fast repeating flashes (the Vela One has only a 50Hz maximal continuous strobe frequency).

I used cheap $2 5000lm LEDs with a $7 LED driver to convert mains to 38V/1.5A DC. I was able to do global external shutter captures with that LED at 4us shutter time; I did not try below that yet. Most interestingly, these LEDs allowed 8.33us flashes at 9kHz, and I used that to capture a 109m/s flying airgun pellet 10 times in a single multiple exposure frame (9000eps, exposures per second). Pi PWM should easily allow 90,000eps frames for capturing 900m/s .50 BMG bullets multiple times per frame in flight ...

You can do the same with a $6 Raspberry v1 camera clone (not with the v2); the bill of materials as well as the global external shutter setup, software, and example captured frames can be found here:

https://github.com/Hermann-SW/Raspberry_v1_camera_global_ext...


You can get a Raspberry v1 camera clone now for less than $4(!) with free shipping on AliExpress: https://www.aliexpress.com/wholesale?SearchText=raspberry+5M...

And unlike the v2 camera, the cheap v1 camera allows (besides high-framerate capturing) for global external shutter capturing as well(!). I captured an in-flight 109m/s airgun pellet 10 times in a single multiple exposure frame! https://stamm-wilbrandt.de/en/Raspberry_camera.html#global_e...


The Chronos 1.4 I have will do 640x96 @ 21,649fps for $3k. So that's 20 times faster for 50 times the price; pretty close to your estimate. The Chronos does have a few other amenities that make it worth the price.


Is it made in the US? Is it export-controlled? Here in Argentina getting export-controlled gear is a pain.


They are made in and shipped from Canada. I doubt they are export-controlled, but I would email them. They were quite responsive to most of my inquiries. Just don't ask when they might implement features. I'm still waiting on some features that are listed on the 2-year-old spec sheet as "*features (that) are fully supported in the camera's hardware, but are not yet supported in software. They will be added in a free software update after the camera's initial release."


That's 500 times the price (unless you take into account the Raspberry Pi itself).


... which you should


You can shoot 960fps at 720p/1080p with some smartphones these days. If you decide to go that route, make sure to get one that doesn't interpolate the frames, but does true 960fps.


Could you suggest such a phone? That sounds wicked awesome to experiment with.


The Galaxy S10 series (including the S10e).


> processor registers on the image sensor are set so that raw data is sent continuously (to be processed later)

Very slick.

Makes me wonder what crazy high-speed imaging hacks might be possible with a top-of-the-line sensor used in iPhone/Pixel phones.


The Samsung Galaxy S9 & S10 can record short bursts of 960fps video at 720p out of the box. I'd imagine a hack could extract similar numbers from the Pixel and iPhone.


The demo videos don't have a good black level.

Is this processing method perhaps not using a correct dark image, or not correctly subtracting off the shaded calibration rows on the sensor?

If that's the case, it's probably screwing with the gamma conversion, making it look even worse.


Nope, I didn't do any black level/gamma adjustments. I just used the first values that would produce what looked like a fairly high-quality result with ffmpeg and dcraw. There is probably tons of low-hanging fruit to improve these results from someone who really knows what they're doing in terms of image processing.


Wow. Never thought about this approach to leveraging RAM for cameras. Awesome guide!

Can anybody chime in regarding the resolution limit (640x64) -- If this is for memory reasons, could we pick alternate values such as 200x200 as long as the total pixel count is the same?


I think you're stuck with the oddball aspect ratio. Part of the CMOS readout process is done an entire line at a time, and there just isn't a way to increase the speed this much without dropping most of the lines. Actual high-speed cameras have similar limitations at the higher speeds.


The deal is "halve vertical resolution and get roughly double framerate"; see here for more information: https://news.ycombinator.com/item?id=20633504


> Never thought about this approach to leveraging RAM for cameras.

This is normal for high-refresh-rate cameras. They store straight to RAM, then dump to disk (SSD) afterwards. Especially the higher-end cameras; they literally can't write the data out fast enough for it to go anywhere else.


>Can anybody chime in regarding the resolution limit (640x64)

Light exposure on the sensor: the faster the frame rate, the darker the image, as it's a fixed aperture. Photography buffs could explain it better, but my take is that the faster the exposure, the bigger the aperture needed to let in enough light to make the photo look well lit. I don't know if parts of the sensor also work faster due to its circuit design, but I would imagine this is also a factor, just like it's possible to read more data from the outside of a spinning disk than from the tracks close to the centre.


I read through, and I didn't find a reason that the resolution is 640x64@660 other than that is what was input into the raspiraw program.

Could this record at a higher resolution?


Many image sensors have a 'windowed readout' function that lets you throw away lots of pixels' data, in exchange for a higher framerate.

So for example, a sensor that does 1920px × 1080px × 30fps has the ability to output 62,208,000 px/second - and with the same ADC and MIPI bus, halving the number of pixels can double the number of frames per second.

You'll note this article's demo, 640px × 64px × 660fps = 27,033,600 px/second. So you might plausibly get it a bit larger - maybe 640×120 - but you're not going to get HD resolution at this framerate using this hardware.

(Obviously, there's also questions like exposure time and light levels so it's not quite as simple as I just made it sound)
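
Rough arithmetic for that pixel budget, as a sketch (the budget figure is just 640x64x660 inferred from the demo mode, not a datasheet number):

  # pixel_budget.py -- rough readout-budget arithmetic, not datasheet values.
  BUDGET_PX_PER_S = 640 * 64 * 660      # ~27.0 Mpx/s, inferred from the demo mode

  def max_fps(width, height, budget=BUDGET_PX_PER_S):
      return budget / (width * height)

  for w, h in [(640, 64), (640, 120), (640, 240), (1280, 720), (1920, 1080)]:
      print("%4dx%-4d -> ~%5.0f fps" % (w, h, max_fps(w, h)))
  # 640x64 -> ~660 fps, 640x120 -> ~352 fps, 1920x1080 -> ~13 fps:
  # HD at 660 fps is far outside this sensor/bus budget.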


I'm mostly confused by the low vertical resolution. Would something like 320x240 work?


This depends on the design of the sensor.

Basic sensors effectively have one analogue-to-digital converter (ADC) per column of pixels. That would mean that if you wanted to reduce the horizontal resolution, you would be disabling some ADCs, which in turn would reduce how many pixels can be converted per second, giving you no benefit.


Would it be possible to buy a second camera and wire up its ADCs to the sensors on the first camera and then read the next set of vertical pixels? Effectively doubling the vertical resolution?


No. The camera image is read row-wise. One could perhaps stitch together the output of multiple cameras and RPis though differing perspectives may make this impractical under some circumstances.


Yes, you can do 320x240 with modified raspiraw parameters. But you will get the same framerate as for 640x240, for v1 as well as for v2 camera.

You can do the opposite and increase horizontal resolution for line-scanner types of application, with still impressive framerates:

"v2 camera can do 3280x32@720fps and 3280x64@500fps with frame skip rate less than 1% (tools attached)."

https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=1091...

You can do quite impressive global external shutter captures (with the v1 camera only!), e.g. capturing a 109m/s airgun pellet in flight 10 times on a single multiple exposure frame: https://github.com/Hermann-SW/Raspberry_v1_camera_global_ext...


I haven't read through everything yet, but in the examples video[1] it's stated that they can only read a portion of the sensor to keep a high frame rate and that the camera returns one row at a time. A square image requires reading more rows and takes more time.

[1] https://youtu.be/-gMy8k4nHtw?t=154


Find a diagram showing the achievable framerates for 640xH frames for the v1 and v2 cameras here: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=2125...

The deal is basically "halve vertical resolution and get roughly double framerate" (up to a cap of around 665fps for the v1 and exactly 1007fps for the v2 camera).

In addition, the "tools/*_s" tools double the vertical FOV while keeping the framerate.


It's probably bottlenecked by how fast the sensor data can be read.


Yeah, they mention it in the video, but don't mention what happens with a higher aspect ratio. I suspect it would cause artifacts, like part of one frame appearing on the previous one, or tearing, or other distortions.

Remember these sensors read a single row at a time. There's a Smarter Every Day video where they show how cellphones get those weird effects when recording plane propellers due to the way they scan images.


Yes, for objects moving with high speed vertically, rolling shutter capturing produces weird frames. But v1 camera can do "global external shutter" as well. See examples on "extreme rolling/global external" shutter for propellers here: https://github.com/Hermann-SW/Raspberry_camera_gallery/blob/...


Limited by the chip that reads the data off the sensor - ASIC or FPGA. Or maybe by the sensor design itself.


If you put the sensor on a really fast stepper motor, you can pivot the sensor a tiny bit in two dimensions to get way more resolution out of it, at a loss of framerate, and you'll need to do some post-processing.

something like this: https://chriseyrewalker.com/the-hi-res-mode-of-the-olympus-o...


Pixel-shifting is prone to some really terrible artifacting, particularly in shooting situations with movement (incl. any in-frame motion due to wind, etc.) Interestingly, whatever computational photography magic Panasonic is doing in the S1/S1R with their "multi-shot hi-res mode" appears to be far more robust in real-world shooting situations, even handheld. For those who care more about this, Lloyd Chambers has written extensive discussion and analysis of these systems at https://diglloyd.com/ (fair warning: access to the detailed analysis is part of the paid site, a few high-level notes are in the public blog).


The scrolljacking on that site makes it unusable


Literally unusable. I found that reader mode works fine.


Resolution: 640x64


I thought that was pretty low - but it's certainly enough for some impressive shots!

[0] Video from article: https://www.youtube.com/watch?v=-gMy8k4nHtw


Then get 10 Pi's to get 640x640?


6400x640?


Nope, that's 100 pies...


You're stacking pis and sensors side-by-side, not multiplying them.


What was the lighting setup like for the demo videos? Did you do any processing to reduce flicker?

I get a lot of flickering from CFL and incandescent bulbs in my 240 FPS videos, though I’ve had better results with some LEDs. Considering how much light I’ve needed for good videos at 240 FPS, I’d imagine the lighting requirements at 660-1000 are fairly immense.


Usually, I would try to wait until evening when the sunlight shines directly into my apartment. Filming under a 60-watt lamp was still quite dark. I didn't do any experiments with post-processing to reduce flicker, but I know some advanced video editors do provide such features.


The easiest way to avoid flicker is to light the scene with cheap 1000lm or 5000lm LEDs with an LED driver (it powers the LED with 38V/1.5A DC for 5000lm). The total cost of LED + driver is less than $9 with free shipping on AliExpress.


Are there any cameras at a higher price with better stats, but still lower than $2k for a used Chronos 1.4GP/s? I wouldn't mind buying a $1k camera to get 720p 1000+ fps.


There are a handful of flagship smartphones that cost under a grand and technically support 960fps at 720p - the Samsung Galaxy S9, Note 9 and S10, along with Sony's Xperia XZ series. They come with a bunch of caveats including very limited recording time of less than half a second. A number of other smartphones advertise 960fps but fake it using software interpolation.

The Chronos 1.4 is very aggressively priced for what it offers, even if it does cost several grand.


1007 fps on the V2 IMX219 camera. I wonder how fast or how large of a frame it could do on a Pi 4 or ODROID (the article mentions the 640x64 frame is limited by memory bandwidth).


You can do 640x64@665fps with the v1 camera, and 640x128@667fps with the v2 camera: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=2125...

In the tools directory you can use the *_s tools to get double the FOV vertically while keeping the framerate. So you can get 640x128_s@665fps/640x256_s@667fps for the v1/v2 camera.

The real limit for the v2 camera is 1007fps, which can be achieved with the 640x75 or 640x150_s tools. You can reduce the vertical resolution below 75, but you will not get more than 1007fps (if you find out how to raise that limit, please shout! The deal for raspiraw high-framerate capturing is basically "halve vertical resolution and get double framerate").


I think the author meant the MIPI CSI-2 2-lane bandwidth limit of 1.1 Gbps, somewhere around 120MB/s.


Everything is doable if you have a detailed description of the sensor. Do you know of any sensor with better resolution at a comparable frame rate and with detailed specs?

Can the RAM limits be improved with simple compression?


He did his testing on a Raspberry Pi 3, which means only 1GB of LPDDR2.

I'd love to see how the RAM limits improve with 4GB of LPDDR4, especially with USB 3.0 and real Ethernet....


The camera interface is not via USB on the Raspberry Pi. There's a lot of redundancy in high-frame-rate footage, so the best way to improve the memory footprint is compression.


At high framerate, there isn't much time to do any type of fancy compression. 660fps means you have 1.5 milliseconds to process each image. The bandwidth will roughly max out between 50 megabytes/s and 432 Mbit/s. LPDDR2 (on the RPi3) can do 800 transfers per second at a maximum transfer size of 4kb, maxing out almost all of your memory bandwidth already.

So you literally do not have the memory bandwidth to do compression while also capturing a good 640x64x32/660 video.

If you have an RPi4, you get LPDDR4; the memory bandwidth now allows for compression, IF the CPU can handle that throughput (which it likely can't, since the RPi4 chokes on such large bandwidths easily).


RAM speed is not the issue; the camera interface is limited to 120MB/s, and RAM bandwidth is >2GB/s on the Pi 3, 4GB/s on the Pi 4.


If you want to compress the frames, RAM bandwidth will be an issue.


Compressing 640x64@660 takes roughly the same amount of work as compressing 1280x720@30. You could even use the hardware h.264 encoder, as long as you can debayer raw 10-bit in software fast enough (or convince Broadcom employee 6by9 to make the internal Videocore firmware debayering resolution-agnostic; it's already fast enough to keep up with twice the amount of data when using the camera in 1920x1080@25 mode).

But you don't even need fancy compression; at those framerates things change really slowly, so simple delta encoding on the raw data will do wonders.
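
A minimal sketch of that delta idea, assuming 640x64 frames packed at 10 bits/pixel (51,200 bytes each) and using byte-wise deltas plus zlib as a stand-in for whatever cheap entropy coder you would actually run on the Pi:

  # delta_sketch.py -- hedged sketch of per-frame delta encoding on raw Bayer data.
  # Byte-wise deltas between consecutive packed frames, then zlib as a cheap
  # stand-in for entropy coding. Frame size assumes 640x64 pixels packed at
  # 10 bits/pixel = 51,200 bytes; adjust if the real stride differs.
  import zlib

  FRAME_BYTES = 640 * 64 * 10 // 8

  def delta_encode(prev, cur):
      # Subtract previous frame byte-by-byte (mod 256); at 660 fps most bytes
      # barely change, so the delta is highly compressible.
      return bytes((c - p) & 0xFF for c, p in zip(cur, prev))

  def compress_stream(frames):
      prev = bytes(FRAME_BYTES)            # all-zero reference frame
      out = []
      for cur in frames:
          out.append(zlib.compress(delta_encode(prev, cur), 1))
          prev = cur
      return out

  # Usage: frames = iterable of 51,200-byte raw frames read from /dev/shm/out.%04d.raw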


Compressing 660fps in real time is a much different task than 30 frames per second, notably because your latency budget is a lot lower. At 30fps you have 33ms to load the frame, compress it, and send it off to the destination; at 660fps you have 1.5ms for the same task, which does not necessarily get proportionally faster with the lowered resolution.


You have exactly as much time as at 30fps with 22x higher resolution. You don't need to constantly reuse the same 52KB buffer. Data comes at a steady snail's pace of 35MB/s. A Pi 3 should even be able to save it to a USB 2.0 HDD in real time with no compression.


Really? USB 2 is 300Mbps for Pi 3 Ethernet over USB 2; is it faster with an HDD? I ask because the snail's-pace 35MB/s is 280Mbps.


replied here https://news.ycombinator.com/item?id=20658147

last time I tried, a USB 2.0 HDD on a Pi 3 did 35MB/s on the dot


You do not need any special heavy compression. Just deltas between consecutive frames should give a 1:100 or better compression ratio at such a high frame rate. This is assuming you can keep both the pivot and current frames in the processor's SRAM.


My comment on the USB part was more about asking whether the increased bandwidth makes it possible to store it on a disk/network.


Most new devices have realised that storing RAW image data in realtime is impractical - it can be gigabytes per second!

Instead, one needs to use a GPU, DSP, or other dedicated hardware to process/compress the data into a common video or image format.


There is a drawback with Raspbian: while Raspbian itself is open source, the GPU code is closed source and you cannot debug it. What you can do is reverse engineer the I2C traffic from the Raspberry GPU to the camera (it is one-way): https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=1091...

And you can hack the camera by injecting I2C commands to achieve effects impossible with the normal Raspbian camera software, like taking odd/even frames with different shutter speed: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=2126...

For reverse engineering camera I2C traffic, the v1 camera ov5647 datasheet is useful to have: https://cdn.sparkfun.com/datasheets/Dev/RaspberryPi/ov5647_f...


I don’t know enough about video compression algorithms to do what I would like to do, so I started reading a bit more about it, starting with an overview of lossless compression techniques from 2012 [1] and looking at the data sheet [2] for the image sensor linked to from [3].

The first thing of note to me in [1] is the following:

> Digital video cameras perform a process called demosaicking before compression, which normally is lossy. While capturing images, digital video camera, uses only one of the three primary colors (red, green, blue) at each pixel and the remaining colors are interpolated through color demosaicking to reconstruct the full color image. A demosaicking algorithm is a digital image process used to reconstruct a full color image from the incomplete color samples output from an image sensor overlaid with a color filter array (CFA). The most commonly used CFA is Bayer Filter, where the output is an array of pixel values, each indicating a raw intensity of one of the three filter colors. The process of demosaicking, followed by lossy compression is irreversible and the video cannot be improved later on. Further, this process increases complexity, reduces compression ratio and burdens the camera I/O bandwidth.

> [...] if a lossless compression is performed first in the camera itself before demosaicking, then a sophisticated codec can be designed which is needed in applications where high visual quality has paramount importance. This motivated the use of lossless-compression first followed by demosaicking in medical videos where even the slightest degradation result is catastrophic situations. The algorithm uses a hybrid scheme of inter- and intraframe lossless compression to increase the compression and quality of the medical video data. The working of the encoding algorithm is briefed below and is given pictorially in Figure 1.

Skimming through the data sheet, I see a few things:

- Among the features on page 1 it mentions a 10-bit ADC, and “R, G, B primary color pigment mosaic filters on chip”.

- On page 10, there is a block diagram, figure 1.

- Starting at page 52 and throughout various pages of the remainder of the data sheet, there are figures that indicate to me that the output of the IMX 219 consists of mosaic pixel data. So if I am understanding correctly, demosaicing happens outside of the IMX 219.

So I am wondering, with the v2 camera module as a whole, is demosaicing performed on the module itself or in software on the Raspberry Pi? And if it happens on the module itself, can it optionally be disabled so that you can get the mosaic pixel array data?

Since the ADC is 10-bit, each pixel in the pixel array is presumably represented by 10 bits. The data sheet might say; I only skimmed through it on the initial read.

If so, then 640x64 pixels * 10 bits/pixel * 660 FPS ~= 270 Mbps ~= 34 MBps.

A benchmark from 2018 [4] puts the Raspberry Pi model 3B+ at being capable of writing to the SD card at about 22 MBps.

So if we get the mosaic pixel array data, the most simple and naive thing we could do would be to “crop” the image to say 360 pixels wide in memory by skipping past the 140 first pixels of each line and copying the 360 next pixels of each of the 64 lines for a few frames, writing batches of 360x64 pixels of cropped frame data to a single file on the SD card.

If the mosaic pixel array is what we are given by default, or we can get the camera module to send it anyway, then with discarding data we have a starting point. At this point we are able to record for as long as the capacity of our SD card will allow us. 128 GB SD cards are not terribly expensive so it should be possible to record data for like a couple of minutes. Then we can later post process the data.

And if that is possible, then one can start to get really serious and try to devise an efficient lossless compression algorithm specifically for our 640x64 pixels of video frames, and perhaps even optimize it for different use-cases like horizontal motion only, vertical motion only, and motion in horizontal and vertical directions at the same time.
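
A minimal sketch of the naive crop-and-dump idea above, assuming 640x64 frames of packed raw10 Bayer data (4 pixels per 5 bytes, 800 bytes per row, no extra stride padding); the column counts are kept as multiples of 4 so the cuts land on packed-byte boundaries and keep the Bayer phase:

  # crop_sketch.py -- hedged sketch of the naive "throw away columns" idea on
  # packed raw10 Bayer frames. Assumes 640x64 frames, 10 bits/pixel packed
  # (4 pixels -> 5 bytes), 800 bytes per row with no extra stride padding.
  ROW_PX, ROWS = 640, 64
  ROW_BYTES = ROW_PX * 10 // 8                     # 800

  def crop_row(row, skip_px=140, keep_px=360):
      # skip_px and keep_px are multiples of 4, so they fall on packed-byte
      # boundaries and (being even) keep the Bayer pattern intact.
      start = skip_px * 10 // 8                    # 175 bytes
      return row[start:start + keep_px * 10 // 8]  # 450 bytes

  def crop_frame(frame):
      return b"".join(crop_row(frame[r * ROW_BYTES:(r + 1) * ROW_BYTES])
                      for r in range(ROWS))
  # 360x64 @ 10 bit ~= 28,800 bytes/frame; at 660 fps that's ~19 MB/s,
  # under the ~22 MB/s SD-card write rate cited above.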

[1]: https://pdfs.semanticscholar.org/57ee/90a3e59003ec83c2f52aba...

[2]: https://raw.githubusercontent.com/rellimmot/Sony-IMX219-Rasp...

[3]: https://github.com/techyian/MMALSharp/wiki/Sony-IMX219-Camer...

[4]: https://www.jeffgeerling.com/blog/2018/raspberry-pi-microsd-...


Very cool. Can this camera also be hacked for amplified sensor reading for night vision? Infrared or UV photography? Long exposure photography?


Yes, there are NoIR versions of the v1 camera; they just lack the infrared filter normal cameras have (around $9 on aliexpress.com). And there are $12 versions of the v1 camera with a (hardware) IR-cut filter; depending on a photo sensor on the camera, the IR filter is enabled during the day and removed at night.


I was wrong; the $9 is for a NoIR camera with M12 lens mount and lens. I just ordered 4 new NoIR cameras for $2.84(!) including shipping! https://www.aliexpress.com/item/32946093276.html


Could you stream the frames over 10G LAN to record longer videos?


There is no Pi with 10G Ethernet. The new Pi 4 has only Gigabit Ethernet; the Pi 3B+ has Gigabit Ethernet over USB 2.0 (maximum throughput 300 Mbps).

The idea is interesting; I did some calculation. For the v1 camera, capturing 640xH frames can be done at framerate "42755/H + 13". Let's forget about the 13. raspiraw transfers raw10 Bayer data, that is 640 x H x 1.25 bytes; at the maximal framerate that gives 42755x640x1.25 = 34,204,000 bytes/s = 261Mbit/s(!).

So you definitely need the Gigabit Ethernet of the Pi 4; not sure whether the Pi 4 can actually stream out 261Mbit/s without losing stuff though ...

Just asked on the Raspberry Pi networking forum (261Mbps for v1, 464Mbps for v2): https://www.raspberrypi.org/forums/viewtopic.php?f=36&t=2482...
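
Roughly the same calculation as a sketch, taking the "42755/H + 13" v1 relation above at face value and assuming raw10 packing (1.25 bytes/pixel):

  # stream_budget.py -- sketch of the raw-Bayer streaming bitrate, taking the
  # "42755/H + 13" v1 framerate relation above at face value.
  def v1_fps(h):
      return 42755 / h + 13

  def stream_mbit(w, h, fps):
      return w * h * 1.25 * fps * 8 / 1e6          # raw10 = 1.25 bytes/pixel

  for h in (64, 128, 240):
      fps = v1_fps(h)
      print("640x%-3d @ %4.0f fps -> %5.0f Mbit/s" % (h, fps, stream_mbit(640, h, fps)))
  # ~280-295 Mbit/s here (decimal megabits); the ~261 figure above drops the
  # "+13" and appears to use binary megabits.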


So it's feasible for Pi4?


No answer on the networking forum yet. I cannot try it because I don't have a Pi 4 yet. It seems possible. An easy test would be to check whether this command works on a Pi 4:

raspividyuv -md 7 -w 640 -h 480 -t 0 -fps 90 -o - | nc someIp somePort

and on the someIp server a service listening on somePort just stores the data received. Because YUV has 12 bits/pixel, which is a bit more than the 10 bits/pixel of raw Bayer, the command produces a 316Mbps stream of data for the v1 camera (732Mbps for the v2 camera when recording with "-fps 180"). For testing, the service on someIp can be as simple as:

nc -l somePort > foobar


You should be fine with RPi3B+ internal gigabit NIC

https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=2085...

pi@raspberrypi:~ $ sudo iperf3 -c XXX.XXX.X.XXX

[ 4] 0.00-1.00 sec 37.9 MBytes 318 Mbits/sec 0 257 KBytes


very impressive.



