Hacker News
Recording 660FPS Video on a $6 Raspberry Pi Camera (2019) (robertelder.org)
293 points by tosh on Dec 27, 2021 | hide | past | favorite | 63 comments



This is possible by interfacing directly with the sensor's MMIO registers. I've always found it fascinating how much flexibility hardware provides compared with the typically rigid functionality that drivers expose to userland.

I wonder what we could make out of many high-end cameras if only we had enough patience and motivated people to reverse engineer everything that has been hidden away by comfort abstractions.


If you want the answer to that question, look no further than the Magic Lantern project. It's not supported like it used to be, but those guys were incredible at unlocking what Canon DSLRs could really do. I was a legend in my film community when I booted it onto a 5Diii and got it shooting 1080p raw. All I did was follow their instructions lol


Last time I checked, there was nothing like that for Nikon. What's the reason? Or is there something?


I've heard that Canon's DIGIC processors are just slightly customized versions of the DaVinci media processors, which have all documentation openly available, while Nikon's EXPEED processors are based on a closed Socionext product line with basically zero public documentation and heavy customization.

Though there used to be a project which worked on a few of the older EXPEED-based cameras.


NikonHacker.com


thank you!


Those guys were my intro to tech! I remember shooting RAW on an old 50d and Dual ISO on my 6d!


Was your 50D smoking after just a minute? Lol


No, it handled it like a champ! It could actually record higher resolution than my much newer 6d. It used a CompactFlash card, which was much faster than an SD.


Fun stuff! Yeah, I did the raw hack on a 5diii. Gorgeous image, couldn't believe the DR I was getting. Canon should be ashamed for keeping all that firepower under lock and key! Guess that wouldn't help sell C-cameras though lol


Fun times! Thanks for bringing me back


for anyone wondering, "MMIO registers" stands for "memory-mapped I/O" registers:

https://en.wikipedia.org/wiki/Memory-mapped_I/O
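On the Pi this is typically done by mmap()ing /dev/mem at the peripheral base address and poking 32-bit register words. A minimal sketch of the mechanics, with a temp file standing in for the register window (the offset and value here are hypothetical, purely for illustration) so it runs anywhere without root or hardware:

```python
import mmap
import struct
import tempfile

# Illustration only: on a real Pi you would mmap /dev/mem at the SoC's
# peripheral base address (requires root). Here a temp file stands in
# for a 4 KB register window so the sketch runs anywhere.
REG_WINDOW_SIZE = 4096
SHUTTER_REG_OFFSET = 0x10  # hypothetical register offset

with tempfile.TemporaryFile() as f:
    f.truncate(REG_WINDOW_SIZE)
    regs = mmap.mmap(f.fileno(), REG_WINDOW_SIZE)

    # Write a 32-bit value to the "register", little-endian like the BCM SoC.
    regs[SHUTTER_REG_OFFSET:SHUTTER_REG_OFFSET + 4] = struct.pack("<I", 660)

    # Read it back, as a driver would poll a status register.
    value, = struct.unpack("<I", regs[SHUTTER_REG_OFFSET:SHUTTER_REG_OFFSET + 4])
    print(value)  # 660
    regs.close()
```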


> everything that has been hidden away

It's not hidden, it's stripped according to actual product specs.

Industrial cameras (Basler/IDS) will give you access to most of the sensor capabilities if you want to prototype something.


Yeah, but for the price of a good system camera you will get only a basic industrial one, with a tiny sensor and no AF.


That's the price of flexibility.


It's not some inherent price; it's an arbitrarily imposed one.

Older products, even early digital ones from the 70s and 80s, were more flexible for the end user (and often came with manuals and schematics), even in their consumer versions.


An industrial product has different requirements compared with a consumer product. I am really curious how many hours of continuous operation my camera can sustain.


Magic Lantern is a firmware mod for Canon cameras and it gives you more features and control over your hardware. I love this kind of low level mods.

https://magiclantern.fm/features.html


Also see CHDK [1], a FOSS firmware replacement for Canon point-and-shoot cameras.

[1] https://chdk.fandom.com/wiki/CHDK


Ironically, I find the popularity of hardware hacks to be a sign of general stagnation. There were periods in the past when a hack would not give as much of a performance boost as simply buying the next version of the product a few months to a year later; hacks were typically not worth the time and would quickly be rendered obsolete. Now, more and more, they are worthwhile.


Magic Lantern basically is the reason my t3i got me into film festivals lol


Link is dead.



Seems confirmed by the Magic Lantern team: https://old.reddit.com/r/MagicLantern/comments/rpvv2n/fyi_si...


works for me


Does it work currently? I can't get the domain name to resolve.


Okay, that's bizarre. When I first clicked the link, the domain name failed to resolve for me too. When I reloaded it, though, it resolved. If anyone else is having trouble, give it a bit and try reloading the page.


The domain magiclantern.fm expired on 12/13/21; its name servers were updated on 12/27 to EXPIRED.DNAPI.NET and DOMAIN.DNAPI.NET.

25.0%: Refused at domain.dnapi.net (193.227.117.66)

25.0%: Refused at expired.dnapi.net (194.50.187.66)

25.0%: Query timed out at dns1.idnz.net (108.166.170.106) While querying domain.dnapi.net/IN/A

25.0%: Query timed out at dns1.idnz.net (108.166.170.106) While querying expired.dnapi.net/IN/A

Some opendns resolvers have it cached still: 176.9.31.214


I've tried to resolve the domain via 8.8.8.8, 1.1.1.1, 9.9.9.9, 208.67.222.222, and 4.2.2.1 without luck. They all return status: SERVFAIL.


Dead for me too.


The major limitation throughout this process is the speed at which memory can be copied and transmitted. Only 20-40 seconds of video can be recorded at a time due to memory exhaustion on the Raspberry Pi. The resolution of the recording is also limited: on the $6 camera, a maximum of 640x64 can be recorded due to limitations on memory bandwidth.


Does anyone know if that means the Pi 4 can record for a longer period of time (it supports up to 8GB RAM iirc)?


Probably somewhat, but not to the degree you might think. This relies on DMA in the CSI controller, but only a fraction of a Pi 4's RAM is accessible by DMA. Basically, from the VideoCore side there's only a 32-bit bus to main memory, which has to be shared with MMIO space, and there are mirrors with/without caching, sort of like on a MIPS. It probably maxes out after a couple of GB or so.


Could you split the output from a single camera to multiple Pis?


I haven't read the article, but the parent comment makes it sound as though it's not so much the capacity of the bus as the speed of the bus itself. This is usually expressed as a combination of CAS latency and frequency in MHz; the Pi 4, using DDR4, is likely in the neighborhood of 2133-3600 MHz. We don't often think of reading/writing RAM as taking any time at all, but in actuality memory speed is often a significant bottleneck in high-performance systems.

If we are just talking about 720p 30fps, then we likely aren't really that close to those memory bottlenecks, so the only limiting factor would be how much storage space you have available to that device.


It's nonsense, as I already wrote last time this was posted two years ago:

RAM speed is not the issue: the camera interface is limited to 120MB/s, while RAM bandwidth is >2GB/s on the Pi 3 and 4GB/s on the Pi 4.

Compressing 640x64@660 takes exactly the same amount of work as compressing 1280x720@30. You could even use the hardware h.264 encoder, as long as you can debayer raw 10-bit in software fast enough (or convince Broadcom employee 6by9 to make the internal VideoCore firmware debayering resolution-agnostic; it's already fast enough, keeping up with twice the amount of data when using the camera in 1920x1080@25 mode). But you don't even need fancy compression: at those framerates things change really slowly, so simple delta encoding on the raw data will do wonders.
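The delta-encoding idea can be sketched in a few lines: store each frame as its per-pixel difference from the previous one, so nearly identical adjacent frames produce mostly zeros, which a generic compressor then shrinks dramatically. A toy illustration on 4-pixel "frames":

```python
def delta_encode(prev, cur):
    """Store only the per-pixel difference from the previous frame (mod 256)."""
    return bytes((c - p) & 0xFF for p, c in zip(prev, cur))

def delta_decode(prev, delta):
    """Reconstruct the current frame from the previous frame plus the delta."""
    return bytes((p + d) & 0xFF for p, d in zip(prev, delta))

# Two nearly identical "frames": at 660 fps, adjacent frames change very
# little, so the delta is mostly zeros and compresses extremely well.
frame0 = bytes([10, 20, 30, 40])
frame1 = bytes([10, 21, 30, 40])

delta = delta_encode(frame0, frame1)
print(list(delta))                           # [0, 1, 0, 0]
assert delta_decode(frame0, delta) == frame1  # lossless round-trip
```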


According to the video in the article, the data rate is 66 MB/s (100kB per frame). So a Pi with 8GB of RAM should be able to capture almost 2 minutes of footage.


Then why not tunnel the sensor output through the lowest available network layer using the NIC, and reassemble it on a faster machine connected point-to-point to the RPi (no IP stack involved)?


At that point, the NRE cost would dictate buying a high-end system.


It looks like the repo [0] hasn't been touched in a couple of years so it's tough to say what if anything would need to change, but this application sounds like it would benefit from a Pi 4.

[0] https://github.com/RobertElderSoftware/PatientTurtle


Would it be possible to go for a lower framerate and gain more flexibility regarding the resolution?


That’s how it works with more conventional camera set ups. I guess it just boils down to how they program it but there’s no reason that shouldn’t be the case.


Would parallelizing it to two distinct cores help with the issue?


From my reading of the issue, no. The limiting factor on the Pi side is the available storage buffer. The camera sends raw frames down the wire and the app caches them in RAM before concatenating them with ffmpeg. Could a Pi 4 with a USB3 NVMe drive be used? Maybe, if the writes are faster than the camera.

https://www.jeffgeerling.com/blog/2020/fastest-usb-storage-o...


No, this is about memory bandwidth.


No, it is not. The camera interface on the Pi can't work faster than 120MB/s. Do you really think 120MB/s is a bottleneck?


Not if you're bottlenecked on memory bandwidth or capacity.


The 1007 fps he said he got out of the US$6 V2 camera is over 8 times the 125 fps at which exporting the camera from the US requires an arms dealer license: https://www.bis.doc.gov/index.php/documents/regulations-docs... but that only counts if the shutter speed is under 1 μs.

So these cameras should still be safe from ITAR.


I'm a bit confused here. There are a few Sony cameras (the RX series) that have a 960fps capability, with exposure times of under 1/1000th of a second. That's around 8 times the 125 fps limit set by ITAR, yet those cameras are sold pretty much everywhere around the world. What am I missing/not understanding here?


1 μs is less than 1000 μs. (And, yes, also they are probably not made in the US.)


The 1 μs is the time the electronic shutter takes to act, and there's no spec sheet for that that I'm aware of in the Sony cameras. As for exposure times at high frame rate, they go up to (down to?) 1/10,000th of a second, so definitely the electronic shutter must be working faster than 100 μs to reach that.

But as someone already pointed out, my main issue was understanding the "International" (the I in ITAR) as meaning it was a treaty signed by many countries, when it actually means exporting from the US to international markets.


The language in the law is "electronic shutter speed (gating capability)", which I think is the same as the "exposure time." I mean, "exposure" is sort of a misnomer for digital video cameras: the focal plane is exposed to light all the time because there's no shutter. It's just that some of the time it's turned off ("gated"). And when you set the "shutter speed" on a mechanical camera to 1/60, that's the exposure time. You seem to be talking about something slightly different, like maybe the sensitivity rise time?


Japan doesn't have the same export laws as the US.


Right, that was it. I just took the "International" part of the name to mean it was signed by many countries, not "from the US to the rest of the world".


As webcams were hard to come by at the onset of the pandemic, I resorted to a Raspberry Pi + HQ cam + lens + raspistill preview + HDMI capture device (instead of USB). I won't defend the total cost (probably more than an acceptable webcam), but the HQ cam results were quite good, with a seemingly high enough framerate (not 660 fps though!). An added benefit is the interchangeable lens options, including ones that will give the bokeh effect.


Great timing on this article. I have an rpi4 and a pi cam v2 which I want to have some fun with, and I was in need of inspiration. I am currently using it as a security camera while I am away on holiday, but I wish to play both with slow-motion video and perhaps some object detection using opencv. The latter I have already tried, but all the articles I found seem out of date, so I will need to spend some more time on it.


This could be a big step toward ultra low latency streaming with an rpi.

I tried to do this (low latency streaming) circa 2016 and ran into two issues: one was high-framerate or subframe capture, and the other was subframe encoding.

With high framerate capture you can fake the subframe capture by capturing a whole frame and encoding just a subset.

Now to get the h264 drivers to do subframe encoding…
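The "fake subframe capture" trick above is just cropping: grab the full frame, then hand the encoder only the region of interest. A toy sketch with a frame represented as a list of pixel rows (the coordinates are arbitrary example values):

```python
def crop_subframe(frame, x, y, w, h):
    """Take a w x h window out of a full frame (a list of pixel rows)."""
    return [row[x:x + w] for row in frame[y:y + h]]

# A fake 8x4 "frame" of pixel values; encode only the 4x2 region of interest.
frame = [[r * 10 + c for c in range(8)] for r in range(4)]
sub = crop_subframe(frame, x=2, y=1, w=4, h=2)
print(sub)  # [[12, 13, 14, 15], [22, 23, 24, 25]]
```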


I'm interested in synchronizing sensor data to the camera stamps. For example, strain gauge to measure model rocket engine performance. Does anyone know of a project or approach that would allow for synced high speed sensors?


Low tech approach: I wonder if you could put a flashing LED in the camera's view, and sync that with the strain gauge data logging, kind of like an old fashioned move clapper board?
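The clapper-board alignment reduces to finding the first frame where the LED patch goes bright, then matching that frame to the LED-on timestamp in the data logger's record. A minimal sketch, with hypothetical per-frame brightness values standing in for real measurements:

```python
def first_bright_frame(led_samples, threshold=128):
    """Index of the first frame whose LED-patch brightness crosses the threshold."""
    for i, brightness in enumerate(led_samples):
        if brightness >= threshold:
            return i
    return None  # LED never seen

# Mean brightness of the LED patch in each frame (hypothetical values):
# dark for three frames, then the LED switches on.
led_samples = [12, 15, 11, 200, 210, 205]
sync_frame = first_bright_frame(led_samples)
print(sync_frame)  # 3

# If the logger recorded the LED turning on at t0, frame 3 corresponds to
# t0, and at 660 fps each subsequent frame is 1/660 s later.
```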


That sounds like an excellent idea, thanks!


An approach I’ve seen used in the past was to add lines to the video frame and write other sensor data in “extra” lines in the frame before writing to disk. You can then parse the data after the fact and know what the sensor data was for each frame.
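That scheme can be sketched as packing the readings into the bytes of one extra "pixel row" appended to each frame, then unpacking it when parsing the recording. A toy version (frame as a list of byte rows; the two readings are hypothetical strain/temperature values):

```python
import struct

def append_sensor_row(frame_rows, readings, row_width):
    """Append an extra 'pixel row' that actually carries float32 sensor readings."""
    payload = struct.pack(f"<{len(readings)}f", *readings)
    row = list(payload) + [0] * (row_width - len(payload))  # pad to row width
    return frame_rows + [row]

def read_sensor_row(frame_rows, n_readings):
    """Parse the sensor readings back out of the last row."""
    payload = bytes(frame_rows[-1][:n_readings * 4])
    return list(struct.unpack(f"<{n_readings}f", payload))

frame = [[0] * 16 for _ in range(4)]                # toy 16x4 frame
tagged = append_sensor_row(frame, [1.5, -2.0], 16)  # e.g. strain + temperature
print(read_sensor_row(tagged, 2))  # [1.5, -2.0]
```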


Would it also be possible to increase the exposure time to above the 10 seconds which is officially supported by the raspberry pi V2 camera?


That's amazing. I couldn't even get decent 24fps out of it....



