Back in 2019, I submitted a GSoC proposal for Apertus. They wanted to add a frame-serving capability [1] to their RAW image processing software OpenCine [2]. Even though my proposal was rejected, I ended up learning a lot from their coding challenge [3], which was mandatory for submitting a proposal.
Reading research papers to understand demosaicing [4], learning C++ coding guidelines and the RAW file format, and getting code reviews and feedback from Andrej and others at Apertus was really fun. At the time I was still a sophomore in college, so I asked a lot of questions and made a ton of mistakes. Thanks Andrej [5] and team for being patient with so many of us! :)
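For anyone curious what demosaicing [4] actually involves, here is a minimal sketch of the simplest variant, bilinear interpolation over an RGGB Bayer mosaic (the function names are my own; real pipelines use far more sophisticated algorithms):

```python
import numpy as np

def box3(a):
    """Sum each pixel's 3x3 neighborhood (zero-padded at the borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (H x W float array).

    Each sensor pixel records only one color; the two missing colors
    are filled in by averaging the nearest samples of that color.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    rgb = np.empty((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, raw, 0.0)
        # Output pixel = mean of the known samples of this color
        # inside its 3x3 neighborhood (at a pixel's own color site,
        # this reduces to the recorded value itself).
        rgb[..., c] = box3(known) / box3(mask.astype(float))
    return rgb
```

A flat-field input (all sensor sites reading the same value) comes out as a uniform image, which is a handy sanity check for any demosaicer.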
The problem with any non-mass-produced camera currently seems to be the availability of image sensors. The only sensors available in small batches seem to be for machine vision, and are priced accordingly :/ The CMV12000 used here costs $2000-$2200 per sensor - more than many full-frame mirrorless cameras.
Another problem is that there are no development boards, as far as I'm aware. That's why this, IMO, could be huge for anyone else interested in making an open camera: the Apertus project seems to have a modular architecture and breakout boards for components!
That said, the sensor does have a global shutter, which would be huge for cinema, though I'm not sure at what tradeoff in, e.g., dynamic range or color depth.
Global shutter is pretty desirable for machine vision applications like robotics, especially if that shutter is also externally triggerable via a GPIO pin, so that you can perfectly sync up all your cams.
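The GPIO pulse itself is hardware-specific, but the payoff of external triggering is that every camera can tag each exposure with the trigger pulse count, after which aligning the streams is pure bookkeeping. A sketch of that matching step (the function and data layout are hypothetical, just to show the idea):

```python
def align_by_trigger(streams):
    """Match frames from several externally triggered cameras.

    streams: dict mapping camera name -> list of (trigger_index, frame)
    tuples, where trigger_index counts pulses on the shared trigger line.

    Returns a list of (trigger_index, {camera: frame}) for every pulse
    seen by *all* cameras; pulses with a dropped frame on any camera
    are simply skipped.
    """
    indexed = {cam: dict(frames) for cam, frames in streams.items()}
    common = set.intersection(*(set(d) for d in indexed.values()))
    return [(i, {cam: d[i] for cam, d in indexed.items()})
            for i in sorted(common)]

# Example: the left camera dropped the frame for pulse 2.
streams = {
    "left":  [(0, "L0"), (1, "L1"), (3, "L3")],
    "right": [(0, "R0"), (1, "R1"), (2, "R2"), (3, "R3")],
}
aligned = align_by_trigger(streams)  # pulses 0, 1 and 3 survive
```

With a free-running (non-triggered) shutter you would instead be matching frames by wall-clock timestamps and hoping the exposures overlap, which is exactly the mess a shared trigger line avoids.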
Are broken full-frame cameras becoming available? Could large sensors be sourced from them? I see quite a few sensors available on eBay. The problem then might be getting specs.
It's just far too expensive. I'm the exact target audience for this thing, and it's beyond my scope financially. Create a barebones proof-of-concept kit; as it stands, this is a non-starter for 70% of the audience interested.
At that budget? A basic Nvidia Jetson board, a fancy mono sensor of choice, a bunch of extra CSI lanes beyond reasonable scope - and I'd prefer to embrace a software-first pipeline/workflow stack for the data, akin to GNU Radio Companion. That's where my itch would be. At this budget I'd want to be experimenting with stitching sensors together; I'd rather focus there if I'm throwing money away. Otherwise I'd scratch the itch more wisely with a Pi stack :(
A Canon 5D Mark III with Magic Lantern could produce superb RAW footage. I never graded better source material.
Unfortunately, production is a huge gamble. You cannot preview recorded footage (only a black-and-white, low-res, low-framerate preview). Post-production requires an extra step of converting the footage to something more common like ProRes 422, which is another place for errors.
TL;DR: it is not suitable for professional production because of the lack of preview on set.
Professional cameras often do not have autofocus - compare some RED camera models. You either prepare your focus points ahead of time, or you work in a team where your camera assistants act as the "auto" focus.
[1] https://lab.apertus.org/T763
[2] https://www.apertus.org/en/opencine
[3] https://lab.apertus.org/T872
[4] https://en.wikipedia.org/wiki/Demosaicing
[5] https://github.com/BAndiT1983