I wish them all the best - it's high time people had actual control of their cameras. I mean both control over the hardware and repairability. While CHDK [0] may offer the first, they do nothing about the second.
As much as this is important, I am convinced Open things are most useful at the lower end, opening doors to people who would otherwise just not be able to participate.
I would have a hard time justifying getting one of those for myself, because of the price, as a non-pro.
6-7 years ago, I would have been saving up for this. That was a very different market for video production professionals.
I never liked hacking cameras that were not made to be hacked, but this one is. It’s not the first, but the other projects appeared to be barely getting by.
The sad part is just how long this has taken to come to fruition, and how little sense it makes economically at this point.
For one, many of the problems this solves in a holistic and better way have found standard solutions widely implemented in the indie production market.
RED cameras are fairly prolific and accessible to a lot of people these days, and other imperfect solutions are popularly adopted as well.
While this option is all-around more capable, the modding workflow is still foreign and will take some time to adapt to.
Purely economically, there is a lot less money flowing into indie production outfits, particularly from advertising and indie entertainment ventures like YouTube and online media channels. Advertising is embracing data-driven decision models that crowd out the costly creative production efforts that would properly utilize a camera like this.
In short, this thing is for artists, and artists are screwed.
If you are wondering who the target audience for this cine camera is, then you are not the target audience.
To those asking why it doesn't have AF: no cine camera has AF.
As for lens selection: the camera seems to be modular, so there shouldn't be much stopping Axiom, or yourself, from getting a different lens mounting plate.
While it's accurate that cine cameras use a human focus puller, it's an oversight in this camera's engineering not to include a way to drive the lens's focus or aperture. (Blackmagic's early cinema cameras used these 'dumb' mounts too, and it frustrated users because they couldn't electrically run the lens's controls.)
Having an API for the lens and aperture would have opened it up for developers to use these creatively, such as focusing on one point in the frame and then zipping focus to another point using easing functions.
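A rack focus driven by an easing curve could be sketched like this. This is pure illustration: `focus_pull`, the easing curve, and the millimetre positions are invented for the example, not any real lens API:

```python
import math

def ease_in_out(t):
    """Smoothstep-style easing: maps 0..1 to 0..1, slow at both ends."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

def focus_pull(start_mm, end_mm, frames):
    """Per-frame focus positions for a rack focus between two distances."""
    return [start_mm + (end_mm - start_mm) * ease_in_out(i / (frames - 1))
            for i in range(frames)]

# Rack focus from 1.2 m to 3.0 m over 48 frames (2 s at 24 fps)
positions = focus_pull(1200, 3000, 48)
```

Each position would be handed to the focus motor once per frame; swapping the easing function changes the feel of the pull.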
Blackmagic make video cameras for a specific niche and they're great (I want one myself). But most of their customers buy into the Micro 4/3 mount system. PL mount type cine lenses have never had autofocus and likely never will. The job of the focus puller is not just to keep things in focus but to adjust in harmony with the mood and pace of the scene. There are separate remote lens control systems for when a camera is mounted on a gantry and so on. Framing and focus are treated as distinct jobs on a large set.
Cine lenses don't have camera-operated controls, so it wouldn't work.
You could control the aperture of some 'interesting for cinema' lenses (the Zeiss Otus for Canon mount, for example) from the camera, but this would require an unwarranted amount of reverse engineering.
Cine lenses almost universally have variable aperture. When a zoom is "constant aperture", what it means is that the maximum aperture is the same across the entire focal range, not that you cannot close it down when needed.
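The arithmetic behind "constant aperture": the f-number is focal length divided by entrance pupil diameter, so a constant-f-number zoom (the 70-200 mm figures below are just an example) must physically enlarge its working aperture as you zoom in:

```python
def entrance_pupil_mm(focal_length_mm, f_number):
    """f-number = focal length / entrance pupil diameter, so the
    physical aperture a given f-stop needs grows with focal length."""
    return focal_length_mm / f_number

# A "constant f/2.8" 70-200 mm zoom keeps f/2.8 available across the range,
# which means the lens must supply a much larger working aperture at the
# long end; it can still be stopped down whenever needed.
print(entrance_pupil_mm(70, 2.8))   # 25 mm at the wide end
print(entrance_pupil_mm(200, 2.8))  # ~71.4 mm at the long end
```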
Sure, and that's incredibly useful for event and documentary filmmaking.
When it comes to cinema _movies_ (while it's true the EOS C300 Mark II has dual pixel AF) there will be a focus puller.
Relying on AF will end up biting your ass on a film set. Auto-focus doesn't know about the aesthetic of the film, only that something (a face) is in focus.
On (movie) sets the cinematographer will be making those decisions, not AF. While it does "...[enable] the user to see if they are focused in front of or behind the subject through visual observation, [allowing] quick and accurate manual focus to be achieved", it should only be used in movie making as a tool, not a crutch.
You're technically correct on the existence of AF in the C300 MII.
Edit: Cine cameras are beginning to delve into AF but it is not replacing a focus puller anytime soon.
It's a great effort, but such development is too slow and resource-limited to really challenge commercial solutions. They don't get to make their own ASICs (e.g. Canon's DIGIC), so they need to use limited and expensive FPGAs; they can't make their own sensors (e.g. RED's Dragon), so they need to use the CMV12000, a sensor for scientific and industrial applications: fast, but not optimized for image quality (from my limited knowledge! I could be wrong). Most likely they won't get access to the latest and greatest OEM sensors like Sony BSIs.
By the time it's out, you could buy either a fully integrated solution that's far ahead technically - from the likes of RED, Canon, Blackmagic etc - or if you're after some exotic application you could just use an industrial camera from IDS or Ximea.
Yes, the customization and capabilities around this camera could be endless, but it'd take something really special on that end to overcome the hardware limitations coming from the development model and make it a viable professional tool.
I'm not sure your concerns are completely valid here. We've reached the point where consumer phone cameras are good enough for 90% of photography. At this point there's huge value in giving better access to the platform itself to start applying machine learning techniques directly on the camera.
In addition, the FPGA you call limited I'd argue gives access to powerful reprogrammable logic. Why hard-code image processing algorithms when you can update them as new techniques come out?
Not only is this an amazing sensor (180fps, global shutter) but it's using cutting edge technology. I'd argue that it's possible sensor technology is reaching some of its limits and an industrial sensor here can match the quality of DSLR sensors. It may only be 12MP but Google and Apple have both shown that intelligent algorithms can produce amazing results from sensors that size.
>> At this point there's huge value in giving better access to the platform itself to start applying machine learning techniques directly on the camera.
What does machine learning have to do with photography? And if there's a good answer to that, why does it need to happen on the camera?
I'd agree with parent that ML is one of the killer apps of more open professional camera hardware.
Nowhere I can think of (other than Android, but a lot of caveat there) has an open hardware upstart challenged entrenched commercial players for legacy needs and won.
Where you do win is by targeting emerging needs, especially ones legacy commercial players are ill equipped to take advantage of (due to institutional inertia or ugly tech stacks).
As for why ML on edge devices: a large amount of work is going into running models (or first passes) on edge devices with limited resources (see articles on HN). I would assume they have business reasons.
But offhand, almost everything about vision-based ML in the real world gets better if you can decrease latency and remove high-bandwidth internet connectivity as a requirement.
It relies on the fact that nobody takes really original pictures, hence you can build a large enough data set and apply learned techniques/attributes/behavior to "new" pictures. Computer vision = small standard deviation.
I think the point is that the low-level image sensor functions such as timing, readout, buffering, and error correction don't change much. Therefore it's more cost-effective to do them in an ASIC rather than pay a premium for FPGAs.
Give it some time - they just launched! If it becomes popular they will have more funding to work with and have the ability to advance the tech. You don't enter a patent briarbush market with the best solution the world has ever seen, you make a good enough showing and grow.
I use Panasonic's midrange GH3 and GH4 cameras for video: 16MP sensor, Micro Four Thirds mount. For general-purpose cinema, they will shoot better than the sensor in this camera (which isn't to say it's the best thing in town, it's not, but that's kind of my point), and the stuff they're flogging, like on-the-fly color conversion, is not particularly useful in the center or edge cases compared to getting high-quality, raw, as-ungraded-as-you-can-get material to work with in post.
It's a neat idea. Your implication that it's good-enough doesn't scan to me, as even a prosumer videographer. Maybe I'll be wrong--but Blackmagic is already vastly ahead in the pure-cinema space, so...
Their competition is not the GH3/4; it's other cine cameras. The GH3/4 isn't even in the same ballpark: 12-ish stops compared to 15 on the Blackmagic and the Apertus, global shutter vs. rolling. It has 300 FPS at 10-bit and 132 FPS at full 4K res! It's Camry vs. Ferrari.
Of course the GH3/4 are not in the same ballpark--but in terms of image quality, they're about where you'd slot this one (what you lose in sensor size you get back in sensor density, at least in the general case). 300FPS is cool if you need it, but that's a pretty rare thing--image quality's a constant need.
The Axiom camera doesn't appear to be in the same ballpark as those other cine cameras, either, which is the disjoint in here for me. It's trying to make its bones on being "hackable" instead of "a great camera". I'm not sure there's that middle ground, unless it happens to come in significantly cheaper than the BlackMagic stuff (which is my next upgrade, they're fantastic).
Well, you're dismissing things like on-the-fly color correction, when pre-grading is an important feature for a lot of productions. High FPS is also not a rare requirement: sports, action, explosions, water splashing, hair commercials, and so on all benefit from high-FPS capture. These are all features that the competition advertises. Things like high dynamic range and a file format that allows for easier post-production are real, tangible benefits. I'm not quite understanding where you're coming from.
But isn't the point of this to be able to hack it and add things that you need for your use case? Nobody said the arduino or raspi was meant to replace your day to day compute. This is targeted as a hackable base system to learn about the technology and stimulate innovation. Right now hacking cameras requires insanely niche knowledge and reverse engineering skills.
If the point is to hack, you totally should buy an Arduino. (I just bought my college-age EE brother one. He's messing with it right next to me. It's cool.) It is also orders of magnitude cheaper.
But for video, and video specifically, a "hackable base system" is not a raison d'être beyond the niche-of-a-niche hacker community, who aren't going to pay you materially for the thing in the first place. This stuff is expensive! Near as I can tell, something like this needs a reason for videographers, not computer people, to pay for it in order to get the traction needed to expand and progress. The set of people who need to "add things" to a video camera for their use case is small, because there are few use cases that are not better served by getting the highest-quality, cleanest raw footage you can (which is not helped by adding, say, inline color correction), storing it in the least-lossy format you can, and then transforming that output down the line, either in post/editing for recorded media or in your better-equipped vision mixer for live.
The most interesting "hardware hacks" I can think of for cameras would be in the focus/image stabilization arena, but I don't think you get much insight into that with the passive E-mount (could be wrong, I don't use Sony) as opposed to an active E-mount or Panasonic's Power/Mega OIS stuff in the MFT. For Power/Mega OIS in particular, my understanding is that the sensors are in the lens but the brains are in the camera--could be some novel things to do there.
This is opinion, but opinion from a passionate cine nerd:
Good stabilization techniques don’t happen in the lens and passive E mount is perrrfect for this camera.
Stabilization is a mount's job, and not really a hack. Most camera hacking these days is done with SLRs, which are not "real" video cameras, for lack of a list of technical explanations. Magic Lantern is a popular DSLR-hacking tool, and it's problematic to use. Think trying to use Windows as if it were Linux.
For a traditional cinema camera, I agree that a passive E mount makes total sense. For a video camera where the focus is "hacking" on it, I disagree, because it reduces the breadth of "hackable" bits of the thing. This gap goes back to my original "I don't know who this is actually for".
I would, however, contest that stabilization is just the job of the mount--optical stabilization relies on the lens, and it's pretty valuable to me. I've broadcast with both Panasonic's Power OIS lenses and non-OIS lenses and the difference is pretty stark. (And given that live broadcasts pretty often just use Panasonic GH's or BlackMagic's Pocket or Micro cameras, it's basically the same tech and thus something to consider.)
Further edit: 'bprater made a good point elsewhere in the thread: an active mount would allow for driving focus through software, too. I don't mean autofocus, but the ability to automate focus. For example, I wrote an application to control my video mixer to better be able to do a one-man show, and my GH3/GH4 have decent mobile apps for one-man control. Being able to have the camera recall correct focus settings when I'm moving between standard spots in my studio would be a pretty useful thing to trigger without getting behind the camera! Which, once more, goes back to "hey, so who is this SSH-capable, 'hackable' camera actually for?".
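That kind of focus-preset recall could, in principle, be scripted over the camera's SSH access. Everything here is hypothetical: the host name, the `lens_ctl` command, and the preset distances are invented for illustration, not part of any published AXIOM interface:

```python
# Hypothetical sketch: the host name, the lens_ctl command, and the preset
# distances are all invented for illustration; AXIOM publishes no such tool.
FOCUS_PRESETS_MM = {"desk": 1450, "whiteboard": 2600, "door": 5200}

def recall_command(spot, host="operator@axiom.local"):
    """Build an ssh command asking the camera to drive focus to a stored spot."""
    position = FOCUS_PRESETS_MM[spot]
    return ["ssh", host, f"lens_ctl --focus {position}"]

# e.g. subprocess.run(recall_command("whiteboard")) from a foot switch
# or a macro on the video mixer, without touching the camera.
```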
Overall, I think your point is well-made. The SSH capabilities are something like tone-deaf considering the market. But, I'll answer who the camera is supposed to be for.
It's for me, and other nofilmschool.com readers. But, I'm not going to buy it. I've awkwardly transitioned to using CGI as my medium, but if I were still using camera, I might be considering a purchase of one of these.
It's the latest in a chain of attempts at upending the big companies who lock features on their cameras and have single-handedly held back indie film production by miles, for years.
None of these attempts really got off the ground. RED gave the appearance of being a savior many years ago, but RED's plan all along was just to bite off the opportunity and ultimately side with the camera nazis.
As for the mount, I never even learned to use focus electronically controlled by the body, or zoom, or stabilization. I would have no use for any of them. Passive E-mount does everything I could conceive of needing.
When somebody who this camera is for wants to control focus wirelessly, they rent a Preston system or one of the newer ones like a Lenzhound.
In the interests of accuracy, the Axiom project has been in gestation for >5 years so they're off to a slow start. But they have a lot of good will in the pro cam community because they were careful to keep delivery expectations low and warned people not to structure projects around the hardware any time soon.
Latest and greatest is severely overrated - that's something you throw money at if you already have a hot project and need additional hype. Sometimes it works out, sometimes not (e.g. Collateral was a very successful ad for the Viper camera's relatively good low-light performance; Public Enemies not so much for a different model).
I've been following this project for a few years, and while it's got a slow development/adoption curve, it's on the right track both economically and technologically; eventually it'll become the Linux of cameras. The thing is, we're already in an era of technological abundance where digital photography is concerned; commodity solutions are already good enough for pretty much all commercial purposes. You can shoot an indie feature on an iPhone; even huge-budget films like the Avengers franchise use Blackmagic Pocket cameras as crash cams (cheapish cameras that you set up to run automatically around stunt cars to provide a second or two of dramatic first-person footage).
There will of course always be more highly engineered offerings out there, and those will be important for highly specialized tasks like astronomy and scientific work. And ultradense sensors might be deployed to allow photography at multiple focal depths, but even that is likely to find its way into consumer gear before long. At this point the industry focus is not so much on which camera has the best specifications as on workflow and support. Blackmagic have done incredibly well for themselves with a (relatively) technologically inferior sensor/package by being more open than RED and thus building a larger and stronger community at a somewhat lower price point, for example. It's a good example of the tortoise beating the hare over the length of the race.
You're also making a huge compromise on lenses when going with this approach. When it comes to photography, lenses mean more than the camera 95% of the time (unless you're doing sports/action photography, where you need good AF).
With this it's all manual lenses plus adapters, which always involve compromises compared to use on their native mount.
It's super-cool technically but I don't see it taking over anytime soon.
I think you might have missed the part where this is a digital cinema camera. Cinema cameras rarely have AF, and are almost always manual lenses with a dedicated person assigned to pull focus.
Also, being able to adapt lenses is a huge bonus. Look at the Micro Four Thirds market: there are so many adapters available, and in wide use, to enable a wider selection of lenses. It's pretty much the only way Sony cameras are used; the selection of Sony native-mount lenses is limited, but adapt to a PL mount or EF mount and the world opens up significantly.
Ah, good call. It was missing from the title so I was assuming they were going after the traditional camera market. For cinema they might have a better shot.
I've got a non-trivial amount of Canon gear (300 f/4, 35 f/1.4, 135 f/2, etc.) and looked into the Sony route since their A7 line looks killer. In the end all the adapters had compromises, so I decided to stick with Canon. From all reports, the AF drive on the adapters still has a bunch of edge cases.
The pixels on the A7SII are larger than those on other similarly sized sensors. That's why the megapixel count is lower: larger pixel size means more photons absorbed in the same amount of time. I've also had one connected directly to a telescope. It was an extremely fun night, as I could use Live Mode to see what the telescope was slewing to without taking a 15+ second exposure. It was pretty amazing. I want one for this specific purpose alone.
However, as someone that has shot a lot of video footage with no light other than a full moon using the A7Sii, I highly suggest getting the cable connected remotes for this camera. Trying to fly this body without a remote on a shoulder mount with lenses, follow focus, monitors, etc is brutal. The start/stop button is so tiny and hard to press, it makes you want to throw the entire rig as far as you can. The button is more along the lines of a reset button that you need a small pin to press.
It's pain points like this that make a stills camera that can shoot video much different than a true digital cinema camera
I do wonder about the lack of any autofocus system. Maybe that feature doesn't matter as much for a cinema camera. There are certainly some lens mounts with autofocus control interfaces that are old enough to no longer be patented, so it seems like they could at least do contrast detection AF.
The Chronos high-speed camera[1] is a lot more interesting as a product, though not fully open-source. The software is FOSS, the hardware isn't. It has the advantage of being for a niche application (high-speed photography) and substantially cheaper (about 0.1x the price) than similarly capable commercial cameras.
> they can't make their own sensors (e.g. RED Dragon) so they need to use CMV12000 which is a sensor for scientific and industrial applications - fast, but not optimized for image quality
> from the likes of RED, Canon, Blackmagic etc
Guess which sensor is in some of Blackmagic's offerings?
I'm pretty sure Blackmagic used CMOSIS sensors, at least in the 4K Production Camera.
On Axiom - I think it's great to have open source cameras like this, but the price point needs to come down significantly before it will get traction. Also, with indie camera companies expect long delays - Axiom's been years in the making, as have KineFinity cameras (another independent RAW cinema camera brand).
From a practical POV for indie filmmakers, the resale value of these cameras is also questionable when compared to BlackMagic or RED.
I think much of the benefit of RAW (dynamic range) will actually come from 10-bit HEVC cameras shooting HDR. The GH5 is one of the first of those; in 1-2 years I expect every smartphone will shoot 10-bit HEVC with significantly more latitude than current 8-bit video, even at small sensor sizes.
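The latitude claim comes down to code values per channel: 10-bit carries four times as many levels as 8-bit, which is what keeps pushed gradients and lifted shadows from banding:

```python
def code_values(bits):
    """Distinct levels per channel at a given bit depth."""
    return 2 ** bits

# 10-bit carries four code values for every one in 8-bit, so pushed
# shadows and smooth gradients band far less when graded hard.
print(code_values(8))                    # 256
print(code_values(10))                   # 1024
print(code_values(10) / code_values(8))  # 4.0
```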
I would actually love to see mobile SoC's like Qualcomm 845 (4K 60fps, 10bit HDR, Rec 2020 color gamut) and Android used on semi/pro cameras, with a good OLED touchscreen, relatively large (1" or larger) sensor and maybe m43 mount lenses. Something like BMPCC but with updated sensor and chipset, running Android.
Like most things, methinks... but the majority of non-hacking users get to benefit from the ones who do like to hack around with the internals.
Plus it's (user) hardware- and software-upgradeable: when some fancy new thing comes out, you just have to bust out the toolkit instead of buying a whole new camera.
I'm thinking something like a light-field sensor with the "LSD simulation" firmware would be a perfectly sensible upgrade.
Wow, congratulations to everyone involved! How absolutely fantastic that there is another completely open device we can hack and enjoy! Looking forward to seeing what ingenious modifications and applications people will be coming up with!
A while ago a bunch of makers of documentaries and journalists asked camera manufacturers to support encryption, but most of the companies, if not all, ignored them. Does this project support encryption?
It's reprogrammable, so it most likely could be patched to support encryption between the sensor and the storage module in the FPGA firmware.
Though, I have to say, that sounds like an anti-feature -- instead of just taking a few shots they'll just take all your photos and throw you in jail for espionage because "if you have nothing to fear you have nothing to hide."
Unless it's stored in an obfuscated vault. Not saying this is immune from detection etc, but it may offer journalists some form of plausible deniability.
The project should support encryption if this is something that some members of the community expect or require.
Encryption isn't being pursued, as closed protocols are exactly what the camera is designed to steer away from, but it may be relatively straightforward to implement where required, e.g. with cryptsetup and the Linux Unified Key Setup (LUKS).
So I do photography as a hobby, as well as some semi-professional stuff. (I shoot for the pros at hockey tournaments. I'm not as good as they are, but close: they get keepers every 13 seconds, I'm more like every 20-25 seconds. That seems like a long time, but it includes the time to look at each shot and delete the crap. They are faster because they know, as they take the shot, whether they missed, and they delete without looking. I'm not that good; I have to look.)
I shoot hockey with a Canon 1DX II, before I got that body I used a 5DIII. If I'm shooting for me, I use a Canon 200mm f2; if I'm shooting for the pros, they want more like a 300mm so I add a 1.4x teleconverter.
For the people who don't know what any of that means, this might help: the 1DX II is Canon's best sports body, it retails for about $6000. The 200mm + the 1.4x is another $6000.
So truth in advertising, I've got a lot invested in my current kit (not just that stuff, I have a number of other Canon lenses, some Sigma, Rokinon). So perhaps I'm not objective.
All that said, I don't get this camera at all. No autofocus and no viewfinder are complete deal breakers for me. (An electronic viewfinder doesn't count unless it is 100% as fast as an optical viewfinder. I'm timing shots so I get the puck going into the net, which means a lag of even a few milliseconds screws me up. And when I say "me" I really mean any sports photographer, or any action photographer whose workflow doesn't allow using burst to hopefully get the right shot by accident.)
It looks like lots of cool technology but I'm definitely not interested in owning one.
So who is interested in owning one and what would you do with it? Where does this camera shine?
I'm a software engineer and hobbyist cinematographer. This camera is meant for people like me.
The reason for a camera like this is to open the tool set up for active development, in a way that traditional camera-makers haven't opened their hardware up for access.
Here's an example: high-ISO shooting. What makes it possible for cameras like Sony's a7s series to shoot in nearly dark conditions? Is it the sensor or have the engineers leveraged the fast CPUs to do real-time noise reduction?
By giving engineers access to the hardware, we could start exploring high-ISO programming. Similarly, we might learn how to auto-calibrate lenses in new ways that could take the cheap 'nifty-fifty' lens, apply machine learning and have it perform like a $3k Zeiss lens.
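As a toy example of the kind of processing that becomes explorable with open hardware, here is the simplest temporal noise-reduction building block: averaging repeated readings of a static scene. This is a sketch only; real low-light pipelines also handle motion, black levels, and per-pixel calibration:

```python
def temporal_denoise(frames):
    """Average N aligned frames pixel-by-pixel; for a static scene the
    noise drops roughly by sqrt(N), one building block of low-light modes."""
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(width)]
            for y in range(height)]

# Three noisy readings of the same (tiny) grayscale scene
frames = [
    [[100, 102], [98, 101]],
    [[ 97, 100], [99, 103]],
    [[103, 101], [97, 102]],
]
clean = temporal_denoise(frames)  # each pixel becomes the mean of its readings
```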
Even a topic like color science, the holy grail of the knowledge bases at companies like Canon or ARRI (makers of the Alexa), could be explored by a wider audience of scientists and engineers. Until we can get our hands on the hardware through code, most of this is nearly impossible, outside of projects like Magic Lantern.
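Much of "color science" ultimately reduces to matrices and tone curves applied to sensor RGB. A minimal sketch of the matrix part (the values are purely illustrative, not any manufacturer's tuning):

```python
def apply_matrix(rgb, m):
    """Apply a 3x3 color-correction matrix to one linear RGB pixel."""
    r, g, b = rgb
    return tuple(m[i][0] * r + m[i][1] * g + m[i][2] * b for i in range(3))

# The identity matrix leaves color alone; a warmer "look" boosts red and
# pulls blue. Values are purely illustrative, not any manufacturer's tuning.
IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
WARMER = [[1.10, 0.00, 0.00],
          [0.00, 1.00, 0.00],
          [0.00, 0.00, 0.85]]

gray = (0.5, 0.5, 0.5)
print(apply_matrix(gray, IDENTITY))  # unchanged
print(apply_matrix(gray, WARMER))    # red lifted, blue reduced
```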
OK, that makes a ton more sense. And I couldn't agree more that the camera companies are shooting themselves in the foot by not opening up their firmware. I'm a software guy too, and there are changes I'd love to make to Canon's menu system (I really want to be able to map a button to any part of the menu system, in particular a button that gets me to the in-camera crop feature that the 5DIV and 1DX have).
I suppose they think they have secret sauce buried in there but by keeping it secret they aren't getting any patches from us hackers.
> What makes it possible for cameras like Sony's a7s series to shoot in nearly dark conditions?
Mostly because the A7S(II) has a 35mm sensor with "only" 12 MP. It's simple physics: the individual pixels are huge compared to those of 50 MP+ cameras, and thus much less susceptible to noise.
In addition, the A7S line, during 4K recording, does not do binning or other quality-degrading post-processing (because its resolution is so "low" that 1:1 4K readout is possible). This reduces processing load as well.
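The physics is easy to put numbers on. Assuming square pixels tiling a 36x24 mm full-frame sensor, a 12 MP sensor's pixels have roughly twice the pitch, and so roughly four times the light-gathering area, of a 50 MP sensor's:

```python
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch, assuming square pixels tiling the sensor."""
    area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
    return math.sqrt(area_um2 / (megapixels * 1e6))

FULL_FRAME = (36.0, 24.0)  # mm
p12 = pixel_pitch_um(*FULL_FRAME, 12)  # ~8.5 um, A7S-class
p50 = pixel_pitch_um(*FULL_FRAME, 50)  # ~4.2 um, high-res stills class
print((p12 / p50) ** 2)  # each 12 MP pixel gathers ~4.2x the light
```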
For computer vision in vehicles: automatic driving. We want to preprocess data in real time along with the information provided by other sensors (vehicle inertial sensors, tachometers and such). This is what is called "sensor fusion", something very similar to what humans (or any animal) do.
We already use commercial cameras, but the ability to integrate our hardware (our own FPGAs, in the future ASICs) directly with the camera's low level is simply very tempting. In software we use Linux for the same reason; there is no way you could integrate that deeply with proprietary software.
We need real raw data, not raw data already preprocessed by the manufacturer, and total control over it, especially lighting. A vehicle is moving, and if the preprocessor cannot handle the change of lighting when entering or exiting a tunnel fast enough on a bright day, people could die. You need total control, stability, and repeatability.
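A sketch of why direct sensor control matters for the tunnel case: a per-frame proportional exposure controller can converge within a few frames, but only if the pipeline exposes exposure directly. The numbers and the 18% gray target here are illustrative, not any real vehicle stack:

```python
def next_exposure(current_exp_us, mean_luma, target_luma=0.18,
                  min_exp_us=20, max_exp_us=10000):
    """One step of a per-frame exposure controller: scale exposure time
    toward a target scene brightness, clamped to the sensor's limits."""
    if mean_luma <= 0:
        return max_exp_us  # frame is black, open up fully
    proposed = current_exp_us * (target_luma / mean_luma)
    return max(min_exp_us, min(max_exp_us, proposed))

# Bright tunnel exit: the frame suddenly reads far too bright,
# so exposure is cut hard within a single frame.
exp = next_exposure(5000, mean_luma=0.72)  # 5000 us -> 1250 us
```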
Without this, we would have to design everything ourselves, which is very expensive.
It looks like they are focusing on cinema so we are not sure about this, but it could be a very interesting possibility to explore.
Exactly. For a custom imaging pipeline, constructed via application-specific design, every element that is a black box invalidates the measurement aspect of the output. The sensor control system is an intimate part of that system. For example, machine vision cameras typically allow alteration of imaging parameters, but at the cost of lost time, meaning dropped frames. So rapid iteration to convergence, or rapid cycling through parameter settings, means the device might drop to 5 fps, or 1 fps. Magic Lantern (collaborators with AXIOM) corrected exactly this kind of defect for certain Canon DSLRs, among many other things.
It is possible to partner with sensor or camera makers, in order to get the required specs and level of integration, but it is extraordinarily expensive. And only a handful of companies can do this. For the individual, it's out of the question to partner with a camera maker. So for the interested individual, AXIOM provides a real benefit. And for commercial development, companies like FLIR Integrated Imaging Solutions (formerly Point Grey Research) exist, but still lock down drivers and control firmware. And you can typically only afford to partner with one or two camera manufacturers, whereas in this case all that overhead is gone and you can just use the device directly.
It's a small market, but if you're in it, this kind of project is a great development.
I think it's meant to compete with cameras like those manufactured by Red, which cost $50,000 for the base model, and a few accessories will put you into six figure territory pretty quickly.
> seems like a long time but that includes the time to look at each shot and delete the crap. They are faster because they know when they take the shot if they missed and they delete without looking at it.
One tip I picked up early (especially for something like sports where if you've missed the shot, the moment is gone) is to just take more shots and not even think about sorting or deleting them during the shoot. I ended up with a lot more usable images that way.
The one exception was when my DSLR mirror broke and I suddenly had to fully manually meter and focus (which I'd done before but wasn't exactly familiar or comfortable with it). That was an interesting shoot! IIRC, I was having to shoot with a narrow aperture (not ideal as it was in a dark room) and compensate with a higher ISO/longer shutter speed, then check the focus after the shot, as I didn't have a working view finder to properly dial in the focus.
Most do so kind-of-poorly, usually for dumb (tax classification and product differentiation) reasons. I switched out of the Canon ecosystem for a few reasons, but the biggest reason I went to Panasonic was video. The GH4 (and to a lesser extent the GH3, both of which I own) punch well above their weight as far as video goes, particularly for live video, but most "DSLR"/consumerish-mirrorless cameras are a good few steps below a dedicated video camera like anything from RED or even the BlackMagic Cinema stuff.
Not sure on the down votes for this comment other than not expanding the conversation too much. So, let me try...
Blackmagic has been making a lot of people very happy, and pissing a lot of others off at the same time. By that, they must be doing something right ;-)
It seems to me that BMD feels that the market is made up of nothing but price-inflated "things". "Things" could be hardware or software. They started with their video cards and other video hardware converter devices, but it was their acquisition of DaVinci and the subsequent release of the Resolve software for free that really started the polarization of opinion. After that, they dove into the camera realm, and that really got people's attention. Granted, their first gen camera left a lot to be desired, but whose first gen anything doesn't? (I'm looking directly at you Nokia Ozo.) Once they had a decent imaging chip, they went after the film scanning realm.
Each of these areas (color correction, cinema cameras, film scanning) is historically an extremely expensive market to get into. Blackmagic has "disrupted" these traditional markets (to use a Silicon Valley buzzword), but without being a giant frat-bro culture.
I do a mixture of photography and videography, and I'd seriously considered the BMD Cinema Camera. I really wanted to go with that device, but the form factor left a lot to be desired for handheld shooting, and the reviews about battery life, as well as hard disks overheating, were what ultimately steered me away.
This is going to be for people (companies, startups) that want to build on top of it. It's going to accelerate development and/or create opportunities in specific verticals.
I'm not the person you're responding to, but from what I understand it is common in sports photography to be feeding the images to a wire service almost as they are shot (usually by swapping out memory cards multiple times per half/quarter/whatever and passing them off to an editor or runner), so those wire services can be feeding them out to news outlets for distribution during the game, or soon after its conclusion.
This is why you would want to be "chimping" the shots as you shoot them (or just know from experience if you got the shot or not).
Quite different from a more traditional workflow where you take a ton of images, classify them as keep/delete after the fact, spend time punching the keepers up in Photoshop/Lightroom, etc.
It appears to be capable of 4k at 300fps. That requires fast (and a decent amount of) storage to utilise - gigabit Ethernet to a network store is one way of accomplishing that.
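A quick sanity check of the data rates involved (assuming DCI 4K and 12-bit raw; the actual readout format may differ) shows that full-rate raw is far beyond gigabit Ethernet, so a network store would realistically be fed compressed or buffered-burst data:

```python
def raw_rate_MBps(width, height, bits, fps):
    """Uncompressed sensor data rate in megabytes per second."""
    return width * height * bits * fps / 8 / 1e6

GIGE_MBps = 125  # gigabit Ethernet tops out around 125 MB/s

full = raw_rate_MBps(4096, 2160, 12, 300)
print(full)               # ~3981 MB/s of raw data at full rate
print(full / GIGE_MBps)   # ~32x a gigabit link
```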
It's also useful for remote control -- there are scenarios where there is place to set up a camera, but an operator can't be there as well -- and for automatic downloads (e.g. in a studio setting).
Magic lantern has been around for years, possibly decades. It might as well be a brand name at this point. I have almost no interest in photography and I recognize it and associate it with cameras instantly just from osmosing knowledge about it from the camera nerds around me.
This is interesting, however, in my experience it's the lens that makes the camera. It would be pretty cool to have a body that is 100% open source, but the best lenses (which IMO are Leica and Fujinon lenses) tend to have proprietary mounts and autofocus systems.
If they could also produce a high quality set of prime lenses, that'd be nice. I think the real magic (and difficulty) lies in producing great lenses, which is an entirely analogue process.
Still, I hope they succeed.