
It's a great effort, but such development is too slow and resource-limited to really challenge commercial solutions. They don't get to make their own ASICs (e.g. Canon DIGIC), so they need to use limited and expensive FPGAs; they can't make their own sensors (e.g. RED Dragon), so they need to use the CMV12000, a sensor for scientific and industrial applications - fast, but not optimized for image quality (from my limited knowledge! I could be wrong). Most likely they won't get access to the latest and greatest OEM sensors like Sony BSIs.

By the time it's out, you could buy either a fully integrated solution that's far ahead technically - from the likes of RED, Canon, Blackmagic etc - or if you're after some exotic application you could just use an industrial camera from IDS or Ximea.

Yes, the customization and the abilities around this camera could be endless, but it'd take something really special on that end to overcome the hardware limitations that come with the development model and to make it a viable professional tool.




I'm not sure your concerns are completely valid here. We've reached the point where consumer phone cameras are good enough for 90% of photography. At this point there's huge value in giving better access to the platform itself to start applying machine learning techniques directly on the camera.

In addition, the FPGA you call limited I'd argue gives access to powerful reprogrammable logic. Why hard-code image processing algorithms when you can update them as new techniques come out?

Not only is this an amazing sensor (180fps, global shutter), but it's built on cutting-edge technology. I'd argue that sensor technology may be reaching some of its limits, and an industrial sensor here can match the quality of DSLR sensors. It may only be 12MP, but Google and Apple have both shown that intelligent algorithms can produce amazing results from sensors of that size.

The video on the properties of the sensor itself is quite impressive: http://vimeo.com/17230822


>> At this point there's huge value in giving better access to the platform itself to start applying machine learning techniques directly on the camera.

What does machine learning have to do with photography? And if there's an explanation for that, why does it need to happen on the camera?


I worry this is going to go the "computer aided photography isn't the one true photography" direction, but check this out:

https://research.googleblog.com/2017/10/portrait-mode-on-pix...

ML has thousands of applications in photography, from HDR to mask creation to feature removal to subtle aids like noise reduction.


I'd agree with parent that ML is one of the killer apps of more open professional camera hardware.

Nowhere I can think of (other than Android, and there are a lot of caveats there) has an open hardware upstart challenged entrenched commercial players for legacy needs and won.

Where you do win is by targeting emerging needs, especially ones legacy commercial players are ill equipped to take advantage of (due to institutional inertia or ugly tech stacks).

As for why ML on edge devices: a large amount of work is going into running models (or first passes) on edge devices with limited resources (see articles on HN). I would assume they have business reasons.

But offhand, almost everything about vision-based ML in the real world gets better if you can decrease latency and remove high-bandwidth internet connectivity as a requirement.


It relies on the fact that nobody takes really original pictures, hence you can build a large enough data set and apply learned techniques/attributes/behavior to "new" pictures. Computer vision = small standard deviation.


I think the point is that the low-level image sensor functions such as timing, readout, buffering, error correction etc. don't change much. Therefore it's more cost-effective to do them in an ASIC than to pay a premium for an FPGA.


Give it some time - they just launched! If it becomes popular they will have more funding to work with and have the ability to advance the tech. You don't enter a patent briarbush market with the best solution the world has ever seen, you make a good enough showing and grow.


I use Panasonic's midrange GH3 and GH4 cameras for video. 16MP sensor, Micro Four Thirds mount. For general purpose cinema, they will shoot better than the sensor in this camera will (which isn't to say it's the best thing in town, it's not, but that's kind of my point) and the stuff they're flogging, like on-the-fly color conversion, is not particularly useful in the center or edge cases compared to getting high-quality, raw, as-ungraded-as-you-can-get material to work with in post.

It's a neat idea. Your implication that it's good enough doesn't scan to me, even as a prosumer videographer. Maybe I'll be wrong, but Blackmagic is already vastly ahead in the pure-cinema space, so...


Their competition is not the GH3/4, it's other cine cameras. The GH3/4 is not even in the same ballpark: 12-ish stops of dynamic range compared to 15 on the Blackmagic and the Apertus; global shutter vs. rolling shutter. It does 300 FPS at 10-bit and 132 FPS at full 4K resolution! It's Camry vs. Ferrari.


Of course the GH3/4 are not in the same ballpark--but in terms of image quality, they're about where you'd slot this one (what you lose in sensor size you get back in sensor density, at least in the general case). 300FPS is cool if you need it, but that's a pretty rare thing--image quality's a constant need.

The Axiom camera doesn't appear to be in the same ballpark as those other cine cameras, either, which is the disjoint in here for me. It's trying to make its bones on being "hackable" instead of "a great camera". I'm not sure there's that middle ground, unless it happens to come in significantly cheaper than the BlackMagic stuff (which is my next upgrade, they're fantastic).


Well, you're dismissing things like on-the-fly color correction, when pre-grading is an important feature for a lot of productions. High FPS is also not a rare requirement: sports, action, explosions, water splashing, hair commercials, etc. all benefit from high-FPS capture. These are all features that the competition advertises. Things like high dynamic range and a file format that allows for easier post-production are real, tangible benefits that they give you. I'm not quite understanding where you're coming from.


But isn't the point of this to be able to hack it and add things that you need for your use case? Nobody said the arduino or raspi was meant to replace your day to day compute. This is targeted as a hackable base system to learn about the technology and stimulate innovation. Right now hacking cameras requires insanely niche knowledge and reverse engineering skills.


If the point is to hack, you totally should buy an Arduino. (I just bought my college-age EE brother one. He's messing with it right next to me. It's cool.) It is also orders of magnitude cheaper.

But for video, and video specifically, a "hackable base system" is not a raison d'être beyond the niche-of-a-niche hacker community, who aren't going to pay you materially for the thing in the first place. This stuff is expensive! Near as I can tell, something like this needs a reason for videographers, not computer people, to pay for it, in order to get the traction needed to expand and progress. The set of people who need to "add things" to a video camera for their use case is small, because there are few use cases that aren't better served by getting the highest-quality, cleanest raw footage you can (which is not helped by adding, say, inline color correction), storing it in the least-lossy format you can, and then transforming that output down the line, either in post/editing for recorded media or in your better-equipped visual mixer for live.

The most interesting "hardware hacks" I can think of for cameras would be in the focus/image stabilization arena, but I don't think you get much insight into that with the passive E-mount (could be wrong, I don't use Sony) as opposed to an active E-mount or Panasonic's Power/Mega OIS stuff in the MFT. For Power/Mega OIS in particular, my understanding is that the sensors are in the lens but the brains are in the camera--could be some novel things to do there.


This is opinion, but opinion from a passionate cine nerd:

Good stabilization techniques don’t happen in the lens and passive E mount is perrrfect for this camera.

Stabilization is a mount's job, and not really a hack. Most camera hacking these days is done with DSLRs, which are not "real" video cameras, for lack of a list of technical explanations. Magic Lantern is a popular DSLR hacking tool and it's problematic to use. Think trying to use Windows as if it were Linux.


For a traditional cinema camera, I agree that a passive E mount makes total sense. For a video camera where the focus is "hacking" on it, I disagree, because it reduces the breadth of "hackable" bits of the thing. This gap goes back to my original "I don't know who this is actually for".

I would, however, contest that stabilization is just the job of the mount--optical stabilization relies on the lens, and it's pretty valuable to me. I've broadcast with both Panasonic's Power OIS lenses and non-OIS lenses and the difference is pretty stark. (And given that live broadcasts pretty often just use Panasonic GH's or BlackMagic's Pocket or Micro cameras, it's basically the same tech and thus something to consider.)

Further edit: 'bprater made a good point elsewhere in the thread: an active mount would allow for driving focus through software, too. I don't mean autofocus, but rather the ability to automate focus. For example, I wrote an application to control my video mixer to better be able to do a one-man show and my GH3/GH4 have decent mobile apps for one-man control. Being able to have the camera remember correct focus settings when I'm moving between standard spots in my studio would actually be a pretty useful thing to be able to trigger without getting behind the camera! Which, once more, goes back to "hey, so who is this SSH-capable, 'hackable' camera actually for?".
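To make the "automate focus from software" idea concrete, here's a minimal sketch of what scripting focus presets over a camera's SSH interface might look like. To be clear, everything here is hypothetical: the `axiom_focus` command, the preset names, and the motor positions are all invented for illustration; a real rig would need a follow-focus motor and whatever control tooling the firmware actually exposes.

```python
# Hypothetical sketch: recalling named focus presets on a networked camera.
# The remote command name (axiom_focus) and motor positions are invented.
import shlex

# Named focus positions for fixed spots in a one-person studio (made-up values).
FOCUS_PRESETS = {
    "desk": 1200,        # focus motor position for the presenter's desk
    "whiteboard": 3400,
    "wide": 2100,
}

def build_focus_command(preset: str) -> str:
    """Build the shell command we'd run on the camera to recall a preset."""
    position = FOCUS_PRESETS[preset]
    # shlex.quote guards the argument in case presets ever come from user input
    return f"axiom_focus --set-position {shlex.quote(str(position))}"

def recall_preset_over_ssh(host: str, preset: str) -> list:
    """Return the full ssh argv; in real use you'd hand this to subprocess.run."""
    return ["ssh", host, build_focus_command(preset)]

print(recall_preset_over_ssh("axiom.local", "desk"))
```

The point is less the specific commands than the workflow: a camera you can SSH into turns "walk behind the camera and refocus" into a one-keystroke trigger from the same machine driving the video mixer.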


Overall, I think your point is well-made. The SSH capabilities are something like tone-deaf considering the market. But, I'll answer who the camera is supposed to be for.

It's for me, and other nofilmschool.com readers. But, I'm not going to buy it. I've awkwardly transitioned to using CGI as my medium, but if I were still using camera, I might be considering a purchase of one of these.

It's the latest in a chain of attempts at upending the big companies who lock features on their cameras and have single-handedly held back indie film production by miles, for years.

None of these attempts really get off the ground. RED gave the appearance of being a savior many years ago, but RED's plan all along was just to seize the opportunity and ultimately side with the camera nazis.

As for the mount, I never even learned to use focus electronically controlled by the body, or zoom, or stabilization. I would have no use for any of them. Passive E-mount does everything I could conceive of needing.

When somebody who this camera is for wants to control focus wirelessly, they rent a Preston system or one of the newer ones like a Lenzhound.


In the interests of accuracy, the Axiom project has been in gestation for >5 years so they're off to a slow start. But they have a lot of good will in the pro cam community because they were careful to keep delivery expectations low and warned people not to structure projects around the hardware any time soon.


Latest and greatest is severely overrated - that's something you throw money at if you already have a hot project and need additional hype. Sometimes it works out, sometimes not (e.g. Collateral was a very successful ad for the Viper camera's relatively good low-light performance; Public Enemies not so much for a different model).

I've been following this project for a few years, and while it's got a slow development/adoption curve, it's on the right track both economically and technologically; eventually it'll become the Linux of cameras. The thing is, we're already in an era of technological abundance where digital photography is concerned; commodity solutions are already good enough for pretty much all commercial purposes. You can shoot an indie feature on an iPhone; even huge-budget films like the Avengers franchise use Blackmagic Pocket cameras as crash cams (cheap-ish cameras set up to run automatically near stunt cars to provide a second or two of dramatic first-person footage).

There will of course always be more highly engineered offerings out there, which will be important for highly specialized tasks like astronomy and scientific work. And ultra-dense sensors might be deployed to allow photography at multiple focal depths, but even that is likely to find its way into consumer gear before long. At this point the industry focus is less on which camera has the best specifications than on workflow and support. Blackmagic have done incredibly well for themselves with a (relatively) technologically inferior sensor/package by being more open than RED, and thus building a larger, stronger community at a somewhat lower price point, for example. It's a good example of the tortoise beating the hare over the length of the race.


Yup, agreed.

You're also making a huge compromise on lenses when going with this approach. When it comes to photography, lenses matter more than the camera 95% of the time (unless you're doing sports/action photography, where you need good AF).

With this it's all manual focus plus adapters, which always involve compromises compared to lenses used on their native mount.

It's super-cool technically but I don't see it taking over anytime soon.


I think you might have missed the part where this is a digital cinema camera. Cinema cameras rarely have AF, and are almost always manual lenses with a dedicated person assigned to pull focus.

Also, being able to adapt lenses is a huge bonus. Look at the Micro Four Thirds market: there are so many adapters available, and in wide use, to open up a wider selection of lenses. That's pretty much the only way Sony cameras are used. The selection of native Sony-mount lenses is limited, but adapt to a PL or EF mount and the world opens up significantly.


Ah, good call. It was missing from the title so I was assuming they were going after the traditional camera market. For cinema they might have a better shot.

I've got a non-trivial amount of Canon gear (300 f/4, 35 f/1.4, 135 f/2, etc.) and looked into the Sony route since their A7 line looks killer. In the end all the adapters had compromises, so I decided to stick with Canon. From all reports the AF drive on the adapters still has a bunch of edge cases.


> and looked into the Sony route since their A7 line looks killer

They are. I recently tried out an A7S (nice thing), but I'm getting the A7S2 in about two months. 4K in-camera recording is worth getting the Mk2 for.


The pixels on the A7Sii are larger than other similarly sized sensors. That's why the megapixel count is lower. Larger pixel size means more photons absorbed in same amount of time. I've also had one connected directly to a telescope. It was an extremely fun night as I could use the Live Mode to see what the telescope was slewing to without taking a 15+ second exposure. It was pretty amazing. I want one for this specific purpose alone.
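A rough back-of-the-envelope check of the "bigger pixels gather more light" point, using approximate published figures (sensor width and pixel counts below are assumptions, not measured values; the model ignores gaps between photosites and microlens effects):

```python
# Back-of-the-envelope: compare per-pixel light-gathering area of a ~12MP
# full-frame sensor (A7S-class) against a typical 24MP full-frame sensor.
# Dimensions are approximate published figures, not measured values.
SENSOR_WIDTH_MM = 35.8  # full-frame sensor width (approx.)

def pixel_pitch_um(width_mm, horizontal_pixels):
    """Pixel pitch in micrometres, ignoring gaps between photosites."""
    return width_mm / horizontal_pixels * 1000

a7s_pitch = pixel_pitch_um(SENSOR_WIDTH_MM, 4240)     # ~12MP full frame
hi_res_pitch = pixel_pitch_um(SENSOR_WIDTH_MM, 6000)  # typical 24MP full frame

# Photons absorbed per pixel scale roughly with photosite area.
area_ratio = (a7s_pitch / hi_res_pitch) ** 2
print(f"A7S pitch ~ {a7s_pitch:.1f} um, 24MP pitch ~ {hi_res_pitch:.1f} um")
print(f"Per-pixel light-gathering area ratio ~ {area_ratio:.1f}x")
```

Under those assumptions the A7S-class photosite collects roughly twice the light of a 24MP photosite in the same exposure time, which is the mechanism behind its low-light reputation.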

However, as someone that has shot a lot of video footage with no light other than a full moon using the A7Sii, I highly suggest getting the cable connected remotes for this camera. Trying to fly this body without a remote on a shoulder mount with lenses, follow focus, monitors, etc is brutal. The start/stop button is so tiny and hard to press, it makes you want to throw the entire rig as far as you can. The button is more along the lines of a reset button that you need a small pin to press.

It's pain points like this that make a stills camera that can shoot video much different from a true digital cinema camera.


Thanks for the suggestion with the remote, I'll keep it in mind. Can you recommend a specific one?


I do wonder about the lack of any autofocus system. Maybe that feature doesn't matter as much for a cinema camera. There are certainly some lens mounts with autofocus control interfaces that are old enough to no longer be patented, so it seems like they could at least do contrast detection AF.


There hasn't been a feature film shot in the past hundred years with auto-focus.


One of the roots of the Axiom is Elphel, a company that's been selling open hardware cameras with similar features for more than 15 years.

http://elphel.org

The Axiom angle is more cinematographically oriented.


The Chronos high-speed camera[1] is a lot more interesting as a product, though not fully open source: the software is FOSS, the hardware isn't. It has the advantage of targeting a niche application (high-speed photography) and being substantially cheaper (about a tenth the price) than similarly capable commercial cameras.

[1] http://www.krontech.ca/


> they can't make their own sensors (e.g. RED Dragon) so they need to use CMV12000 which is a sensor for scientific and industrial applications - fast, but not optimized for image quality

> from the likes of RED, Canon, Blackmagic etc

Guess which sensor is in some of Blackmagic's offerings?


I think they rely heavily on customized Fairchild sensors for their Ursa line: http://image-sensors-world.blogspot.com/2015/04/blackmagic-s... (great blog BTW!)

I'm not aware of any pro cinema camera using the CMV12000 as-is, but I've been out of the loop for some time.


Image sensors world is a great blog indeed.

I'm pretty sure Blackmagic used CMOSIS sensors, at least on the 4K Production Camera.

On Axiom - I think it's great to have open source cameras like this, but the price point needs to come down significantly before it will get traction. Also, with indie camera companies expect long delays - Axiom's been years in the making, as have KineFinity cameras (another independent RAW cinema camera brand).

From a practical POV for indie filmmakers, the resale value of these cameras is also questionable when compared to BlackMagic or RED.

I think many of the benefits of RAW (dynamic range) will actually come from 10-bit HEVC cameras shooting HDR. The GH5 is one of the first of those; in 1-2 years I expect every smartphone will shoot 10-bit HEVC with significantly more latitude than current 8-bit video, even at small sensor sizes.
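The arithmetic behind the 10-bit latitude argument is simple: two extra bits per channel means four times as many code values to spread across the same brightness range, so heavy grading runs out of distinct levels (banding) much later. The per-stop figures below use a deliberately naive linear toy model, not any real camera's transfer curve:

```python
# Why 10-bit grades better than 8-bit: same brightness range, 4x the code values.
def code_values(bit_depth):
    return 2 ** bit_depth

eight = code_values(8)   # 256 levels per channel
ten = code_values(10)    # 1024 levels per channel
print(f"8-bit: {eight} levels, 10-bit: {ten} levels ({ten // eight}x finer)")

# Toy model: pretend levels are spread evenly across 12 stops of dynamic range.
# Real log/HDR transfer curves allocate levels very differently, but the 4x
# advantage carries through to every stop.
stops = 12
print(f"8-bit levels per stop (linear toy model): {eight // stops}")
print(f"10-bit levels per stop (linear toy model): {ten // stops}")
```

That 4x headroom is why a stretched shadow region that bands visibly in 8-bit footage can survive the same grade in 10-bit.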

I would actually love to see mobile SoCs like the Qualcomm 845 (4K 60fps, 10-bit HDR, Rec. 2020 color gamut) and Android used on semi-pro/pro cameras, with a good OLED touchscreen, a relatively large (1" or larger) sensor, and maybe m43-mount lenses. Something like the BMPCC but with an updated sensor and chipset, running Android.


Stuff like that seems like it would be nice for people who like to hack around with internals, but you’re talking about a tiny tiny fraction of users.


Like most things methinks...but the majority non-hacking users get to benefit from the ones who do like to hack around with internals.

Plus it's (user) hardware/software upgradeable: when some fancy new thing comes out, you just bust out the toolkit instead of buying a whole new camera.

I'm thinking something like a light-field sensor with the "LSD simulation" firmware would be a perfectly sensible upgrade.


That's what everyone said about Linux in 1993.



