
So I do photography as a hobby as well as some semi-professional stuff. I shoot for the pros at hockey tournaments; I'm not as good as they are, but close. They get keepers every 13 seconds, while I'm more like every 20-25 seconds. That seems like a long time, but it includes the time to look at each shot and delete the crap. They are faster because they know, the moment they take a shot, whether they missed, and they delete it without looking. I'm not that good; I have to look.

I shoot hockey with a Canon 1DX II; before I got that body I used a 5D III. If I'm shooting for myself, I use a Canon 200mm f/2; if I'm shooting for the pros, they want something more like 300mm, so I add a 1.4x teleconverter.

For the people who don't know what any of that means, this might help: the 1DX II is Canon's best sports body and retails for about $6000. The 200mm plus the 1.4x is another $6000.

So, truth in advertising: I've got a lot invested in my current kit (not just that stuff; I have a number of other Canon lenses, plus some Sigma and Rokinon). So perhaps I'm not objective.

All that said, I don't get this camera at all. No autofocus and no viewfinder are complete deal breakers for me (an electronic viewfinder doesn't count unless it is 100% as fast as an optical viewfinder). I'm timing shots so I get the puck going into the net, which means a lag of even a few milliseconds screws me up. And when I say "me" I really mean any sports photographer, or any action photographer whose workflow doesn't allow using burst mode to hopefully get the right shot by accident.

It looks like lots of cool technology but I'm definitely not interested in owning one.

So who is interested in owning one and what would you do with it? Where does this camera shine?




I'm a software engineer and hobbyist cinematographer. This camera is meant for people like me.

The reason for a camera like this is to open the toolset up for active development, in a way that traditional camera makers haven't opened up their hardware.

Here's an example: high-ISO shooting. What makes it possible for cameras like Sony's A7S series to shoot in nearly dark conditions? Is it the sensor, or have the engineers leveraged fast CPUs to do real-time noise reduction?

By giving engineers access to the hardware, we could start exploring high-ISO programming. Similarly, we might learn to auto-calibrate lenses in new ways: take a cheap 'nifty fifty' lens, apply machine learning, and have it perform like a $3k Zeiss lens.
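(To make that concrete, here's a minimal sketch of the idea: profile a cheap lens against a reference once, then apply the learned correction to every raw frame. The profile format and coefficients below are made up for illustration; a real learned model would be far richer.)

    import numpy as np

    # Hypothetical per-lens profile, learned offline by comparing the
    # cheap lens against a reference chart: here just a radial
    # vignetting model.
    VIGNETTING_COEFFS = (0.08, 0.15)  # illustrative polynomial coefficients

    def correct_vignetting(raw):
        """Brighten frame edges using the learned radial falloff model."""
        h, w = raw.shape
        y, x = np.mgrid[0:h, 0:w]
        # normalized radius: 0 at center, 1 at the corners
        r = np.hypot(x - w / 2, y - h / 2) / np.hypot(w / 2, h / 2)
        a, b = VIGNETTING_COEFFS
        falloff = 1.0 - a * r**2 - b * r**4  # learned light loss vs. radius
        return raw / falloff                 # undo the falloff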

Even a topic like color science, the holy grail of companies like Canon or ARRI (makers of the Alexa), could be explored by a wider audience of scientists and engineers. Until we can get our hands on the hardware through code, most of this is nearly impossible, except via projects like Magic Lantern.


OK, that makes a ton more sense. And I couldn't agree more that the camera companies are shooting themselves in the foot by not opening up their firmware. I'm a software guy too, and there are changes I'd love to make to Canon's menu system. In particular, I want to be able to map a button to any part of the menu system, e.g. a button that takes me straight to the in-camera crop feature that the 5DIV and 1DX have.

I suppose they think they have secret sauce buried in there, but by keeping it secret they aren't getting any patches from us hackers.


> What makes it possible for cameras like Sony's a7s series to shoot in nearly dark conditions?

Mostly because the A7S (and A7S II) has a 35mm sensor with "only" 12 MP. It's simple physics: the individual pixels are huge compared to those of 50 MP+ cameras, and thus much less susceptible to noise.

In addition, the A7S line does not do binning or other quality-degrading post-processing during 4K recording (because its resolution is so "low" that a 1:1 4K readout is possible). This reduces processing load as well.
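(Back-of-the-envelope version of the physics, assuming a 36 x 24 mm full-frame sensor:)

    # Rough pixel-pitch comparison on a 36 x 24 mm full-frame sensor.
    SENSOR_AREA_UM2 = 36_000 * 24_000  # sensor area in square microns

    for megapixels in (12, 50):
        pixel_area = SENSOR_AREA_UM2 / (megapixels * 1e6)
        pitch = pixel_area ** 0.5
        print(f"{megapixels} MP -> ~{pitch:.1f} um pitch, "
              f"{pixel_area:.0f} um^2 per pixel")

    # 12 MP -> ~8.5 um pitch; 50 MP -> ~4.2 um. Each 12 MP pixel
    # collects roughly 4x the light, which directly improves the
    # per-pixel signal-to-noise ratio.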


I think this is a video camera that’s supposed to compete with something like RED.


We are very interested in this.

For computer vision in autonomous vehicles, we want to preprocess data in real time together with the information provided by other sensors (vehicle inertial sensors, tachometers and so on). This is called "sensor fusion", something very similar to what humans (or any animal) do.
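(Purely illustrative sketch of the idea: a complementary filter fusing a tachometer's wheel speed, which is accurate long-term but quantized, with an accelerometer, which is smooth short-term but drifts when integrated. The names, sample data, and the 0.98 blend factor are all made up.)

    def fuse_velocity(v_prev, accel, wheel_speed, dt, alpha=0.98):
        """Complementary filter: blend integrated IMU with the tachometer."""
        v_predicted = v_prev + accel * dt  # short-term: integrate accel
        # long-term: anchor to the wheel-speed reading
        return alpha * v_predicted + (1 - alpha) * wheel_speed

    # usage with a hypothetical 100 Hz sensor stream
    v = 0.0
    for accel, wheel_speed in [(0.5, 0.00), (0.5, 0.02), (0.5, 0.03)]:
        v = fuse_velocity(v, accel, wheel_speed, dt=0.01)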

We already use commercial cameras, but the ability to directly integrate our hardware (our own FPGAs, in the future ASICs) with the camera at a low level is simply very tempting. In software we use Linux for the same reason: there is no way you could integrate that deeply with proprietary software.

We need real raw data, not raw data already preprocessed by the manufacturer, and total control over it, especially exposure. A vehicle is moving, and if the preprocessor can't handle the change of lighting when entering or exiting a tunnel fast enough on a bright day, people could die. You need total control, stability, and repeatability.
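(As a sketch of what "total control" buys you: a proportional auto-exposure loop you can run, tune, and verify yourself, rather than trusting a black-box ISP. The target, gain, and bit depth are illustrative, not from any real stack.)

    import numpy as np

    TARGET_LUMA = 0.45  # desired mean luminance (0..1), illustrative
    GAIN = 0.8          # proportional gain; would be tuned per sensor

    def next_exposure(frame, exposure_us, bit_depth=12):
        """Proportional exposure update computed from the current raw frame."""
        mean_luma = frame.mean() / (2 ** bit_depth - 1)
        error = TARGET_LUMA - mean_luma
        # Multiplicative update: converges within a few frames on a tunnel
        # entry/exit, provided the sensor accepts a new exposure every frame.
        return exposure_us * (1.0 + GAIN * error)

    # e.g. a dark frame lengthens the exposure: 5000 us -> 6800 us
    print(next_exposure(np.zeros((1080, 1920)), exposure_us=5000))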

Without this, we would have to design everything ourselves, which is very expensive.

It looks like they are focusing on cinema, so we are not sure about this, but it could be a very interesting possibility to explore.


Exactly. For a custom imaging pipeline constructed via application-specific design, every element that is a black box invalidates the measurement aspect of the output, and the sensor control system is an intimate part of that system. For example, machine vision cameras typically allow alteration of imaging parameters, but at the cost of lost time, meaning dropped frames. So rapid iteration to convergence, or rapid cycling through parameter settings, means the device might drop to 5 fps, or 1 fps. Magic Lantern (collaborators with AXIOM) corrected exactly this kind of defect for certain Canon DSLRs, among many other things.
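(To illustrate the parameter-cycling case: alternating exposure every frame, e.g. for HDR bracketing. The MockSensor and its queue_exposure() are hypothetical stand-ins for register-level access; the point is that each write lands before the next readout instead of stalling the pipeline.)

    from collections import deque

    class MockSensor:
        """Stand-in for register-level sensor access (hypothetical API)."""
        def __init__(self):
            self._pending = deque()
        def queue_exposure(self, us):
            self._pending.append(us)  # direct register write, no pipeline stall
        def get_frame(self):
            exposure = self._pending.popleft() if self._pending else 1000
            return {"exposure_us": exposure}  # real code would return pixels

    BRACKET_US = [250, 2000]  # short/long bracket, illustrative values
    sensor = MockSensor()
    frames = {0: [], 1: []}
    for i in range(8):
        sensor.queue_exposure(BRACKET_US[i % 2])
        frames[i % 2].append(sensor.get_frame())  # no dropped frames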

It is possible to partner with sensor or camera makers to get the required specs and level of integration, but it is extraordinarily expensive, and only a handful of companies can do it. For an individual, partnering with a camera maker is out of the question, so AXIOM provides a real benefit to the interested individual. For commercial development, companies like FLIR Integrated Imaging Solutions (formerly Point Grey Research) exist, but they still lock down drivers and control firmware. And you can typically only afford to partner with one or two camera manufacturers, whereas here all that overhead is gone and you can just use the device directly.

It's a small market, but if you're in it, this kind of project is a great development.



I think it's meant to compete with cameras like those manufactured by RED, which cost $50,000 for the base model; a few accessories will put you into six-figure territory pretty quickly.


My understanding is this is more for cinematography than stills.


> That seems like a long time, but it includes the time to look at each shot and delete the crap. They are faster because they know, the moment they take a shot, whether they missed, and they delete it without looking.

One tip I picked up early (especially for something like sports where if you've missed the shot, the moment is gone) is to just take more shots and not even think about sorting or deleting them during the shoot. I ended up with a lot more usable images that way.

The one exception was when my DSLR's mirror broke and I suddenly had to meter and focus fully manually (which I'd done before, but wasn't exactly familiar or comfortable with). That was an interesting shoot! IIRC, I had to shoot with a narrow aperture (not ideal, as it was in a dark room) and compensate with a higher ISO and longer shutter speed, then check the focus after each shot, since I didn't have a working viewfinder to properly dial in the focus.
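(For anyone unfamiliar with that trade-off: each full stop you close the aperture halves the light, so you double ISO or shutter time to compensate. The numbers here are illustrative, not from that shoot.)

    # Stopping down f/2.8 -> f/8 is 3 stops (2.8 -> 4 -> 5.6 -> 8),
    # so the lost light has to be recovered elsewhere, e.g.:
    stops_lost = 3
    factor = 2 ** stops_lost        # 8x less light at the sensor
    iso = 800 * factor              # ISO 800 -> ISO 6400, or
    shutter_s = (1 / 200) * factor  # 1/200 s -> 1/25 s, or some mix of both
    print(factor, iso, shutter_s)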


This is not a DSLR, it's a video camera.


Don't all modern DSLRs shoot video?


Most do, but kind of poorly, usually for dumb reasons (tax classification and product differentiation). I switched out of the Canon ecosystem for a few reasons, but the biggest reason I went to Panasonic was video. The GH4 (and to a lesser extent the GH3, both of which I own) punches well above its weight as far as video goes, particularly for live video, but most DSLR/consumer-ish mirrorless cameras are a good few steps below a dedicated video camera like anything from RED, or even the Blackmagic Cinema stuff.


I love the Blackmagic gear!


Not sure about the downvotes for this comment, other than it not expanding the conversation much. So, let me try...

Blackmagic has been making a lot of people very happy, and pissing a lot of others off at the same time. By that measure, they must be doing something right ;-)

It seems to me that BMD feels the market is made up of nothing but price-inflated "things", where "things" could be hardware or software. They started with their video cards and other video hardware converter devices, but it was their acquisition of DaVinci and the subsequent release of the Resolve software for free that really started the polarization of opinion. After that, they dove into the camera realm, and that really got people's attention. Granted, their first-gen camera left a lot to be desired, but whose first-gen anything doesn't? (I'm looking directly at you, Nokia Ozo.) Once they had a decent imaging chip, they went after the film scanning realm.

Each of these areas (color correction, cinema cameras, film scanning) is historically an extremely expensive market to get into. Blackmagic has "disrupted" these traditional markets (to use a Silicon Valley buzzword), but without being a giant frat-bro culture.

TLDR: I love the Blackmagic gear too!


I do a mixture of photography and videography, and I seriously considered the BMD Cinema. I really wanted to go with that device, but the form factor left a lot to be desired for handheld shooting, and the reviews about battery life and hard disks overheating are what ultimately steered me away.

http://forum.blackmagicdesign.com/viewtopic.php?t=59032

Looks like the original cameras, minus the Pocket, are now EOL.


Technically yes, but it's not as practical as the high-end video cameras mentioned above.


This is going to be for people (companies, startups) that want to build on top of it. It's going to accelerate development and/or create opportunities in specific verticals.


Huh. Why chimp during the shoot, instead of having more time to pay attention and reviewing the shots later?


I'm not the person you're responding to, but from what I understand, it is common in sports photography to feed images to a wire service almost as they are shot (usually by swapping out memory cards multiple times per half/quarter/whatever and passing them off to an editor or runner), so those wire services can push them out to news outlets during the game, or soon after it ends.

This is why you would want to be "chimping" the shots as you shoot them (or just know from experience if you got the shot or not).

Quite different from a more traditional workflow where you take a ton of images, classify them as keep/delete after the fact, spend time punching the keepers up in Photoshop/Lightroom, etc.



