Low Light and High Dynamic Range photography in the Google Camera App (googleresearch.blogspot.com)
175 points by jmintz on Oct 27, 2014 | 112 comments



I've been messing around with the Google Camera app and the new camera API provided by Android 5. It turns out that you can obtain much higher-quality images by using the DNG (digital negative) of the photo instead of the JPEG. By "HDRing" images yourself, you can actually outperform Google Camera's HDR+ functionality.

Here's the JPEG, non HDR+ shot on a Nexus 5: http://i.imgur.com/So44muL.jpg

Here's a similar image shot with HDR+: http://i.imgur.com/QFS3ZYd.jpg

As you can see, the dynamic range is increased greatly; however, there are strange black spots in the shadows.

Here's the same photo that I took in DNG format, edited in Lightroom:

http://i.imgur.com/VRFsnf5.jpg

And here's my HDR photo, combining 5 DNG exposures using Photoshop's HDR Pro functionality:

http://i.imgur.com/RTT6ULz.jpg
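For the curious, the core of a DIY merge like this can be sketched as naive exposure fusion: weight each bracketed frame per-pixel by how well exposed it is, then blend. This is my own toy weighting in Python/NumPy, not what Photoshop's HDR Pro actually does:

```python
import numpy as np

def fuse_exposures(frames):
    """Naive exposure fusion: weight each pixel by its closeness to
    mid-gray, so well-exposed pixels dominate the blend."""
    stack = np.stack([f.astype(np.float64) for f in frames])  # (N, H, W)
    # Gaussian weight centered on 0.5; values assumed normalized to [0, 1]
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)

# Synthetic bracket: an "under-", "mid-" and "over-exposed" frame
scene = np.linspace(0.0, 1.0, 16).reshape(4, 4)
bracket = [np.clip(scene * gain, 0, 1) for gain in (0.5, 1.0, 2.0)]
fused = fuse_exposures(bracket)
print(fused.shape)  # (4, 4)
```

Real tools add per-pixel alignment, ghost removal, and smarter weights (contrast, saturation), but this is the basic idea.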


Yes, raw files have quite a lot of dynamic range, and the current generation of raw processors can really pull a lot of detail out of the shadows and highlights.

The strange black dots you find in the shadows are noise: there's simply no data there to produce any result.

The key part of in-camera HDR is that it's automatic: no manually fixing your camera to a tripod, copying the shots onto a computer, loading them into HDR-capable software, and finally processing them by hand.


Wow, that's quite interesting. I think I'll try that this weekend and see if I can set up an automated solution too, if I can get it working smoothly.


I didn't know it was possible to save real DNGs on Android (as opposed to JPEGs inside DNGs). I've been waiting for years to be able to shoot RAW on a phone camera.

Do you mind posting a DNG file of this photo? I might switch from iOS if Apple doesn't provide this functionality soon enough.


Sure! I would _not_ advise getting the Nexus 5, though. As noted in other comments below, the image quality and autofocusing speed leave much to be desired. The Camera2 API that supports DNG is only available on Android 5.0 phones.

http://edward.io/raw_nexus_5.zip


In other words, get a Nexus 6, or wait for a phone that will get Android 5/Lollipop in the next few months and already has a great camera sensor (Xperia Z3/Z3 Compact, Galaxy Note 4), or wait for the Xperia Z4, Galaxy S6, or HTC M9 in spring.


Could Google's HDR+ also use DNG instead of JPEG and combine those automatically? That would make it much slower, though, so they'd have to take three at most.


There's nothing technical that's stopping them from doing it. I expect third party apps to do something similar once Lollipop has a higher market share.


HDR+ does the processing immediately after taking the images. I would hope that the "HDR" step happens before any compression.


Is this handheld or mounted somehow? That last image looks great, but I thought HDR Pro needed it to be stable between shots.


All handheld, shot from 1/35th of a second to 1/1000th of a second. HDR Pro does some basic auto-aligning and ghost removal, AFAIK.
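For anyone wondering what that auto-aligning might look like under the hood: a classic way to recover the translation between two handheld frames is phase correlation. Here's a toy NumPy sketch, integer pixel shifts only; I have no idea what HDR Pro actually uses:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Phase correlation: the normalized cross-power spectrum of two
    translated images inverse-transforms to a peak at the shift that
    re-aligns `moved` with `ref`."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12       # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Peaks past the halfway point correspond to negative shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, (3, -5), axis=(0, 1))   # simulated camera shake
shift = estimate_shift(ref, moved)
print(shift)  # (-3, 5)
# Rolling `moved` by the estimated shift recovers the reference frame
assert np.allclose(np.roll(moved, shift, axis=(0, 1)), ref)
```

Real alignment also has to handle rotation and subpixel motion, but for small handheld shake a translation model gets you most of the way.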


Seems like two groups of people come into this experience and walk away with completely different expectations. One group is people for whom camera phones have been their only "camera", and they are amazed at how great the pictures are now. The other group is people who argue the details of a phase detection autofocus in the Canon DSLR and the 51 point field autofocus of the Nikon DSLR and can't believe that the handset makers are allowed to actually call these things 'cameras' in their advertisements. Very little input from the folks in the middle.


> "The other group are people who argue the details of a phase detection autofocus in the Canon DSLR and the 51 point field autofocus of the Nikon DSLR and can't believe that the handset makers are allowed to actually call these things 'cameras' in their advertisements."

This demographic can be safely ignored. The people who get the most up in arms about the "purity" of photographic tools are also the ones producing the least work. They're the ones who buy $10,000 worth of bodies and lenses but can never go beyond photos of their local park, or sharpness test charts in their basement. This group isn't good for much more than vociferous, highly-technical religion wars on the Internet. We'll start caring what they think when they start producing work.

In the meantime there are many passionate photographers out there producing great work, with a variety of tools, cheap to expensive, simple to complex.

This guy with a crappy old iPhone 3GS has been taking better photos (and publishing them) than the bulk of people with 5D3's and 70-200 f/2.8's: http://boingboing.net/2009/10/29/photographer-takes-p-1.html


Really, I don't see anything good in his photos. All these post-processed photos lack a lot of detail, even by smartphone standards.


And this is part of the problem: it was never about detail, or more generally technical perfection. Yet that's where the conversation starts and stops. The conversation around cameras focuses on noise level, dynamic range, or lens sharpness, none of which are particularly critical traits for producing great photography.

The Tank Man[1] photograph would not pass muster even by low-end cell phone standards today, but it's as powerful now as when it was shot.

The same goes for Cartier-Bresson's famous "leaping man"[2]: more dynamic range, more sharpness, less noise, less grain; none of it would have made the image any better.

Even moving into modern times, Bruce Gilden didn't need perfect sharpness or dynamic range to document the yakuza from the inside[3]. Not only did he not fuss over focus points and phase-detect vs. contrast-detect autofocus, he didn't even have autofocus!

Like Weegee said: "f/8 and be there". It's about the picture, not the gear. You only care about the gear insofar as it enables you - and nearly all cameras (including cell phone cameras) are well past the point of enabling.

And this is the problem with the "argues about gear on the internet" demographic: they don't produce. They spend a lot of money and time acquiring, testing, and verifying the technical perfection of their gear, and too little time actually photographing. The best you get out of this group is technically-perfect banality.

[1] http://iconicphotos.files.wordpress.com/2009/04/030.jpg

[2] http://www.dienes-and-dienes.com/Assets/CBManLeaping.jpg

[3] http://3.bp.blogspot.com/--vyBAjPjz6U/TkgF6Qla38I/AAAAAAAAJA...


All other things equal, better equipment will allow you to get better shots. Bear in mind that the iconic Tank Man photograph was shot from half a mile away; it would have been impossible to shoot with today's camera phones.

I know that when I shoot my banal pictures of banal life, I prefer pictures where the focus isn't accidentally on the background or where the subject hasn't half-exited the frame because of a delay between pressing the button and the shot going off.

Most of us are not in the habit of documenting the Yakuza or happening to be present in world-changing events. We just want to take pictures that look good, even if the subject is banal.

Edit (since I'm not allowed to comment on the post):

> This is the part where we disagree. Better knowledge will allow you to get better shots

We're not in disagreement. "All other things equal" means just that - all other things being equal - including knowledge. Of course a bad picture with perfect focus is still a bad picture. Likewise, a good picture can become even better if it's technically accomplished. Tank Man is an iconic photograph solely because of the subject matter. Ansel Adams didn't settle for a dinky rangefinder when heading out into Yosemite. He carried large heavy equipment because it would allow him to capture the detail and sharpness he wanted.


> "All other things equal, better equipment will allow you to get better shots."

This is the part where we disagree. Better knowledge will allow you to get better shots; we've gone past the point long ago where improving technical capability made dramatic improvements to how well people can photograph.

Ultimately what makes or breaks a photograph isn't sharpness, or even focus in particular, it's exposure and composition. That shite picture isn't made any less shite because the focus is bang-on. Likewise, a well-composed image survives a great deal of mis-focus, blur, or other technical faults.

If an image is shit because the focus was off, I'd hate to say it, but it wouldn't have been an excellent image even if the focus was on.

Short of extremely equipment-demanding niches (like macro, or sports) the problem is practically always with the photographer, not their equipment. The photographer is the most common bottleneck in creating great images - of any subject, banal or world-changing. Spend money on education, not more gear - and more importantly, spend time.

Technological advancements will give us much-appreciated conveniences, but they won't make you a good shooter if you weren't one before.

Side note: this is why I'm a fan of things like iPhoneography classes, as much as people like to mock them. Ultimately, putting a camera into everyone's pockets has been great for photography and expression, and elevating the quality of this stuff (whether intended as art or just personal enjoyment) involves education, not gear.


I'm inclined to agree with you, but a bunch of common counterexamples immediately spring to mind. Very often, sorting through a huge number of low-quality phone camera pics from a party or something, I encounter a "happy accident" that would have made a great photo if only it had been properly in focus, not blown out by light, or less noisy (especially when you'd normally crop the pic; phone camera grain is way uglier than film grain), or any of those things that a medium-quality camera suffers way less from.

Am I wrong? Is the blurry picture possibly just as great (even if you can't recognize my friend's face), or would it have turned out to be a shitty picture after all, obscured by the lack of focus?


Would you have got the details right and taken that particular photo at all if you were paying more attention to each shot? I don't think this has much to do with the camera - if the picture's blurry, you were likely taking it in a hurry, and you would've taken it in a hurry if you had a real camera.


> And this is part of the problem - it was never about detail, or more generally technical perfection.

While I agree with you for the most part, and never argue about gear (other than having been a fan of Bibble before they were bought)... there is one thing I really miss from your example photos, and from smartphone photography, and that is something being out of focus. Now, I'm not a great photographer, I just take snapshots and then edit them badly, but for example this I like: http://a.sandboxx.org/johann/376/

I wouldn't be surprised if that will one day be possible with very compact lenses, or with multiple ones plus software or whatever, but for the time being, at least for some shots, you need focal length, period. Candid and news photography is a lot different from, say, wedding and product photography. And if an algorithm allows better handheld photos, it also makes tripod photos even better, so sometimes there isn't even a gap being closed, it all just gets shifted.

> nearly all cameras (including cell phone cameras) are well past the point of enabling.

During that same vacation, while sitting at the beach in the morning, I noticed a fish jumping out of the water. Using my 1337 video game target leading skills, I moved the viewfinder across the ocean surface where I expected the fish to be, and was able to take two photos of it, one with two ladies having a chat while doing their morning swim. Again, not a great photo, but for me as the person who sat there, it's a nice memory, something that will always make me smile: http://a.sandboxx.org/johann/241/ And it's still totally a snapshot, I just sat at the beach, watching sea gulls, smoking cigarettes. Candid ocean photography, if you will ^^ Of course I got lucky, I wouldn't have had time to change lenses; but still, cell phone cameras are not "well past" being able to reach everywhere.

I'm really happy for anyone who makes photos, and I think you are right that dismissing them on technical grounds is silly. A good photo does not need to justify how or why it was done, and when it comes to once-in-a-lifetime moments, even a drawing from memory is better than nothing. But just because you can make a useful website with just HTML and no CSS, and a very pretty and functional one without Javascript, doesn't mean these tools don't have their uses, right? What's more, sometimes a website that works fine without them would work even better with them, and not everybody who cares about progress and performance is failing to get things done; those things are orthogonal. It can't just be easily generalized.


Check any photo from any of the great 35mm photographers of the past century and let me know how much detail you see and how much that detracts from the photo's value (not talking about monetary value of course).


yeah I mean really: http://www.boingboing.net/filesroot/alleyway.jpg

It's not what I would call pro by almost any definition. There is the 3x3 grid alignment, but dat fuzzy focus, those colors and the overblown lantern...

It's a fine personal snapshot, but to call it "pro" is weak.


Detail is not necessarily what one should be looking for in a photo.


Sure, that was just one of the things it lacked as compared to the DSLRs. I'm not sure what's so amazing in these photographs (as compared to DSLRs) - to my untrained eyes they don't look great.


You are still looking at it from a purely technical perspective. In the article, the photographer already mentioned that he's only using the iPhone for the "composition and the perfect photo opp".

The point of the parent, I think, is that in the hands of a competent photographer, any adequate camera can produce works of certain artistic merit. Those who are most up in arms about the technical aspects of a camera often ignore the artistic aspects of a photograph.

Here is a video of how an award-winning cinematographer (Philip Bloom) films a short video using a low-resolution Barbie doll camera: https://www.youtube.com/watch?v=9VS3C183G8g

All technical aspects of the Barbie Doll camera are dismal, but the end product, when viewed as a whole, is not something that an Average Joe can produce, even if equipped with a RED ONE camera.


Guitarists have the same arguments (like audiophiles) about warmness and tone of their work. But the fact of the matter is: if you don't practice enough to reliably produce a chord of your choice, does it really matter if you're plunking on a Squier Strat or one of Hendrix's original guitars? Not really. Equipment bragging rights don't mean anything without a modicum of practice and talent.


Interestingly, he seems to use multiple cameras. For example, the alley shot was taken with a Lumix GF1: http://www.flickr.com/photos/sasurau/3944365957/ - personally I find the feeling of the Lumix one far more emotive in this case.

I totally agree that it's not the price of your equipment though, it's what you do with it - most of the time. Photography has a great many reasons to occur and there are aspects which most definitely benefit from higher quality equipment. For example, nature macro photography (some beautiful examples from the same photographer above: https://www.flickr.com/photos/sasurau/sets/72157631002090680... ). I guess the more you are looking for "The Decisive Moment", the less raw quality matters. Although, again, something like studio photography is likely to be a balance of quality and decisive moments.

Ultimately, unless you're getting most of your income from the photographs, just make sure however you're doing it you're having fun and enjoying it :)


Mostly agree. Under the right conditions/use cases (and in the right hands, of course) phones can take pretty nice photos -- though I can't say most of those in the link you provided do a whole lot for me. For the majority of users, today's camera phones are better than whatever mass-market point and shoot they were using, going back to the Instamatic days. With the exception of a few niches, there's little reason to buy a dedicated camera these days unless you're going interchangeable lens and learn how to use it -- and make the effort to bring it with you.

And there is a generally annoying group of online gearheads who spend a lot more time obsessing about camera specs than they do going out and taking photos.


Looks like he has upgraded to a 5s...

http://sasurau.squarespace.com/about/

It is a pity; there is something magical about the lomography look of a beat-up 3GS.


Bullshit.

Features like megapixels, 51 phase detection AF points, face detection metering and other goodies all allow real people to capture better, more memorable images than cell phone cameras.

1. https://www.flickr.com/photos/jemfinch/8594571664/in/set-721... could not have been captured with a cell phone camera. I was holding my DSLR in my offhand and pushing him on the swing with my other hand.

2. https://www.flickr.com/photos/jemfinch/9326415006/in/set-721... could not have been captured with a cell phone either. My wife was holding him and he was peeking out over her shoulder then switching to the other shoulder immediately when I brought the camera up. With a cell phone's shutter lag, he'd have been on the other side before the picture was taken.

3. https://www.flickr.com/photos/jemfinch/8582319362/in/set-721... no cell phone camera in the world could have captured this in such low light so successfully.

4. https://www.flickr.com/photos/jemfinch/9012492563/in/set-721... there are a hundred fleeting moments a day like this, and no cell phone in the world is fast enough to capture them.

It's easy to take a picture of an abandoned alley or cats lying on blacktop with a cellphone camera. Try doing that with a newborn's first smile, or a mischievous preschooler's momentary smirk.


I think you are right, particularly since your focus is (at least partly) on capturing your kid for posterity rather than making impressionistic or abstract work.

There is, however, a marked tendency in photography to obsess about the technical. In the recent series of books "100 ideas that changed..." that also covers fields like art, graphic design, fashion, architecture and film, the photography book ("100 ideas that changed photography") is notable for its focus on technical developments. The other creative fields have their idiosyncrasies, and each book has a different editor, so this is hardly a well-designed study, but photography stands out as the most concerned with the purely technical. That seems to be just the way it is.

Nobody would suggest that wobbliness or "looking like it's about to fall down" are good ideas for architecture, but for some reason people coming from the art world like the idea of messing with photography's technical perfection. Maybe it's a personality thing.


Way to miss the point, which wasn't "a cell phone can do everything as well as a DSLR", which is objectively false, but rather that "what makes a great image has very little to do with technical ability".

I was going to do a point-by-point rebuttal on how every picture you've brought up can be taken with a cell phone camera, but that seems a wee bit pointless.

Instead look at those photographs and tell me what exactly makes them worthwhile. Is it the creamy smooth bokeh? The pin-sharp focus? The incredible low-light performance?

Or is it the fact that it's your child? Is it the human element - capturing someone in a moment of happiness, of vulnerability, or of peace?

This is the whole point of my rant before - all of this talk about sharpness, noise levels, autofocus points, etc etc, does nothing for what's in your image, and in the end that's what counts most (and what most people are worst at). It's light, it's subject, it's composition. Everything else is details - a compact point'n'shoot wouldn't give as much depth of field in photo #1, but who cares? A cell phone camera would have had a tad more noise in photo #3... but who cares?

No one. Except fellow pixel-peepers who get off on technical perfectionism. None of your relatives will even begin to think "wow, look at the smooth tones even in the shadows!", nor will you after a few weeks. I'm willing to bet that in a year or two, when your son is more grown, and you come back to this photo, you will not even begin to think about the technical aspects of it. Photography is so much bigger, more powerful than this.

A good photograph is 95% subject, composition, and light, and 5% technical mastery. It's nice to nail the last 5% sure, but there is altogether way too much noise being made about the 5% to the deafening silence about the 95%.

I'm willing to bet that if you took two identical novice photographers and gave them each $1,000, one spending it on classes and books and the other spending it on gear, the gearhead would show less improvement in actual images (by a wide, wide margin) than the one who spent it gaining knowledge. This scales, too: I'm willing to bet the effect is even more pronounced if you gave them each $10,000 instead.

The bottleneck, as always, is not the gear. It's the person behind it.


No, I didn't miss the point. Your point is wrong. You even (unwittingly) argue against it in another post of yours. Cartier-Bresson's "Man Leaping" could not have been captured with a cell phone camera. The shutter lag is just too bad. As another poster pointed out, "Tank Man" was taken from a half mile away.

> Instead look at those photographs and tell me what exactly makes them worthwhile. Is it the creamy smooth bokeh? The pin-sharp focus? The incredible low-light performance?

It's all of the above. When you remember events, you're not actually remembering the event itself: you're remembering the last time you remembered the event. The creamy smooth bokeh, the pin-sharp focus, etc. help me remember these events with more clarity and appreciation than I could otherwise. Every time you make the claim that technical quality doesn't matter, you're ignoring very fundamental cognitive science to the contrary.

You're claiming that "it's light, it's subject, it's composition", and while these are necessary conditions to capture good pictures, for millions upon millions of historic and sentimental potential pictures, they're not sufficient: better equipment leads to many, many better pictures than cell phones can provide.


"Light" is one of the reasons cheap SLR cameras are massively better than cell phones.


> Cartier-Bresson's "Man Leaping" could not have been captured with a cell phone camera. The shutter lag is just too bad.

Humorously, he didn't even see the man leaping, and wasn't timing for that. He took some shots, and it turned out to be a keeper. Such is exactly the nature of smartphone shots, as an aside -- people take countless shots, and among them some gems appear.

> for millions upon millions of historic and sentimental potential pictures

This is where your argument falls apart. A modern smartphone has better image capture than SLRs -- top of the line SLRs -- from just ten years ago. Despite your focus on shutter lag (you can prefocus/expose on a smartphone just like on an SLR, and then have a close-to-instant shutter), the shutter lag on current smartphones is, again, similar to that of top of the line SLRs from a few years ago.

With a large sensor and lenses, at any given time you'll always get a better result (presuming such equipment is actually available when the moment arises). However, at the core of photography the technical side is far less important than it is held to be, and if we really need to bring up history, a modern smartphone is better than what the most committed photographer had in their arsenal not that long ago.


Your pictures hold importance to you for obvious reasons, but your impression that they are the result of some sort of technical perfection of the hardware is absurd. I see very similar photos, by the dozens if not hundreds, every day on Facebook, taken by people with their iPhone, GS5, or whatever else they happen to have. Those "fleeting moments", exactly because they are fleeting moments, get grabbed with whatever is on their body 100% of the day.

I feel like this is a debate from 2008, discussing the camera on your new Blackberry.


As someone who has both a DSLR and a smartphone: smartphones are pretty good, and "the camera you have with you" is always better than the camera at your home. Plus, the wifi connectivity and ability to push photos in real time is really nice.

The DSLR pictures when I do get around to loading and editing are better. It just takes more time.

As for focus points, I have this weird camera, the Canon 70D. Every "pixel" on the sensor can phase detect. But I don't think the camera is making full use of the sensor. I wish I could write software for it.

An overview of camera focusing: http://www.dpreview.com/reviews/canon-eos-70d/3


Use a Magic Lantern build of the firmware in that case.


There's no Magic Lantern port for the 70D currently.

There has supposedly been work on a port for it over the past year, but there haven't been any releases of it.


Why would there be? They are content already. I think it's dangerous to bucket people in this way; I know plenty of people with $30k+ of gear who use their iPhone most of the time. Those two categories you have are not mutually exclusive.

Autofocus in the iPhone is much faster than in the majority of Canon (or Nikon) DSLRs, regardless of the number of focus points, partly because it's a lot easier to focus a lens that has nearly infinite depth of field! "Most photographers are also gear heads" is another dangerous thing to say, just like saying (quite correctly) that most serial killers are keen photographers: while true, it doesn't necessarily add much to the conversation.

Why we have this obsession with resolution is largely because of computers. In the "old days" I would make contact sheets and then prints, mainly 8x8" (from a Hasselblad, pre-Instagram-square-hipness), and they would (and do) look stunning. To use object-oriented speak, a fiber print IS-A photograph, whereas a digital print HAS-A photograph on its paper surface. They are different: when silver crystals are in (vs. on) the medium, it becomes something altogether different. In these modern enlightened times we put a photo up on a big retina screen, stare at it from 2', and of course we're going to see imperfections; this creates a cycle of obsession with sharpness. But of course it doesn't matter: would a painting be any better if it were created with a smaller brush? This is all a very classical philosophical debate; it's "the medium is the message" all over again.

From my perspective this work is impressive. Does it make better photos? No, but it does make the photos more representative of what the taker was looking at. Over time we are subtracting the technology component from photography; eventually we will have the ability to record easily any visual experience we have. For me this comes from having enough experience in using equipment and eking the most out of it. In contrast, my daughter is frustrated by cameras (both her Canon DSLR and her iPhone): she will see something like the solar eclipse and wonder why simply pointing the camera at it won't produce a photo that looks the same as the one the Mt. Hamilton observatory took with their 36" reflecting telescope, or why a photo at her friend's candle-lit birthday party didn't look how she remembered it. Her generation will grow up with sufficiently advanced technology that a lot of the inconvenience of photography's technical side has been automated, just as I grew up in a time when emulsions were capable of exposing in a few hundredths of a second rather than minutes.


The Galaxy S5 and iPhone 6 both feature on-sensor phase detection. Further, it's worth noting that the whole notion of "n points of autofocus" is a flaw of phase detection, not a feature -- contrast detection has millions of possible points of autofocus. And indeed, most high-end SLRs have contrast-detection modes (e.g. Canon LiveView) that bring benefits that often merit their regular use, and are mandated for things like video mode.
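A toy illustration of why contrast detection can focus anywhere on the frame: sweep the lens and pick the position that maximizes a sharpness metric over whatever region you care about. Gradient energy is used as the metric here; real cameras use more sophisticated metrics and search strategies:

```python
import numpy as np

def sharpness(img):
    """Gradient-energy focus metric: larger when edges are crisper."""
    gy, gx = np.gradient(img.astype(np.float64))
    return (gx ** 2 + gy ** 2).sum()

def defocus(img, radius):
    """Crude box blur standing in for defocus at a given lens position."""
    out = img.astype(np.float64)
    for _ in range(radius):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5
    return out

scene = np.zeros((32, 32))
scene[8:24, 8:24] = 1.0                 # a sharp square as the subject
# "Lens positions" 0..4, position 0 being perfectly in focus
frames = [defocus(scene, r) for r in range(5)]
best = max(range(5), key=lambda i: sharpness(frames[i]))
print(best)  # 0: the sharpest lens position wins
```

Because the metric can be evaluated over any crop of the sensor image, "focus points" are wherever you want them; the tradeoff is that sweeping and re-measuring is slower than phase detection's one-shot distance estimate.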

I think the knowledge gap you're presenting is contrived. I used SLRs for years; I seldom use them anymore because the results from my smartphone, under many normal circumstances, are comparable. This is the case for many people. It's "the camera you have with you" and all of that, and they just keep getting better and better.


The primary benefit of a larger sensor is shooting in sub-optimal conditions. If all you do is take photos in perfect daylight, then a smartphone can likely do just as well.

SLRs shine when there is more lighting range in the shot (e.g. strong shadows and light), and in particular in lower-light situations. A smartphone has to start doing HDR (which has inherent downsides, like its inability to record motion) a lot sooner than most SLRs, as the dynamic range is wider on the latter.

You also lose physical controls and the ability to change lenses on a smartphone, which slows you down, and makes your shots look more samey. The viewfinder (either EVF or OVF) is also a huge benefit when shooting in either strong sunlight or extreme darkness (as the LCD's backlight can bleed light into your shot).

As to contrast vs. phase detection: contrast is very accurate for non-moving targets and even some moving targets (thanks largely to software). Phase detection is still king of the castle when it comes to high-speed and erratic motion, however, by far.


Yes, of course. I'm not saying that SLRs don't have a place. If you're a wedding photographer, you probably shouldn't bring your iPhone to the job.

But the post I replied to took it to the point of absurdity, where people questioned whether these were really "cameras", and you either have the fools amazed by their crummy results, or the pros who see through that ruse. That is nonsense. I've owned a number of SLRs through the years, back to a 35mm Minolta 9xi with a number of lenses. I am amazed at what is possible with smartphones, and constantly surprised at the very good shots they yield.

Further, just to put this into perspective, an iPhone 6 or a recent Lumia, with its tiny little sensor and tiny little lens, has attributes that beat out godlike SLRs from just a few years ago. Those cameras were pro quality, the best of the best then, but now the devices that beat them aren't worth calling cameras?

http://connect.dpreview.com/post/5533410947/smartphones-vers...


That article doesn't support your conclusions. The graph is totally worthless, as the article admits:

> The graph is misleading at first glance because the phones and the cameras sit on different scales. So it’s not saying that the phones are better than current DSLRs, despite scoring higher.

So your conclusion:

> Further, just to put this into perspective, an iPhone 6 or a recent Lumia, with its tiny little sensor and tiny little lens, has attributes that beat out godlike SLRs from just a few years ago.

is not only not supported but actually contradicted by your link. Did you actually read it, or just look at the graph and link it?


"Splitting the difference between candlelight and daylight, around 6 years of technology has made up for the massive difference in the size of the lenses and sensors between the best phone and the $2,000 DSLRs."

"Phil Askey from DPReview described it as “the absolute best in its class, with the best image quality, lowest high sensitivity noise, superb build quality and excellent price.” He described the “Excellent resolution”, the “Noise free ‘silky smooth’ images”, with “very low noise levels even at ISO 1600.” The EOS 10D ran rings around the film that we’d been using for 50 years in terms of clarity and freedom from grain.

Yet it’s comprehensively humbled by modern phones. The iPhone out-shoots it, and the Nokia out-resolves it, all by huge margins."

Should I continue? Did you actually read it? Did you just rush to it to find a confirmation that SLRs are better? Of course they are better. Its point is that the same SLRs we lauded as extraordinary a mere decade ago -- and in some cases much more recently -- are now humbled, badly, by standard smartphones. Do you think the SLR owner of a decade ago was saying "yes, my device is junk. Just sticking with the form factor until it gets decent."?

As to the chart that you seemed to take offense at, its point was demonstrating the extremely rapid improvement in smartphone cameras. Each year the camera is significantly better than the year before, still in the tiny, integrated little module.


The automatic alignment helps prevent ghosts from camera shake. But it would be interesting to see how they avoid ghosts from moving subjects like leaves in the wind if they are effectively exposing longer (by taking several photos in a row).


One should remember that writing camera software that is agnostic to the hardware is technically hard, and nearly impossible if you want gorgeous photos (a la iPhone). When taking astronomy images we would characterize the CCD (mostly for noise) each night, and you need that kind of approach to tune your software to your hardware.


> One should remember that writing camera software that is agnostic to the hardware is technically hard

In this case, is it?

Their solution involves using burst mode, then taking those many pictures and turning them into 1 high quality image. Burst mode would simply output a bunch of .jpg images... couldn't you run the algorithm on those standardized images?


You get what's called dark current (thermal noise) on CMOS and CCD sensors, and the signal-to-noise ratio goes down as you decrease the exposure time. You see the effect when taking a low-light photo: the graininess comes from the very low SNR letting the noise show through. You can smooth this out quite a bit by combining burst photos (see the Cortex Camera app), since the noise is random, but you always get better results with more light in each exposure.
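To make the averaging point concrete, here's a quick numpy sketch (idealized: a single Gaussian noise term standing in for all per-frame noise, and hypothetical numbers) showing that averaging an N-frame burst cuts the random noise by sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 50.0        # "true" brightness of one pixel
read_noise = 10.0    # per-frame noise standard deviation

# 100k trials of a single exposure vs. the mean of a 16-frame burst.
single = signal + rng.normal(0, read_noise, size=100_000)
burst = signal + rng.normal(0, read_noise, size=(16, 100_000))
averaged = burst.mean(axis=0)

print(round(single.std(), 1))    # ~10.0: noise of one shot
print(round(averaged.std(), 1))  # ~2.5: sqrt(16) = 4x lower
```

The signal stays put while the noise shrinks, which is exactly why the burst looks cleaner -- but each frame still needs enough light to register above the noise in the first place.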


Considering the many advantages to shooting in RAW, it is safe to assume that any one-size-fits-all software processing (a la iPhone) is vastly inferior to performing the image processing manually, with the data directly off the sensor, uncompressed and unaltered.


Would this be the end user doing that? Although you can't raw out of an iPhone it's not one-size-fits-all, I'd say it's the opposite as they have tuned the processing for that exact sensor/lens combination after testing.

To be clear, I would LOVE to get raw out of an iOS device camera. It would make motion analysis much more precise. With iOS 8 you have kinda/sorta control over the exposure time, but it could be more precise.


My understanding is that most flagship camera sensors are made by Sony, so there may be less diversity of hardware than there appears to be.


> However, bracketing is not actually necessary; one can use the same exposure time in every shot. By using a short exposure HDR+ avoids blowing out highlights, and by combining enough shots it reduces noise in the shadows.

Does this mean they're using ISO bracketing instead of exposure bracketing?


No, it takes multiple photos with the same settings and just averages out the noise in the shadows. That wouldn't keep the shadows from lacking detail though. You'd just get a smoother black!


You're correct that it's taking multiple photos with the same settings, but the trick is that it can choose different settings than it would have for a single photo. So instead of a single medium-ISO shot that loses all shadow detail but minimizes noise, it could take multiple high-ISO shots to capture the shadow detail and then average them to minimize noise.

So if you think of your dynamic range as being limited by the noise floor and the overexposure ceiling, this technique lowers the noise floor while keeping the ceiling right where it is, giving you a better effective dynamic range. ISO bracketing extends both the floor and ceiling by a much larger margin, of course, but it has issues that complicate its use in automated HDR shots, as described in the article.


I think it should work for shadows also? Adding together 10 photos taken at 1/100s should give the same amount of light as one photo taken at 1/10s.


But if each photo is just black because it's underexposed, then adding them together only gives you black.


If the pixels were literally black that would be the case, but even on very underexposed pictures I don't think that ever happens. There's still photons on the sensor, it's just that they are swamped by the noise. Like, most DSLRs nowadays capture 12-bit values, which suggests you would have to underexpose by something on the order of 12EV to truncate the output to zero.


Yes, and that is very, very possible. In any case, doing that would give you some relatively painful quantisation error, which is why the HDR approach works better.
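A tiny numpy illustration of both points (hypothetical numbers: a signal sitting at 0.3 of a quantisation step). With zero noise the ADC rounds the detail away and no amount of averaging recovers it; with realistic sensor noise the rounding is dithered across the threshold, and averaging many frames does recover the sub-step level:

```python
import numpy as np

rng = np.random.default_rng(1)
true_signal = 0.3   # well below one quantisation step (ADU)
n_frames = 1000

# No noise: every frame quantises to 0, so the average is 0 too.
clean = np.round(np.full(n_frames, true_signal))
print(clean.mean())  # 0.0 -- the detail is simply gone

# With noise (sigma = 1 ADU), rounding is dithered across the
# threshold and the frame average recovers the sub-ADU signal.
noisy = np.round(true_signal + rng.normal(0, 1.0, n_frames))
print(noisy.mean())  # close to 0.3
```

So whether stacking underexposed frames works depends on whether the noise is large enough relative to the quantisation step to dither the signal -- which, on real sensors, it usually is.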


From a statistical point of view, you can certainly get less noise, while preserving details, by averaging multiple shots.


With a tripod to stabilise a camera, is it possible to get more detail through multiple shots? Just averaging the noise would smooth things out, but I wonder if there is any more intelligent processing that could improve things. More shots could (in theory) provide more data.

A similar problem might be to try and improve the resolution of a single hand-held photo, shot in normal lighting conditions, by taking many pictures in rapid succession and then processing them. Each picture would be in a very slightly different position and so (again, in theory) might provide more data?


No, you can only get cleaner data and maximize the pixels you have, not more effective pixels. The way you could get "more" data is to simulate, say, a 28mm-equivalent lens with multiple shots from, say, a 135mm-equivalent lens at different locations and stitch them together.

That's what they do for those 3-gigapixel-plus shots you can zoom in on.


You can absolutely get subpixel detail by averaging multiple photos taken with the same settings and almost exactly the same framing. I had software in 2004 that did this, converting a short video into a high-res photo.
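A heavily idealized 1-D numpy sketch of the idea (point sampling, subpixel offsets known exactly, no noise -- real super-resolution has to estimate the offsets and deal with the pixel aperture): four coarse frames shifted by different subpixel amounts carry enough information to rebuild the fine-grained signal:

```python
import numpy as np

# A fine-grained 1-D "scene", 4x the sensor's resolution.
fine = np.sin(np.linspace(0, 8 * np.pi, 400))

# Four frames: the sensor samples every 4th fine pixel, each frame
# shifted by a different subpixel offset (0..3 fine pixels).
frames = {off: fine[off::4] for off in range(4)}

# Knowing the offsets, interleave the frames back onto the fine grid.
recovered = np.empty_like(fine)
for off, frame in frames.items():
    recovered[off::4] = frame

print(np.allclose(recovered, fine))  # True
```

No single frame contains the fine detail, but the set of shifted frames does -- that's the information hand shake between the two exposures and frames that video-to-photo tools exploit.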


You can effectively increase the ISO in the dark areas without increasing the noise by averaging together multiple photos.


It's unfortunate that HDR mode in Google Camera remains unavailable for phones other than the Nexus 5 & 6 -- I have a Moto X, and the app is really only useful for photospheres, since normal photos tend to look awful without HDR.

Has Google discussed any technical reason for this restriction? Seems like lots of third party apps support HDR on a wider variety of phones...


Google's HDR+ is different from regular HDR, which can be achieved through burst exposure bracketing and computation alone. HDR+ also performs stabilization and other tricks to improve the result, and these operations might depend on additional hints from the hardware to achieve the desired accuracy. OEMs are free to implement such features in their own apps, specialized to their own hardware, and most in fact do.


HDR+ is available only on Nexus 5 and Nexus 6, because only those devices currently fully support the Camera2 API, which is required for HDR+. This will gradually change, and more devices will start offering it.


Lens Blur is pretty great, too, and it's available for others. I think the restriction may be the fact that it has to 1) support OIS and 2) Google has to support that specific OIS.

It would be nice if Google could do something similar for EIS phones, too, though.


It's worth noting that with HDR+ enabled on the Nexus 5, you can no longer enable the flash or timer. And the processing time can feel frustratingly long. So leaving it enabled all the time doesn't work for everyone.


Not sure if there will be a change in the processing itself (bar filling up afterwards), but with Lollipop taking the picture (circle loading) should be a lot faster. They also say it's 0.3-1 sec, which isn't too bad. Normal photos used to take that long on high-end phones a couple of years ago.


Does anyone know if this is the same technique that iOS uses for HDR (I know the basic principle of multiple exposures is the same). Are there different algorithms at play here?


I posted this 2 hours ago [1] and both the posts are in home; the URLs differ just by a trailing question mark.

How come the duplicate detector didn't trigger?

[1] https://news.ycombinator.com/item?id=8515417


I think the duplicate detector looks at exact URL matches. Things like a trailing / or ? will let a new story go through.


The duplicate detector is usually quite a bit more clever than just an exact URL match. Why the downvote?

EDIT: In fact, I just tried to re-submit an existing link by adding a question mark and it didn't go through.


    > The duplicate detector is usually quite more clever than 
    > just exact URL match
This doesn't match my experience, but then I usually editorialize my titles if I'm resubmitting something I know is here already.


Yikes. That first picture looks awful. The HDR+ version looks like cheap CGI. Very plasticky. Some of the other pics are nice, so that seems like a really weird one to lead with.


As someone who is a pretty serious hobbyist photographer, it is actually the second picture that looks very unnatural to me.

The first one I can imagine existing in "real life" if there were a soft-box enclosed fill light out of the frame that was lighting up the lady's face (it can't be the sun since the tree isn't being lit the way it would if the light hitting her was point-light-esque), which would make the photo very "posed", but still possibly something that isn't highly post-processed.

The second picture with the two ladies is much more distracting. The HDR version keeps a lot more detail than the non-HDR image, but at the expense of making everything look extremely flat and unnatural as your brain tries to process how the lighting is working (this may be unique to people who are used to worrying about light in photography, and a non-issue for normal folks, I can't say for sure). The scene is clearly midday, but the light across the entire scene makes it seem like the sun must be very low, which doesn't match the contents of the scene.

The HDR version of the second photo would look better if the exposure on the two ladies were bumped up close to the value that the background sky was bumped down, but doing that automatically would be an amazing visual detection feat that I wouldn't expect out of a phone camera. The lighting still wouldn't make sense but at least it wouldn't look so flat.


The interesting thing about the second HDR+ photo is that it seems to have true high dynamic range instead of the limited range created by different exposure settings. The subjects are actually darker than in the non-HDR picture. I'm not saying that it looks great, just that it's interesting from a technical perspective.


the "HDR" look can be overdone.

If it's done right you don't notice it. Since Photoshop added HDR photo merging, it's been a thing to capture the full dynamic range of a scene and make it look HDR.

google "hdr photos" to see it overdone. https://www.google.com/search?q=hdr+photos


The blog post fails to discuss why Samsung Galaxy S4/S5 HDR results are much better than my Nexus 5's. Also, there are limitations in leaving it on. Besides being slow, it also has other restrictions, like the inability to use flash while in HDR+ mode. However good it may be by default, flash is sometimes important not only in the dark, but also when facing sunlight while taking the shot.


> The blog post fails to discuss why Samsung Galaxy S4/S5 HDR results are much better than my Nexus 5's

Well, $300 price difference can buy you a lot.

Now I get it. Comments like this one must be why Google felt pressured into making the next Nexus phone so much more expensive :P


Probably has a bit to do with it... I, for one was really happy with a great mid-range option that was by far the best value in each release... still on my N4, and not sure what I'm going to get as an upgrade.


Yep, I'm with you.

I'm more or less fine with the nexus 5's camera (usually if an event is important to capture with great quality, someone else has a better camera already there), so I'd much rather the nexuses stick to the price point than go up in quality for a disproportionately larger increase in price.


Galaxy S4 on stock rom photos look amazing. S4 on Google rom photos look like shit. It's in the software.


Indeed. The pre-Lollipop stock camera software was crap. Now it seems much better. Also Samsung's phones, even the S5, seem to be crap in the low-light/dark, too. They only excel in good to medium lighting.


The blogger wrote SynthCam for iOS first: https://sites.google.com/site/marclevoy/

Some similar technology, my impression is less "automatic" I think but more control. Haven't seen either in action so I'm not sure how it compares.


I have an Android phone, but what is described in the post sounds a lot like what the app Average Cam Pro [1] does on my iPad (multiple exposures, average them out) with the refinement that the ACP app also allows you to subtract a "dark image" to combat sensor-specific banding noise, plus the user can adjust exposure after the shot.

It takes the cleanest pictures by far, beating out my Canon 7D at times, but for optimal results it does require the device to be extremely steady. I've been waiting for this app to appear on Android too, but it hasn't so far.

[1] https://itunes.apple.com/us/app/average-camera-pro/id4155778...


PS some sample shots created by another photographer: http://blog.instagram.com/post/66187514665/howishootwater

Edit: I just discovered Flickr is filled with em. Some good examples on what happens when the number of shots you select (anywhere from 4-64 I believe) is less than optimal if you scroll down: https://www.flickr.com/search/?q=%23avgcampro

I use Avg Cam Pro in touristy situations with lots of movement. Take 64 shots and (moving) people simply fade away by the time you're done.


Every picture on my VSCO Grid was taken with the HDR+ Photo app. Please note: I have manipulated most of the images. After purchasing the N5 last year, I quickly learned that the Nexus 5 camera should always be set to HDR+. Even though the HDR process takes multiple images, many of the pictures on my VSCO grid were taken in motion; they still have a focus and do not look blurry.

http://fantasma.vsco.co/


Why can't Canon and Nikon DSLRs provide some basic built-in HDR?


At least some modern DSLRs do. My Canon 5D Mark III supports hand-held HDR for example. It did take a while to find its way into higher-end models though. I assume the thinking was in part that the primary audience would be shooting on tripods and could just combine bracketed photos on the computer.


Yeah, this is fine. But my Nexus 5 takes half of its pictures out of focus. It is really, really frustrating. Can't they solve that hard problem first? :-)


HDR on my Nexus 4 allowed me to take almost-in-focus photos at live music events, until it started to crash the camera / reboot the phone. Oh well.


Not really interesting since other OEMs have had this for years. Good to see Google trying to keep up with their low cost alternative offerings, however.


I have had HDR+ for over a year. This post is an overview of the technology and not a product launch.


Meh. The camera software on Android is so bad that a default-setting picture from an iPhone looks as good or better than an HDR picture on Android. People have demonstrated the quality you can get from the sensor on a Nexus 5 if you capture raw and process offline[1]. I realize that these Google[x] researchers are probably not responsible for the mainline camera application, but the organizational details of Android management don't interest me. What would interest me would be an Android camera app that captures decent pictures, has usable auto-exposure and auto-focus algorithms, and doesn't take tens of seconds to start.

1: http://imgur.com/a/qQkkR#0


All that guy seems to have done is manually set the ISO to 100 when taking a relatively dark picture with his phone on a tripod. If the camera app did that by default pretty much every photo would be blurry


I have a Nexus 5 at the moment and while the pictures are not amazing, what boggles my mind is that the camera sensor and the phone's screen don't have the same aspect ratio!


It usually makes more sense for sensors to be closer to a circular shape (i.e. square aspect ratio) because the image projected by the lens is going to be circular anyway. So if you have a very wide sensor you lose a lot of light on the top and bottom in order to expose the sides correctly.


You can just crop it if it bothers you. The lenses throw a circle of light, so the farther the aspect ratio gets from 1:1 the more lens you are wasting. What surprised me was the new Blackberry has a square (1:1) screen but doesn't capitalize on it with a 1:1 camera sensor. Probably they couldn't source a decent sensor with that aspect ratio.
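A rough back-of-the-envelope check on that claim (assuming the sensor's diagonal exactly fills the lens's image circle, pure geometry, no vignetting):

```python
import math

def circle_fraction(aspect):
    # Sensor w x h with w/h = aspect and diagonal equal to the image
    # circle's diameter d: h = d/sqrt(1+aspect^2), w = aspect*h, so
    # area fraction = (w*h) / (pi*(d/2)^2) = 4*aspect / (pi*(1+aspect^2)).
    return 4 * aspect / (math.pi * (1 + aspect ** 2))

for name, a in [("1:1", 1.0), ("4:3", 4 / 3), ("3:2", 1.5), ("16:9", 16 / 9)]:
    print(f"{name}: {circle_fraction(a):.1%} of the image circle used")
```

By this measure a square sensor captures about 64% of the circle, 4:3 about 61%, 3:2 about 59%, and 16:9 only about 54% -- the "wasted lens" grows as the ratio stretches away from 1:1.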


Worth noting that this is the case across most compact camera devices. Most smartphone camera sensors are 4:3 (e.g. iPhones), while the screen on the device is often 16:9 or 16:10. In contrast full-sized SLRs are usually 3:2 owing to the 35mm legacy.


How is it fair to compare the default settings, which are presumably optimized for handheld shots, to a half-second exposure from a tripod? Of course a picture with a stable half-second exposure will be far, far better than anything the defaults come up with. But it's effectively complaining that a tripod gives better results than handheld shots.


I guess HDR+ is supposed to close that gap. Recent iPhones do something very similar all the time.


HDR is provided as an additional, alternative photo by the iOS camera app, if the scene is deemed to benefit from HDR processing.


What a dismissive bunch of prattle. Meh.

This is about algorithms to achieve better results with a given sensor. The Nexus 5, like the various Nexii that came before, has a poor image sensor -- it's a $300 smartphone, and LG wasn't going to put the best in it. HDR+ gives rather decent results in a wide range of settings despite that, and mine has managed a large number of fantastic shots.

The Nexus 6, being twice the price, apparently has a fantastic sensor, and with that dramatically better base results, made even better with HDR+. Awesome. That's good.


I'm hoping the 6's optical image stabilization will help make HDR+ even better.


The 5 already has OIS. Is the 6 supposed to be better?


My mistake, I forgot that the 5 had OIS. That was actually the feature that compelled me to accept the camera downgrade when I bought mine. Regardless, the 6 has a much better camera than the 5, so I imagine that its OIS will be at least somewhat improved.


The Nexus 5 has a really excellent Sony camera module with 1.4 micron pixels. It's comparable to bordering on identical to the sensor in the iPhone 5s, which produces far better images.


They are entirely different sensors, and share nothing at all in common. To start, the Nexus 5 has a 1/3.2" sensor, while the 5s has a 1/3.0" sensor (making evident from the start that in no universe are they "bordering on identical"). The iPhone has a f2.2 lens assembly. The Nexus has a f2.5 lens assembly. Empirically in both of those cases the iPhone has the superior option, not even getting into the specifics of tiering and segmentation.

The Nexus 5 has a camera sensor equivalent with the iPhone 5, on paper, but is handily and easily beaten by the 5s, and it shows.


> They are entirely different sensors, and share nothing at all in common.

I wouldn't say that. The 5s uses the IMX145 and the Nexus 5 uses the IMX179. Both are Sony CMOS arrays.

The 1/3.0 size of the 5s array does enable larger individual photon 'buckets' for better light sensitivity but they're both closely related in terms of contemporary Sony technology.

The faster 5s lens is a key differentiator, I agree. But it is rather outclassed by the Galaxy S5 and the Xperia Z2 / Z3. No serious Android-using photographer looks much beyond those for a smartphone.


You're right, I meant the 5, not the 5s. No reason to get so upset.



