New Camera Sensor Eliminates Need for Flash (technewsdaily.com)
63 points by bcn on June 1, 2013 | 47 comments



What a strange story, basically "Oh gee, we've created a sensor that is better than all other sensors using the material that everyone thinks is a super material" followed by "If the industry chooses to adopt his design, Wang said it could lead to cheaper, lighter cameras with longer battery lives for all."

Really? OK, so if you really create a sensor with 1000x the light-gathering capacity of CMOS sensors and a commensurate 1000x reduction in noise, and if it can be manufactured in volume, why wouldn't someone build a camera with it? So why the 'If' in that last paragraph?

This guy published a paper two years ago [1] on creating light cavities in graphene. That appears to be the paper the article is working from [2].

[1] http://link.springer.com/article/10.1007/s11468-011-9260-1

[2] http://www.nature.com/ncomms/journal/v4/n5/abs/ncomms2830.ht...


From a different source:

"In reality, though, and contrary to some big-name publications, this graphene sensor isn’t going to replace the silicon sensor in your camera. Graphene is still incredibly hard to work with on a commercial scale (here the researchers are still mechanically exfoliating graphene and placing it on a silicon substrate with tweezers), and there’s no indication that this method would ever scale up. What is far more likely is that these graphene photodetectors might be used in optoelectronics, where optical and electronic components are squeezed into the same system/chip, or in enabling faster fiber-optic networks."

http://www.extremetech.com/extreme/157082-graphene-sensor-is...


If it's possible to do a manufacturing process by hand with tweezers, I'll bet it's possible to invent a machine that does the same process faster and at scale.


Yeah, that also struck me as very odd.

Assuming all the other claims are true, my best explanation is that this guy's research is coming out of left field, so he's not very familiar with how the industry works and can't be certain of anything when it comes to how his research will be used.


That sort of caution is very typical scientist-speak.

I realize that we hear a lot in the media from scientists who do not speak in this cautious way - they get publicity precisely because they are among the minority willing to make bold claims about future applications of their research.


Hedging his speech. There could be potential hangups he hasn't mentioned or just doesn't know about, for example.


To accommodate the 1000x more sensitive sensor you will need a really fast shutter speed - maybe something like 1/1,000,000 of a second. Because shutters are mechanical devices, it might be difficult to achieve this without another leap in engineering. I am assuming that we cannot do much with the lenses.
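
Rough arithmetic (a sketch only; it assumes exposure scales linearly with sensitivity, and the 1/1000 s daylight baseline is just an example):

    # A sensor 1000x more sensitive needs 1000x less exposure time
    # for the same scene brightness and aperture.
    base_shutter = 1 / 1000          # s, a typical bright-daylight shutter
    sensitivity_gain = 1000
    print(base_shutter / sensitivity_gain)   # 1e-06 s, i.e. 1/1,000,000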



Thanks! I wasn't aware of these electronic shutters.


"1000 times more sensitive"

What does that even mean?

Even 10 times higher quantum efficiency shooting wide open on optical band targets would be physically impossible.

On top of that, 'Eliminates need for flash' is not a great title - most people who care about photography don't use flash much for direct illumination; very flat targets, indirect illumination, close-range macro shots, and filling in a dark foreground are the exceptions. Cell phone shots look like cell phone shots partly 1) because they compensate for the tiny sensor with crappy LEDs, not even proper xenon bulbs, but mainly 2) because a 1/4"-class sensor can only offer about 1/3 the SNR of a cheap point & shoot's 1/2.3"-class sensor on a good day, for the same level of illumination. They need the flash to work, and so you get flash-based shots, which usually look horrible because of the distinctive way the flash lights the scene.
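
For a rough sense of that SNR scaling: shot-noise-limited SNR goes as the square root of photons collected, i.e. the square root of sensor area at equal illumination (a sketch using nominal sensor dimensions I'm assuming; read noise and QE differences push the real gap wider):

    import math
    # Nominal active areas in mm^2 (assumed typical values):
    area_quarter_inch = 3.2 * 2.4      # ~7.7 mm^2, 1/4"-class sensor
    area_one_2_3_inch = 6.17 * 4.55    # ~28.1 mm^2, 1/2.3"-class sensor
    # Shot-noise-limited SNR ratio = sqrt(area ratio):
    print(round(math.sqrt(area_quarter_inch / area_one_2_3_inch), 2))  # ~0.52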


> ...most people who care about photography don't use flash much for direct illumination...

You might want to tell that to the nice people at Elinchrom, Profoto, Broncolor, Hensel, Bowens, Quantum, Paul C. Buff, and so on. Even on-camera flash for good wedding and event photography is normally the primary lighting indoors, since it can be controlled when the environmental (ambient) lighting can't be. "People who care about photography" master the light rather than letting it master them.


I was careful with my wording: 'direct illumination' - point a xenon/LED directional flash at the target, shoot, and enjoy off-axis vignetting, severe distance limitations, an extreme inverse-square contrast effect, and sharp, nearly-incident shadows. Reflectors ('umbrellas'), big diffusers, aiming the flash at the ceiling - all the hardware that the places you mentioned sell - are aimed at avoiding these effects while still controlling the light quality.
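
The inverse-square contrast effect is easy to put numbers on (a sketch; the distances are arbitrary examples):

    import math
    # Light from a small flash falls off as 1/d^2, so a background
    # twice as far away as the subject receives 1/4 the light.
    d_subject, d_background = 1.0, 2.0            # metres
    falloff = (d_subject / d_background) ** 2
    print(falloff, math.log2(1 / falloff))        # 0.25 2.0 (two stops darker)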

Amateur photographers don't know this. They use on-camera flash because their cameras force them to use flash to get a reasonable signal-to-noise ratio. For them (photographers who will never even attempt to use specialized flash diffusers/reflectors), the best option for dynamic indoor & evening scenes is a bigger-sensor camera, or if they want to get really fancy, aiming a speedlight at the ceiling.


Much of the lighting I do is very much direct lighting; softboxen and other play-it-safe modifiers have their place, but you couldn't emulate, say, Karsh's style with them. And the inverse-square law is your friend, not your enemy. Light is merely a tool to get the shadows in the right places.


We photograph about 20-30 weddings per year, and I can vouch for the control aspect. The color of light matters. A lot. Being able to be consistently at 5000K (the color temperature of most studio strobes) makes editing dramatically easier and quicker. Color casts from using primarily or exclusively ambient lighting can be hard to correct for, and virtually impossible in mixed-lighting situations.


I was thinking more about the direction and size of the light source, actually (with contrast coming in third). I can always throw on a cut of CTS or CTO to get a near-match to the predominant ambient, and that, too, gives me a known colour temperature so RAW processor presets (or batch applications of adjustments or stored camera profiles if I'm using, say, a Color Checker Passport) can work just fine. The flash makes the difference between taking your subjects to where the light is "good" and making good light happen where your subject happens to be. (Completely killing ambient means losing the context, which may be a good thing or a bad thing.)

A more sensitive and efficient sensor may make portable continuous light sources more practical in the field (modulo photon shot noise — no sensor can make light a less probabilistic phenomenon), but it doesn't eliminate the need to make good pictures under sometimes unfavourable circumstances. And unlike the stereotypical landscape photographer, an event photographer can't just pack it up and come back later when the light is better.


> most people who care about photography don't use flash much for direct illumination

I used to be adamantly against flash too. Then I read some of The Strobist blog. Now I am only against bad use of flash, the type built into P&S cameras.

* http://strobist.blogspot.com/

Warning, it is very well written and way too informative. I don't even own a DSLR, but I really want to get my own remote speedlight just for the occasions when I use my friends' nice cameras.


> Even 10 times higher quantum efficiency shooting wide open on optical band targets would be physically impossible.

Could you tell me more about the limitations? I always felt current sensors are very insensitive and hoped that the sensitivity could be improved immensely. Even sensitivity like a cat's eyes have would be awesome.


Sure, I'll explain.

Once you start working with extremely sensitive sensors in very dim lighting conditions, you are basically counting the number of photons hitting each pixel. Quantum efficiency is a measure of the percentage of photons which are counted. A quick Google search turns up a paper measuring quantum efficiency of a CMOS sensor, with the sensor in question measured at 37% (meaning a 3x improvement is physically impossible):

http://www-isl.stanford.edu/~abbas/group/papers_and_pub/qe_s...
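
To make "counting photons" concrete, here's a toy simulation (mine, not the paper's methodology; the photon count is made up):

    import random
    # Each photon reaching a pixel is detected with probability QE.
    # At 37% QE, the best possible improvement is ~2.7x (QE -> 100%).
    random.seed(1)
    qe = 0.37
    incident = 100                     # photons reaching one pixel
    detected = sum(random.random() < qe for _ in range(incident))
    print(detected)                    # ~37 on average, with shot noise on top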

If you want to see what that translates to, look at high-end cameras. Bigger cameras tend to be more sensitive because the pixels are larger and the support circuitry takes up a lower percentage of the surface area, leaving more of it for actual light gathering. The current generation of DSLRs can go up to ISO 25,600 (or higher, actually). Searching Flickr will show you a number of pictures in extreme low light, taken without a tripod, yielding better detail than I expect my eyes would be able to discern (note: humans, compared to most animals, have excellent night vision).

http://www.flickr.com/search/?q=iso25600

If you're willing to sacrifice resolution you can get even more sensitivity, which enables you to do crazy things like shoot video of the Milky Way or a moonlit landscape. An experimental Canon video sensor shows this off:

http://petapixel.com/2013/03/04/canon-unveils-a-35mm-full-fr...

By comparison, film has quantum efficiency below 10%, at least according to Wikipedia. Photographers were quick to ditch film in the ISO 800+ range, and I rarely use film as fast as ISO 400 since digital is so much more sensitive.


Using the latest DSLRs you can film under moonlight now.

https://vimeo.com/21311814

What we need is lower noise.


Can you see how many light sources there are close to the beach in many of the shots? Watch the shape and size of the shadows that girl casts on the sand. Moonlight only? Not so much...


Are you kidding me? At 1m15s you can actually see the stars behind her.


So if only a 3x improvement is theoretically possible, how do you explain that the article talks about a 1000x improvement?


I think the GP's point was to call that claim into question. I'm sceptical about the 1000x improvement claim as well. CCD sensors manufactured for astrophotography achieve 60+% quantum efficiency.


Thank you for that explanation, very clear!


I think the efficiency of these sensors is already quite good; the latest generation of full-frame camera sensors operates well at lower light than you can see in, for example.

Fitting more of them into tiny arrays and maintaining exposed surface is more of a problem, and noise properties could be better.

At the end of the day though, there are limits to what can ever be achieved with tiny sensors like we currently see in most cell phones; you can pack things in more tightly but you can't avoid the optical physics.


From the article, it sounded like the improvement was more around better retention of photo electrons, rather than enhanced efficiency. That would mean potentially longer useful exposure times due to deeper wells.


> better retention of photo electrons, rather than enhanced efficiency

Same thing. http://en.wikipedia.org/wiki/Quantum_efficiency


Parent is not talking about the percentage of photons that are translated into charge, but about the amount of charge that can be held on a given area during exposure without saturating & spilling over to surrounding pixels. Deeper electron wells with the same amount of readout noise would increase the dynamic range of the image; I don't see how we can deduce that this was the stated benefit from the article though, or how that benefit could even be recognized given the early stage of the technology.
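
In numbers (a sketch with assumed, typical-looking values; not from the article):

    import math
    # Dynamic range in stops ~ log2(full-well capacity / read noise),
    # so deeper wells at the same read noise buy more highlight headroom.
    read_noise_e = 5                   # electrons RMS (assumed)
    for full_well_e in (30000, 120000):
        print(full_well_e, round(math.log2(full_well_e / read_noise_e), 1))
    # 30000 -> 12.6 stops, 120000 -> 14.6 stops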


Here's the PDF version of the paper, http://cdpt.ntu.edu.sg/Documents/ncomms%204%201811.pdf

This is very exciting and has multiple uses:

1 - a drastic improvement to P&S and mobile photography (now the largest source of photos). Most of these photos at night suffer from noise, harsh flashes, and significant red-eye.

2 - improving dynamic range in SLRs by providing two photodiodes per pixel: one high-sensitivity (large) and one low-sensitivity to preserve highlights (see the sketch after this list).

3 - providing a great boost to Micro 4/3 systems. The 4/3 format makes a great portable platform (size, weight, etc.) but suffers from tremendous shadow noise, which makes it less ideal as a replacement for full-frame (35mm-sensor) SLRs.

4 - might give new life to Lytro and similar systems. It would allow them to provide higher-resolution images, making them appealing to more demanding photographers.

5 - a worst nightmare for people concerned with government intrusion. It enables cameras to operate in dark streets or other places and keep people under constant watch, day or night.
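
For point 2, the per-pixel merge could be as simple as this (a hypothetical sketch; the gain ratio and clip level are invented):

    # Use the high-sensitivity diode's value unless it clipped, then
    # fall back to the low-sensitivity value scaled by the gain ratio.
    GAIN_RATIO = 16        # high/low sensitivity ratio (assumed)
    CLIP = 4095            # 12-bit saturation level (assumed)

    def merge_pixel(high, low):
        return float(high) if high < CLIP else low * float(GAIN_RATIO)

    print(merge_pixel(1000, 62))    # 1000.0: trust the sensitive diode
    print(merge_pixel(4095, 900))   # 14400.0: highlight recovered from the dim diode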


I wonder if it'll affect the 'look' of the image too (when you put all of these things together). I've never been a fan of the way digital looked (curse you Bayer filter!!), except for maybe the Foveon sensor now. I've actually been having a lot of fun shooting film lately, and scanning it. Skin tones baby!


With increased resolution, Bayer's side effects become less and less visible. Nikon has started to remove antialiasing filters from their recent cameras. Color analog film may look better SOOC for soe images, but you can exceed that in post-processing.


I guess the problem I still see is like in this image (film left, digital right): http://annawu.com/blog/wp-content/uploads/2011/03/81420015b_... The shadows just fall differently, the skin has a nicer tone (my opinion)... less harsh. I'm sure you could Photoshop/grade it to match more, but only to a certain extent.

I'm all for the convenience of digital, just wish they'd improve sensor technology rather than megapixels. But anyway, I'm getting a bit off topic...


Well the film image looks slightly out of focus and the lighting looks marginally different, so it is not the best comparison. But there are differences, although they are much more noticeable on larger than 35mm film - if you really want something with a different look shoot medium format or large format. You can't really do stuff like George Hurrell any other way http://georgehurrell.com/


I know SOOC, but "soe images"? Thanks


Some, I think they mistyped.


Just recall that less than ten years ago, nobody even knew of this material and it was only the subject of strange theoretical research at ivory tower universities.

Please continue to fund science. =)


Today a lot more people know of the material, but it is still known as the subject of strange theoretical research at ivory tower universities.

I 100% support the idea of funding science, and graphene is very exciting, but you still can't buy any of the wonder products it is going to be responsible for, and you won't be able to for many years yet, because nobody knows how to manufacture it at useful scale.

IMO this is one of the areas where commercial entities limiting or eliminating their long-term R&D in the name of short-term gains is hurting us. If there were more effective crossover between industrial manufacturers (which have mostly moved manufacturing offshore anyway) and theoretical research, there would be better cross-pollination of ideas that might help solve these problems more quickly.


The article says this new sensor is ~1000x more sensitive than the sensors in today's cameras. Comparing it with the extremetech article linked by chaz, it's clear what's happened:

The sensor is not ~1000x more sensitive than the sensors in today's cameras. It's ~1000x more sensitive than previous graphene sensors.

(And the 1000x improvement is when you measure amps of current generated per watt of incident light, which isn't necessarily the best measure of how well the sensor will actually perform.)
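
For reference, responsivity (A/W) and quantum efficiency are related by R = QE * q * lambda / (h * c), so you can sanity-check such claims (a sketch; the 550 nm wavelength is just an example):

    # Max responsivity of a gain-free detector at QE = 100%:
    q = 1.602e-19        # electron charge, C
    h = 6.626e-34        # Planck constant, J*s
    c = 3.0e8            # speed of light, m/s
    wavelength = 550e-9  # m (green light, example)
    print(round(q * wavelength / (h * c), 3))   # ~0.443 A/W
    # Detectors quoting far more A/W rely on internal gain,
    # which is why A/W alone says little about image quality.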


Might benefit the casual snapshot photographer, but pros use flash more for fill-in lighting when there are sharp shadows and for freezing motion e.g. in sports photography.


Avoid making a commotion, just as you wouldn’t stir up the water before fishing. Don’t use a flash out of respect for the natural lighting, even when there isn’t any. If these rules aren’t followed, the photographer becomes unbearably obtrusive. - Henri Cartier-Bresson

"American Photo", September/October 1997, page: 76 http://www.photoquotes.com/showquotes.aspx?id=98

http://en.wikipedia.org/wiki/Henri_Cartier-Bresson


There is an entire category of pros--documentary photographers--who love high ISOs and will love higher ISOs even more. By comparison, casual snapshot photographers are more likely to use on-camera flash in low light conditions. But that's just my personal observation.


Sounds interesting although I suspect it's the fact that the sensor uses graphene that makes this a news item.

I wonder how fast a sensor like this would be. If it's so sensitive, I'm guessing you could build a super-fast camera, which is something engineers are always looking for. For our high-speed applications with microsecond exposure times, it's hard to get enough light and fast enough shuttering. Even if this thing isn't going to replace CMOS and CCD, it could be interesting for high-speed metrology.
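
The light starvation is simple scaling (a sketch; the 1/60 s reference exposure is arbitrary):

    # A 1 microsecond exposure collects ~16,667x fewer photons than a
    # 1/60 s exposure at the same illumination and aperture.
    ref_exposure = 1 / 60      # s
    fast_exposure = 1e-6       # s
    print(round(ref_exposure / fast_exposure))   # 16667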


If it gathers light in the infrared spectrum, how is noise eliminated? Objects at ambient room temperature are blackbodies whose emission peaks in the infrared. I'd like to see more on the filtering that has to go into images taken with such a sensor.


"infrared" is actually an extremely large portion of the spectrum when viewed on a proper log scale. It's a poor label because it often leads to this confusion.

400nm to 700nm is 'visible light', a factor of 1.75x; infrared is 700nm to 1mm, a factor of ~1428x.

Only a small portion of this is significant for thermal infrared applications, which at room temperatures are about 8um-15um.
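
Wien's displacement law pins down that peak (a quick check, not from the thread's sources):

    # Blackbody peak emission wavelength: lambda_peak = b / T
    b = 2.898e-3                  # Wien's constant, m*K
    for t_kelvin in (293, 310):   # room temperature, body temperature
        print(round(b / t_kelvin * 1e6, 1))   # micrometres
    # ~9.9 um and ~9.3 um: squarely inside the 8-15 um thermal band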

Near infrared, up to about 1500nm, acts pretty much like light we can't see, and can be detected well on the same CCD/CMOS sensors (which need filters to block it out).


Practically every CMOS and CCD camera today has an infrared filter window since the sensors are particularly sensitive in infrared. Homemade night-vision cameras usually involve removing that filter.


But then how do I use my phone as a flashlight?


I actually read this (in my mind) as: "No need for Flash plugins to connect to a laptop's camera"...



