Color Night Vision (2016) [video] (kottke.org)
625 points by Tomte on April 7, 2017 | 156 comments



There is much discussion here regarding quantum efficiency (QE). Keep in mind that figures for sensors are generally _peak_ QE for a given colour filter array element. These can be quite high, like 60-70%.

But - this is an 'area under the graph' issue. While it may peak at 60%, it can also fall off quickly and be much less efficient as the wavelength moves away from the peak for, say, red/green/blue.

From what I can tell from the tacky promo videos, the sensor is very sensitive for each colour over a wide range of wavelengths, probably from ultraviolet right up to 1200nm. That's a lot more photons being measured in any case, but especially at night.

Their use of the word 'broadband' sums it up. It's more sensitive over a much larger range of frequencies.

I also wouldn't be surprised if they are using a colour filter array with not only R/G/B but perhaps R/G/B/none or even R/IR/G/B/none. The unfiltered pixels would bring in the high broadband sensitivity while the other pixels provide colour - you don't need nearly as many of those.
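
To make the 'area under the graph' point concrete, here's a toy numpy sketch; both QE curves are made-up Gaussians, not data for any real sensor:

    import numpy as np

    # Total signal ~ integral of QE(lambda) * incoming photon flux, so a
    # modest but broad QE curve can collect more photons overall than a
    # high, narrow one. Both curves below are invented for illustration.
    lam = np.arange(350.0, 1201.0, 1.0)                        # wavelength, nm
    narrow = 0.70 * np.exp(-0.5 * ((lam - 530) / 60.0) ** 2)   # peaky RGB-style QE
    broad = 0.55 * np.exp(-0.5 * ((lam - 700) / 250.0) ** 2)   # broadband QE
    flux = np.ones_like(lam)                                   # flat night-sky flux

    print("narrow-band signal:", round(np.trapz(narrow * flux, lam)))
    print("broadband signal:  ", round(np.trapz(broad * flux, lam)))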

Edit - one remarkable thing for me: based on the rough size of the sensor and the depth of field in the videos, this isn't using a lens much faster than about f/2.4. You'd think it would be f/1.4 or thereabouts to get way more light, but there is far too much DoF for that.


It would be interesting to see how this compares to theoretical limits. At a given brightness and collecting area, you get (with lossless optics) a certain number of photons per pixel per unit time. Unless your sensor does extraordinarily unlikely quantum stuff, at best it counts photons with some noise. The unavoidable limit is "shot noise": the number of photons arriving in a given time is Poisson distributed, so the noise scales as the square root of the photon count.
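
A minimal sketch of that shot-noise limit, assuming an ideal photon counter (numpy, illustrative numbers only):

    import numpy as np

    # Photon arrivals are Poisson distributed: for a mean of N photons per
    # pixel per frame the standard deviation is sqrt(N), so the best possible
    # SNR is N / sqrt(N) = sqrt(N), no matter how good the sensor is.
    rng = np.random.default_rng(0)
    for mean_photons in (4, 100, 10_000):
        counts = rng.poisson(mean_photons, size=100_000)
        snr = counts.mean() / counts.std()
        print(f"{mean_photons:>6} photons/pixel -> SNR ~ {snr:.1f} "
              f"(theory {np.sqrt(mean_photons):.1f})")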

At nonzero temperature, you have the further problem that your sensor has thermally excited electrons, which aren't necessarily a problem AFAIK. More importantly, the sensor glows. If the sensor registers many of its own emitted photons, you get lots of thermal noise.

Good low noise amplifiers for RF that are well matched to their antennas can avoid amplifying their own thermal emissions. I don't know how well CCDs can do at this.

Given that this is a military device, I'd assume the sensor is chilled.


Hang out with astronomers; they do this stuff all the time: Poisson noise, dark current, readout noise, readout time, taking thousands of shots and stacking them while accounting for the previous factors.


DSLRs have come with dark current removal for a few years now. Another fun fact: the sensor used in Canon's G15/G16 has a quantum efficiency of over 60%, meaning that 60% of photons reaching the sensor generate a signal. Most cameras have sensors with QEs between 15 and 25% (but have way larger sensors, so in the end a better SNR than the G16).


I should hope that they subtract off the dark current, astronomers have been doing that since the first time they used a CCD. Dark current is a big limitation to seeing dim things.

That's a much higher QE than the good old days, wow.


Here's a list of QE numbers: http://www.sensorgen.info/

I knew it was over 60%, and I remembered (wrongly) the G15 and G16 as having the same sensor. Welp, 77% QE for the G16 and 67% for the G15. There are others with higher QE, though in general those are found in smaller sensors.

DSLRs, while having lower QE numbers, have much larger sensors, and hence a better image quality.


That's incredibly useful, thanks. The only problem is, I don't trust the numbers. How does the Nikon D2X get a QE of 476%?


What does a QE of >100% mean ?

(Sony) Alpha-900 234% 3.6 892070

(Pentax) Q 101% 1.6 8429

There's a few more.


According to Wikipedia "in the event of multiple exciton generation (MEG), quantum efficiencies of greater than 100% may be achieved since the incident photons have more than twice the band gap energy and can create two or more electron-hole pairs per incident photon." https://en.wikipedia.org/wiki/Quantum_efficiency

Pretty fascinating!


In some sense, this is useless. A sensor that counts 50% of incoming photons twice each has more shot noise than a sensor that counts each photon once.
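
A toy simulation of that point, assuming Poisson photon arrivals (the numbers are arbitrary):

    import numpy as np

    # Sensor A detects every photon once; sensor B detects only half of the
    # photons but counts each twice. Both report the same mean signal, but
    # B's relative noise is worse by a factor of sqrt(2).
    rng = np.random.default_rng(1)
    N = 1_000
    a = rng.poisson(N, 100_000)              # every photon counted once
    b = 2 * rng.poisson(N / 2, 100_000)      # half the photons, counted twice
    print("A: mean %.0f, SNR %.1f" % (a.mean(), a.mean() / a.std()))
    print("B: mean %.0f, SNR %.1f" % (b.mean(), b.mean() / b.std()))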


I would take this list with a grain of salt. There are several entries with a QE in the nineties (e.g. the Leica C with 95%), which seems implausible to me. Even more, further down the list are some entries way over 100% (e.g. the Nikon D2X with 476%).


Can't edit now, but dark current removal isn't as widespread as you think it should be. Most cameras don't do it at all; instead, when the exposure time goes over a threshold (say, longer than a 1 sec exposure), the camera takes another photograph with the shutter closed and then subtracts that dark frame from the original picture, which is still in memory. This is quite useful actually, but not the same quality as dark current removal/suppression.
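
Roughly what that long-exposure dark-frame mode amounts to, as a hypothetical numpy sketch (array sizes and noise levels are invented):

    import numpy as np

    # Take one exposure of the scene and one of equal length with the shutter
    # closed, then subtract. The dark frame removes the repeatable dark-current
    # pattern, but it adds its own read noise, which is why it is not as good
    # as real dark-current suppression on the sensor itself.
    rng = np.random.default_rng(2)
    shape = (480, 640)
    dark_rate = rng.gamma(2.0, 5.0, shape)               # per-pixel dark signal
    scene = rng.poisson(20.0, shape).astype(float)       # dim scene, in electrons

    light_frame = scene + rng.poisson(dark_rate) + rng.normal(0, 3, shape)
    dark_frame = rng.poisson(dark_rate) + rng.normal(0, 3, shape)
    calibrated = light_frame - dark_frame                # dark pattern cancels out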

As for those pointing out the over-100% QE numbers... I didn't gather those; all I can say is that a few years back that list was accepted as generally reasonable measurements.

Besides that, I've read of some imaging techniques which can form images with a really low number of photons (for example [0]), but those generally require a special setup (infrared light, lasers, that sort of stuff). So yeah, those QEs do seem fishy, but it could also be the result of a photon affecting more than one photosite (or maybe the marketing team hearing about QE and finding a way to raise that number through "smart counting").

[0] http://www.nature.com/articles/ncomms12046


One of the professors in my former department was an astronomer from the US Naval Observatory. He's applying techniques he developed (along with other techniques astronomers use) to get better resolution MRI images.


Theoretical limits are probably very high. For one, collecting area can be pretty large. Obviously you don't want a square meter lens, but a foot-wide one would probably be acceptable for a specialized camera.

Second, nocturnal animals like howls see very well in the dark. Granted, they don't see colors but they still show that in theory seeing in the night is physically impossible.

PS. Regarding the collecting area, something I've been wondering about for a bit: isn't the number of photons a lens can capture proportional to the square of the surface area rather than the surface area itself? I know that sounds counter-intuitive, but with interference, quantum mechanics and stuff, I checked the math once and I could not see where I was wrong.


> Second, nocturnal animals like howls see very well in the dark.

True, but that's because as far as we know they evolved on a very low light world orbiting a red dwarf star. They're not actually nocturnal on their homeworld.

They also supplement their incredible night vision with sonar that's in the audible range for humans, hence their name.


I don't know what you mean by "surface", but the number of photons collected depends on the area of the lens. No quantum mechanics or interference involved.


I did mean area. And I thought the number of photons was proportional to the square of the area because the probability amplitude is proportional to the area. Therefore the probability should be proportional to the square of the area, shouldn't it?

It also seemed to make sense to me otherwise we would not need to build large telescopes, we could just build lots of small ones and fuse the images.


No, the probability is proportional to the area, not the amplitude. If a sensor of size A sees N photons, a sensor of the same size next to it sees another N. Fusing these two sensors together sees 2N, not 4N. Otherwise, you violate energy/momentum conservation.


I love this kind of explanation. You can skip a lot of hard math with a simple counterexample that shows the absurdity of the proposed theory.


We do build small(er) ones and fuse the images (using interferometry), that's what many large telescopes are these days. In the radio, we've been mostly doing it that way since 1978 (VLA).


Interferometry, as its name indicates, uses interference. So it adds probability amplitudes.


... we use interferometers because we want the added resolution, not just the additional photons. If you don't want extra resolution, you can just add up the images from all of the smaller telescopes.

Light is a wave and a particle, and if you are getting wildly different answers from thinking about it as a wave and as a particle, and you're looking at a macro and not micro scale, then you're doing it wrong. That's why I answered your wave question with a photon count answer.


> That's why I answered your wave question with a photon count answer.

But how do you count photons without using probability amplitudes? If you count them by using a classical reasoning of photons being small particles falling from the sky, I'd say you're doing it wrong, because photons are not classical particles.


CCDs count photons in a particular fashion, and it happens to involve individual photons doing things. You might think of it as photons knocking electrons off of atoms, but it's actually semiconductors with a narrow bandgap, so CCDs work at much lower energies than needed for ionizing radiation.


> the probability should be proportional to the square of the area, shoudn't it?

No.


What's wrong with the reasoning, then? Isn't the probability amplitude proportional to the area?


It's proportional to the area, which is already a term encompassing length^2.


If the probability amplitude is proportional to the area, it doesn't matter if that term already encompasses a length² term. QM tells you then that the probability should be proportional to the square of that amplitude, even if that means it would encompass a length^4 term.


And if we change to "proportional to the area of the lens, not the sensor" ?


Nope, nothing to do with that. A 1 m diameter lens at f/10 will be less bright than a 10 cm diameter lens at f/1.4.



Not sure why I cannot reply to your other comment. Anyway, astronomy has nothing to do with the scarcity of photons. If you take a picture of M31 you have a relative abundance of photons, with most of them not being point sources. If you look at any planet or globular cluster or nebula it is the same. Only when you are observing single stars do you have that problem. And single-star observations are for sure not the whole of astronomy.


HN requires a certain delay before responding, to encourage people to think first.

Many distant things, like galaxies, are very dim. M31 is almost visible to the naked eye, which is not dim to an astronomer. You might want to study astronomy before having strong opinions about it.


Seriously, are you asking me to "study" astronomy? I was doing astronomical research (planetoids, variable stars, supernovae) 20-something years ago. And M31 is not "almost" visible to the naked eye; it is perfectly visible unless you live under a light-polluted sky. The objects that I captured were, I think, just below the 17th magnitude. As I said before, extremely dim objects cannot be seen simply by increasing the aperture, because that is helpful only for point sources. The Hubble Space Telescope captured some of the dimmest objects in the universe, with a "very small" mirror compared to some much bigger ones on this planet.


I'm aware HST has a relatively small diameter (2.4m), but how much of its resolving power (for lack of a better phrase) can be attributed to it being above the atmosphere? I've also heard that modern adaptive optics systems do a good job of mitigating atmospheric effects (turbulence?) for ground based telescopes -- I wonder if a ground-based "Hubble deep field" image could be generated?


Most of the HST's resolving power is due to it being outside the atmosphere. Adaptive optics have been improving over the last 20 years if I recall correctly, but even with the advancement in lasers to generate improved artificial stars and in segmented mirrors to give better atmospheric correction in real time, they still can't do anything about the loss of transmission, especially in wavelengths outside the visible light window. The fact that with a "small" diameter in nearly-ideal conditions you can get better results than much bigger apertures on Earth is still a testament that aperture alone can't do miracles, especially when there is a lot of noise. And don't forget that the HST optics are flawed. Before the servicing mission that added the correction we were relying on software adjustments. And even with that huge handicap the results were nothing short of amazing.


You might want to read the rules of HN more in depth. There's a specific call-out for comments like your last sentence.


That is valid only for point sources. Stars are not points when seen in a telescope because of Airy disks, atmospheric scattering and imperfections in the optics. If you have perfect optics, you are outside the atmosphere, and the only limit is the Airy disk, then yes, for point sources you don't have that problem. Now, are you saying that a valley is a point source?


If you look back, I made a statement about the number of photons collected. I totally understand that camera people, who have a shitload of photons available most of the time, don't think about them like astronomers do. But if it's very dark, it becomes astronomy.


Erratum: instead of "they still show that in theory seeing in the night is physically impossible.", read "they still show that in theory seeing in the night is NOT physically impossible."


The resulting brightness has nothing to do with the lens size. The important factors are the focal ratio and the sensor size. Obviously the sensor should have an appropriate size to receive all the focused light otherwise it's just wasted.


C'mon HN this guy is right, you can't apply Gauss's Law with a sphere when you have photons from every angle.

However I'm not sure whether the total number of photons collected is the right thing to be measuring. Aren't there serious aberrations at high magnifications in practice?

And aren't there, y'know, glowing objects in the night sky? Stars, I believe? The moon? I don't know how many photons there are, but a single green photon carries a minuscule 376 zeptojoules and I don't think my eye responds to anything below the picojoule range. Counting photons and looking for the information limit seems a little extreme in this, er, light.

I wonder whether you could use this technology for medical imaging if you shield the camera from any light that isn't being transmitted through the body. The possibility of recovering color information is exciting.


Surely 'nothing' is an exaggeration. In the limit of the lens size going to zero, the brightness also goes to zero. (Ah, you're also scaling other things in your mind I guess, so then this is not true.)

If you keep the setup such that all the light that comes through the lens will get to the sensor, and keep the sensor the same size, a larger lens would mean a higher brightness, right? Maybe in practice you're limited by other factors, but 'nothing' can't be right. Especially when we're talking about theoretical limits.

Edit: I think what you're saying is that the lens doesn't matter for the brightness if you scale the focal length and sensor size in such a way that the brightness stays the same. Which is obviously true, but if you leave out the last part, it isn't.


A lens with a shorter focal length will produce a smaller projected image than an otherwise identical lens with a longer focal length. A lens with a focal length of 1 m and an aperture of 50 cm will produce a projected image 4 times bigger than a lens with a focal length of 50 cm and 25 cm of aperture. So even though the amount of light captured in the first case is 4 times greater, the effective brightness is exactly the same, because it is spread over an area 4 times bigger compared to the second lens. For this reason the only way to increase the brightness is to reduce the focal ratio; simply increasing the aperture while maintaining the same focal ratio won't help. Obviously if you have an aperture of size 0 then all the light is wasted, because you can't have a sensor of size zero. You can see this concept in the real world in the abysmally small lenses of smartphones, which have the same brightness as the much bigger lenses in SLR cameras or in refractor telescopes.
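
A back-of-envelope version of that argument (plain Python, no claims about any specific optics):

    # Light gathered scales with aperture diameter squared, but the projected
    # image area scales with focal length squared, so flux per unit sensor
    # area goes as (d/f)^2 = 1/N^2 where N is the focal ratio.
    def relative_image_brightness(aperture_m, focal_length_m):
        return (aperture_m / focal_length_m) ** 2   # = 1 / f-number^2

    print(relative_image_brightness(0.50, 1.0))     # 50 cm aperture at f/2
    print(relative_image_brightness(0.25, 0.5))     # 25 cm aperture at f/2 -> same
    print(relative_image_brightness(1.00, 10.0))    # 1 m aperture at f/10 -> dimmer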


An important difference between animal (and our) eyes and camera sensors is that the biological systems use logarithmic intensity in signaling, not linear.

Two pulses encode a signal that's a multiplicative factor brighter than a one-pulse signal.

You see this limitation in the video as well. In the CCD footage, the one with some vision and color but huge amounts of noise, I feel you can see that it's clamping to some limited range. Increasing its range would improve things a lot.


I can see in a dark lab with my EMCCD camera. It doesn't look as good as in this video, but they got maybe a 3x improvement compared to what I can do at present, with minimal effort.


On the manufacturer's website there is a comparison between their X27 and the competition. EMCCD takes a fair 2nd place in this contest. It has more noise than the X27, but what you can get is still remarkable.


> Given that this is a military device, I'd assume the sensor is chilled.

There's a promo video of the manufacturer. At 0:16 you can see something looking like cooling fins on the side. https://youtu.be/c_0s06ORTkY?t=16s


It's not enough to do crazy quantum stuff in the camera, the light source would have to be similarly manipulated in a quantum way as well.

My understanding is that the human eye has a surprisingly high quantum efficiency (about 12 to 30 % of photons are detected), and that the reason night looks dark is because there really aren't that many visible spectrum photons around.

My guess is that this camera is physically enormous.

Edit: Apparently, it's really small! I am dumbfounded. Then again, I guess it could have a hundred times the area of a human pupil and still be pretty small.


Humans can actually see well in the dark once adjusted, but generally without colour (due to using rods rather than cones in the eye). The colour is probably what makes this video appear incredible to us, as it's not how we are used to perceiving darkness (even with e.g. night vision).

As to the technical question, according to this you need 90 photons for a 60% chance at a human response:

http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.h...


> It's not enough to do crazy quantum stuff in the camera, the light source would have to be similarly manipulated in a quantum way as well.

Not entirely true. Assuming an incoherent source, conventional optics, and a well-defined frame (i.e. you measure for a time t and the source is unchanging for the whole time), you have a bunch of modes of incoming light that would hit each pixel. Measuring these modes in the photon number basis isn't optimal. The ideal measurement is probably something quite different. It may also depend on your exact assumptions about the source.


Ultimately you want to count photons to produce an image. And if it's an incoherent source, I don't think there's anything else you can do with it.

With coherent light it can be useful to have the light interact with matter in a way that depends on the phase of the light. But if the light is incoherent then that won't yield anything useful. And this still doesn't beat the shot noise limit - your phase measurements still come from counting photons subject to shot noise.

Then there's polarisation. Two polarisation states. But really that's just like saying there are two kinds of photons. Both polarisation states are subject to number-phase uncertainty principle.

I think that's it. There's the number and phase, which are conjugate variables. You can make number more accurate by squeezing phase, but only if you control the light source. And then there's the two polarisation states.

And the EM field simply doesn't have any more degrees of freedom than that, so I think that's it unfortunately.

It's number-phase squeezing or nothing if you want to count photons more accurately than shot noise.


You are missing something though: Using entangled detectors you can surpass the diffraction limit. Googling "quantum internet and diffraction limit" would bring up some more information. Either way, this would apply to resolution, not to sensitivity (which is what you seem to be covering)


Better video from their website comparing to other cameras:

https://www.youtube.com/watch?time_continue=328&v=c_0s06ORTk...

Anecdotal evidence on the internet suggests it's around £6k, but that seems far too low.


What's the point of SWIR?


There was a discussion in another thread (forgot where I saw it) where this exact same topic and video were discussed, and the summary is that they seem to have completely messed up their SWIR setup. It's pretty apparent: the technology cannot be as bad as what they show (nothing at all) or it would not exist.

Here is SWIR used to see through smoke:

- https://www.youtube.com/watch?v=GUUIgBut8RU

- https://www.youtube.com/watch?v=keRxJg-gjLE

- LWIR, MWIR, SWIR (long/medium/short wavelength infrared): https://www.youtube.com/watch?v=3pfzO26a21c and https://www.youtube.com/watch?v=iV4hNzDJbF0

- Overview SWIR cameras and applications: https://www.youtube.com/watch?v=Vi0x7D5u7Dk

Use cases for such cameras go way beyond night vision, see @5:13 in the last video for a list.


One would think with all the money the military throws into imaging technology that they would already have this.

For Special Operations use, it'd be nifty to have this technology digitally composited in real-time with MWIR imaging on the same wearable device. Base layer could be image intensification with this tech, then overlay any pixels from the MWIR layer above n temperature, and blend it at ~33% opacity. Enough to give an enemy a nice warm glow while still being able to see the expression on their face. Could even have specially made flashbangs that transmit an expected detonation timestamp to the goggles so they know to drop frames or otherwise aggressively filter the image.
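
A sketch of that blending rule; the function name, the 30 °C threshold and the plain red overlay are placeholder choices, not anything a real device does:

    import numpy as np

    # Base layer: low-light colour image (H, W, 3) in [0, 1].
    # Overlay: wherever the MWIR layer is above a temperature threshold,
    # alpha-blend a warm highlight at ~33% opacity.
    def fuse(lowlight_rgb, mwir_celsius, threshold_c=30.0, alpha=0.33):
        hot = mwir_celsius > threshold_c                 # boolean (H, W) mask
        glow = np.zeros_like(lowlight_rgb)
        glow[..., 0] = 1.0                               # warm glow rendered as red
        fused = lowlight_rgb.copy()
        fused[hot] = (1 - alpha) * lowlight_rgb[hot] + alpha * glow[hot]
        return fused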

Add some active hearing protection with sensitivity that far exceeds human hearing (obviously with tons of filtering/processing), and you're talking a soldier with truly superhuman senses.

That's not to mention active acoustic or EM mapping techniques so the user can see through walls. I mean, USSOCOM is already fast-tracking an "Iron Man" suit, so I don't see why they wouldn't want to replicate Batman's vision while they're at it.


> For Special Operations use, it'd be nifty to have this technology digitally composited in real-time with MWIR imaging on the same wearable device. Base layer could be image intensification with this tech, then overlay any pixels from the MWIR layer above n temperature, and blend it at ~33% opacity. Enough to give an enemy a nice warm glow while still being able to see the expression on their face

See the two images of an SUV (the top one has caption "Color Low Light Night Vision Midnight Image Fused with thermal infrared RED ALERT FLIR Image") on the product page [1].

[1]: https://www.x20.org/color-night-vision/


Nice. Any idea if it's digitally composited? I've seen image intensification/IR fusion implemented before with analog techniques and wasn't impressed; resulting image had a high amount of jitter and felt shoddy. Perhaps it was just that manufacturer's implementation.


> For Special Operations use, it'd be nifty to have this technology digitally composited in real-time with MWIR imaging on the same wearable device. Base layer could be image intensification with this tech, then overlay any pixels from the MWIR layer above n temperature, and blend it at ~33% opacity. Enough to give an enemy a nice warm glow while still being able to see the expression on their face.

I believe this is what CAT did with their S60 phone camera - fusing together the image of the thermal sensor and the camera to have a high-resolution thermal image even without an expensive-as-hell sensor.


> One would think with all the money the military throws into imaging technology that they would already have this.

They do... the OP points to a product sold to military.

The US government develops its military technology by giving companies like this R&D grants and then buying their product.


Can someone wake me up in the future, when we have digital eyes and we can walk around at night as if it were day, except the stars would be glittering? Sometimes I'm so sad to know I'll not live to know these things, and I'm incredibly envious of future generations.


Be proud to know you helped hold the world together long enough for our descendants to get there.


I don't think there are a lot of people this applies to, and the world doesn't currently feel like it's being held together at all.

It rather feels like it's rapidly coming apart, but that's seemingly okay for many people because you can just have a short position...


Try taking a break from reading/watching the news for a few weeks.


I dunno man, things seem really bright to me.


Thanks, stranger. You just made my day.


>I'm incredibly envious of future generations..

All things considered, you might not need to...


Your comment is the dual to the nostalgia of Woody Allen's Midnight In Paris.

You're already in the future, man!


You can get somewhat close to that with a Sony a7s these days: https://vimeo.com/105690274


I was about to post the same thing. It's not perfect, but on the other hand it's been out for over a year (so the firmware is stable by now) and you can pick it up for around $2000, maybe less. I would seriously consider it if I were shooting an indie movie, because what it sacrifices in image quality (not very much) is more than made up for by the ability to shoot without flood lights. You might still use small lights in a narrative context, but for artistic purposes rather than just to illuminate your work area. Unless you've worked on a film you have no idea how much hassle shooting at night is.


Impressive, for sure, but check the colors in the linked video as well (the one you linked is mostly grayish), and note that it was just starlight that made it work.


That grayishness is due to the S-Log tonal compression, not the sensor.


Yes, pity it's Sony. I went to hook my wife's Sony camera up to wifi the other day. It requires a kernel extension for my Mac!

Ugh, no thanks.

I keep running into this sort of crud from Sony.

https://www.schneier.com/blog/archives/2005/11/sonys_drm_roo...

Do they ever learn?


They list a lot of potentially useful applications on the product's own web site. I wonder how long it will take for this sort of technology to be commercially viable for things like night vision driving aids. High-end executive cars have started to include night vision cameras now, but they're typically monochrome, small-screen affairs. I would think that projecting an image of this sort of clarity onto some sort of large windscreen HUD would be a huge benefit to road safety at night. Of course, if actually useful self-driving cars have taken over long before it's cost-effective to include a camera like this in regular vehicles, it's less interesting from that particular point of view.


Two thoughts come to mind:

1. It would be nice to see a split screen against a normal view of the scene as it would be seen by the typical naked eye.

2. Our light pollution must SUCK for nocturnal animals that see well at night.


It's likely pitch black to the human eye, except for the stars. We have the Sony A7S, which goes to ISO 409,600, and it already is close to night vision, way more sensitive than my eyes, especially paired with an f/1.4 lens.

This camera goes to ISO 5 million, which is awesome.


Well there is this side-by-side comparison with other leading lowlight technologies https://www.youtube.com/watch?v=c_0s06ORTkY


For the first case, I think you want a view in normal lighting conditions, not the low-light view of the scene: what the scene would look like well lit, in regular balanced full color (cloudless summer noon daylight or similar), next to the enhanced-color low-light version (overcast, moonless starlight).

A split-screen view, with half the screen in pitch black wouldn't illustrate much.


That really is incredible. I wonder how they keep the noise level down and if the imaging hardware has to be chilled and if so how far down. Pity there is no image of the camera (and its support system), I'm really curious how large the whole package is. It could be anything from hand-held to 'umbilical to a truck' sized.

Watch when the camera tilts upwards and you see all the stars.


There's plenty of pictures of the camera and more info at the linked site: https://www.x20.org/color-night-vision/


What an amazingly small package. Thank you, I totally missed the link.

So, if I get this right this uses residual light from the stars and other sources to do the imaging, and judging by the speed of the update it's doing that with so short an exposure that you'd never even realize that if it wasn't mentioned.

That's one hell of a sensor, and probably quite a bit of on-board post-processing.

There is a starlight video on that page:

https://youtu.be/RbD9E6YyLbA


I'd like to see that filmed under a really dark sky. There was a lot of skyglow in there that interfered with the capture. But even so, I thought I recognized Orion, with the Great Nebula clearly visible at 0:30 (the linear series of smudges of light below the 3 bright stars sitting in a straight diagonal line)

You can see that nebula with the naked eye, but you need a pretty dark sky. I'm pretty sure it was not naked-eye-visible under that sky. Awesome camera.


They also mention something about the CCD being "large pixel pitch" or something to that effect, to gather more light per imaging element. This would explain the size of the camera, because we know that a 10 megapixel camera can be small enough for a phone. Make the pixels much bigger, and the whole array much larger (instead of something the size of a pencil eraser, for instance, make it many times larger - perhaps 400mm per side or something).


It's not that big actually:

https://www.x20.org/wp-content/uploads/2015/09/true-color-ni...

Maybe 6" horizontal and 4" vertical and the sensor smaller than that.

https://www.x20.org/wp-content/uploads/2015/09/IMG_0041.jpg

Judging by the pitch of the pads at the bottom that's about 2" wide and 1.5" tall.


They mention it's large format so over 100 mm sensor. https://en.m.wikipedia.org/wiki/Large_format


It's not actually large format. Fabrication would be impossibly expensive.


You could have multiple 35 mm format sensors, a 3x3 grid for example.


My biggest question after seeing that video is what use is SWIR?


Mentioned elsewhere, but it was stated that SWIR was likely not set up correctly. Many of the examples I've seen online since were not "low light" but "partially obscured" (aka smoke), so it might not be targeted at extreme low-light settings (I haven't looked into it, so I could well be wrong).


I'd guess the manufacturer sued and made them edit out the imagery in those segments.


... X20? Hmmm, that brings back memories of another camera business called X10 ... remember those?

They sold tiny wireless spy cameras or something, but most famously, were at one point almost synonymous with "Internet banner ads".

I remember a song "We must destroy X10!", which was protesting against the rising ubiquitousness of Internet banner ads (in general, although the X10 ads really were everywhere back then).


That is beautiful.

If they can increase the dynamic range to bring detail to the highlights it is basically perfect.

I've never seen a valley look like that with a blue sky above with stars in it. Truly incredible.

The 5M ISO rating is pretty funny. 1/40 f1.2 ISO 5M.


The dynamic range is so good that it will work in daylight. They even point it directly at the sun. There's some blooming, but not much.


Yes I was impressed by the sun shots. That was really incredible.

However at night the headlight fan on the road is detail-free. This could be improved.

That was what I meant saying the DR could improve.

Thinking about it some more from a technical perspective (and I know absolutely nothing about the technology in this field): the camera really suffers from strong point light sources (headlights, streetlamps, and so on), and these sources often have specific known spectral characteristics (color temperature and so on). I'm wondering if a way to improve performance here is to somehow detect the spectral signatures of a range of such light sources and cap the contribution they can make to the brightness in the image, preventing those highlights from burning out the details and increasing the apparent dynamic range - even when the camera is pointed down the road ahead of a vehicle with its headlights on.
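
One crude version of "capping" strong sources is ordinary highlight compression; a sketch below (pure guesswork, nothing to do with the X27's actual pipeline):

    import numpy as np

    # Keep the dark end linear and roll off everything above a knee with a
    # soft curve, so headlights and streetlamps stop burning out the frame
    # while shadow detail is left untouched.
    def compress_highlights(image, knee=0.7):
        out = image.astype(float).copy()
        bright = out > knee
        out[bright] = knee + (1 - knee) * np.tanh((out[bright] - knee) / (1 - knee))
        return np.clip(out, 0.0, 1.0)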



Thanks! Just watched your link now. The camera is so sensitive that it matters whether the starlight is "overcast" or not. Which I found amusing.

I like the production values of that video. The (appropriately, for this product) triumphant music. The long blank-face titles with abstract technical terms, concrete locations and hard-boiled scene descriptions: "Human in desert". Then cut to some shot of guy walking away from the camera in the desert alone under starlight. In full color. Sort of new wave cinema.

Not at all suggesting any fakery with this next comment but I am wondering how the humans in the videos didn't trip over bushes and things. Maybe they edited that stuff out, or their natural night vision was warmed up.


They say it's hybrid IR-visible. I wonder if the trick is to use IR as the luma and then chroma-subsample by having giant pixels to catch lots of photons.


"an effective ISO rating of 5,000,000"

Holy shit, my Canon 600D is pretty bad at 2500, goes to crap at 3200, and 6400 is an absolute noise mess…


The Sony A7S Mk II can do an effective ISO close to 4096000.

http://www.kenrockwell.com/sony/a7s-ii.htm

EDIT: fixed the actual ISO.


That link says the ISO goes to 409,600, not 4,096,000.


This is an amazing device. I've taken night photos that look like frames of this movie on my digital camera, but they require a 60-second exposure and a tripod, and they're -- still frames.


With that knowledge, we can do a back of the envelope calculation for what kind of lens it would take to make your camera take a video at 10 frames per second that looked the same as its 60 second exposures.

Basically we need 600 times more light. Square root of 600 is about 25. So you'd need a lens system with an initial aperture 25 times larger in diameter than the one you used to do the long exposures.

Big lens, but if you want to brute force this kind of problem, that's the way to do it!
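
The same arithmetic, spelled out:

    import math

    # 60 s exposures vs 1/10 s video frames: each frame gets 600x fewer
    # photons, and photon collection scales with aperture area (~diameter^2),
    # so the aperture diameter has to grow by sqrt(600) ~ 24.5x.
    exposure_ratio = 60 / (1 / 10)
    diameter_scale = math.sqrt(exposure_ratio)
    print(exposure_ratio, round(diameter_scale, 1))     # 600.0 24.5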


Cheaper to use a telescope.

It's easily possible to grind your own big telescope mirror. A big lens is like trying to build a big refracting telescope -- extremely expensive. The largest refracting telescope ever made is 100cm (40 inches). Mirrors are up to 10 meters.


Most telescopes are extremely dark compared to "normal" camera lenses. The brightest that I remember is the so-called Schmidt camera at f/2.8. Normal Newtonians are usually in the f/4-f/5 range. Cassegrains and derived designs are around f/10. The big aperture in telescopes is needed for angular resolution, certainly not for brightness.


The general physics is that the more photons there are (big aperture and/or bright source), the more opportunity you have to have high resolution.


Yes, the resolution is given by the diffraction limit of the optics, which depends on the diameter. But resolution is different from brightness. If you tried to take a video of the same scene with a telescope, chances are the final result would be darker, even using the same camera. Granted, it would have a much, much better resolution (but a much, much smaller FOV).


So you think astronomers are too stupid to trade resolution for brightness? Hint: downsample your image: you won't lose too much as long as the dark current isn't too large.


They have completely different use cases. In one case you are making a video of a scene with a FOV of several tens of degrees. In the other you are observing objects with a maximum size of a couple of arcminutes, with features measuring tiny fractions of an arcsecond.


As I said before, the aperture of a lens has nothing to do with the final brightness. You can have a lens 100 times bigger, but if the original lens is f/2 and the humongous one is f/2.8, the first one will be brighter.


nice pun


My brother in law experimented with this camera a few years back on family portraits. The camera picks up a lot of "dark" details. Skin displays pale and veins are very defined. My nieces called it "the vampire camera".


I've asked my dermatologist about using wide-spectrum cameras to better diagnose conditions. She said it's being discussed and researched.

TMI: I have an autoimmune disease affecting my skin that my care givers have attempted to track with photos over the years. It's not very effective. But with just the human eye and just the right lighting, it's much easier to see what's what. Sunlight vs fluorescent, angled vs straight on.


Can you share any photos?


Here's a day time shot, which exhibits a bit of this "HDR"-effect: https://youtu.be/Y2zfG6kY7Ns


hey! seeder siddy, utarh.


There are far-infrared cameras that capture thermal radiation in the 9-15 µm band. They nicely allow you to see in complete darkness. They do not use CCDs but rather microbolometers.

But they are expensive. 640x480 can cost over 10,000 USD, and cameras with smaller resolution, like those used in high-end cars, still cost over a thousand USD.


FLIR recently started selling their low-resolution uncooled sensors in small quantities:

* 80x60 resolution for $200

* 160x120 resolution for $240 (doesn't seem to be in stock on Digikey though)

I've used the first one- resolution is quite low, but a few years ago you couldn't even buy such sensors as a hobbyist (example image: http://faili.wot.lv/tmp/IMG_20160320_163431.jpg)

These are the same sensors which are used in FLIR smartphone-attached thermal cameras.


Direct link to manufacturer or supplier:

https://www.x20.org/color-night-vision/


So maybe Peltier on the sensor, heat sink attached to body, body hermetically sealed. Sensors probably tested for best noise quality (probably a really low yield on that).


This is incredibly cool. You can even see how other sources of light actually have an effect on the environment as if they were their own suns.


Shutter time variation or mechanical iris closing depending on the amount of incident light to change the exposure?

Otherwise with any direct source of light in an image it would immediately overexpose (in some low light cameras that could even damage the sensor). It's super impressive.


Put one of these on a drone and you'll break a lot of people's assumptions about their privacy


Why can we see terrain much better than with the naked eye, but stars definitely worse? There isn't even a hint of the Milky Way, which should be easily visible to the naked eye in a desert with zero light pollution.

Or was this shot during a full moon, carefully avoiding it getting in frame? Then it is not much better than the (fully adapted) naked eye.



I wonder what the sensor is made of. I would bet on there being a fair bit of Germanium in there.

PS: probably wrong about that, silicon's band gap is more suited to optical spectrum, even though germanium has more electron mobility. I'm speculating now that they're using avalanche photodiodes.

https://en.m.wikipedia.org/wiki/Avalanche_photodiode


I've dreamed of such a camera for decades. I thought the technology was at least 10+ years away. This is what science fiction is made of.


Someone should contact this company and volunteer to redesign their website (https://www.x20.org). They should also be told that "complimentary" and "complementary" don't mean the same thing.

They have a great product, unfortunately presented on a terrible website.


Why should someone volunteer to redesign their site? They are a private company; they can pay someone to do that. This isn't a charity.


> Why should someone volunteer to redesign their site.

I didn't mean volunteer without pay. You must be aware that, for missions so unpleasant as to be sanity-threatening, the military only accepts volunteers. Same idea.


I guess the idea of the comment was that the product is so cool, it is reasonable that people just want to help out gratis. Like interns feel totally fine offering to work for free at a rocketship startup.


Looks like they're aiming to sell mostly to military customers. The sales process is probably high-touch, in-person sales - a customer going to the website is probably an infrequent occurrence.


They want to sell to the army, so I guess their aesthetics must match those of military vehicles.


Here is the manufacturer's web site: https://www.x20.org/color-night-vision/

There is a 'shoot out' video on that page which compares themselves to other night vision technologies. Pretty impressive demo.


It occurs to me that this technology could do absolutely amazing things as the imager for a space telescope...


Is that Red Rocks (just outside of Las Vegas)? There are a lot of man-made light sources there that scatter light pretty far and in a lot of directions (the Luxor spotlight comes to mind). I wonder if that could have an effect on this camera's performance.


Here's the manufacturer website on it: https://www.x20.org/color-night-vision/


I want to know what this means for observational astronomy. Can we put this in the eyepiece of a telescope and discern features in nebulae that otherwise look like gray blobs to unaided vision?


Telescope cameras are already way past this. This camera has the advantage of being cheaper and smaller.


Not at all. Even when cooled well below zero and without any Bayer filter, I never saw something like this with less than 1/10 s of exposure.


I suppose it's a matter of resolution. ARCONS is an IR photon counter with 44x46=2024 pixels. I don't know how that would correspond with a camera's ISO, but you can't get much better than single photons. I'd imagine it's useless to install in your ground telescope, but perhaps in a couple decades, the resolution will be scaled up to compete with current optics imaging methods.

https://arxiv.org/abs/1306.4674


Photon counters were not what I had in mind when you were speaking about astronomy cameras. I remember that around 20 years ago we had 1/100th of magnitude resolution with photon counters compared to 1/10th of the CCDs (SBIG &co.)


Why is the night sky blue? Is that really scattered starlight?


I can't think of any other light source.


Because air is blue.


Has no one considered a neural net post processor which has been trained on daylight views? Seems like an obvious method, given Hacker News...


What a quandary: We see military weapons technology put to terrible use all the time, and yet, so much technology shows up in (US) military use first.


I don't see the quandary. The US spends trillions (or something) on military R&D. There's nothing special about military R&D per se that make it produce better and greater technology than other reasons to do research. In fact, because of the particular focus, it probably produces quite a bit of technology that doesn't even have any non-terrible use, even if you're really creative about it.

It's a numbers game. If you throw more spaghetti at some particular wall than anybody else or any other wall, then obviously that's where most of it will stick ...

It's mostly that humans are really easy to convince to hate and want to kill each other, as long as the numbers are large enough that you can forget they're individuals. So obviously the other dominant lifeforms on this planet (super-organisms, in this case multinational corporations) exploit this loophole for control and power.


ELI5: what does an ISO of 5 million mean?


Basically it's a measure of how much light gets into the camera, in lumen*s/m^2. It defines the practical limit for how fast you can run a video camera and still get decent exposure.
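
For intuition, each doubling of ISO is one photographic stop, i.e. it needs half the light for the same output brightness; a quick calculation, with ISO 100 taken as the base:

    import math

    def stops_over_base(iso, base_iso=100):
        return math.log2(iso / base_iso)

    print(round(stops_over_base(6_400), 1))        # ~6 stops over ISO 100
    print(round(stops_over_base(409_600), 1))      # 12 stops (Sony A7S II ceiling)
    print(round(stops_over_base(5_000_000), 1))    # ~15.6 stops (this camera)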


How do you measure a lumen?


Lumens are very misleading as they're entirely based on human vision sensitivity. https://en.m.wikipedia.org/wiki/Luminosity_function

Naturally, the closer to the 555nm green peak, and the more monochromatic your light source, the brighter it appears to be. This is a part of why night vision systems are green.

You measure a lumen by taking the area under your light source's spectrum weighted by the luminosity function.
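
That weighted-area calculation, sketched; the Gaussian stand-in for V(lambda) is only a rough approximation of the real CIE photopic curve:

    import numpy as np

    # Luminous flux in lumens = 683 lm/W * integral of spectral power (W/nm)
    # weighted by the photopic luminosity function V(lambda), peaking at 555 nm.
    wavelengths_nm = np.arange(380.0, 781.0, 1.0)
    V = np.exp(-0.5 * ((wavelengths_nm - 555.0) / 45.0) ** 2)   # crude V(lambda)

    def lumens(spectral_power_w_per_nm):
        return 683.0 * np.trapz(spectral_power_w_per_nm * V, wavelengths_nm)

    # Example: 1 W spread evenly across the visible band.
    flat_1w = np.full_like(wavelengths_nm, 1.0 / len(wavelengths_nm))
    print(round(lumens(flat_1w)))    # on the order of 200 lm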

I did this years ago with some manufacturers' LED light output curves to reverse-engineer electrical->photon efficiency vs cost from their data sheets. The main purpose was to figure out how close we were to LED lighting taking over.

I did that research back in 2012 and predicted 2015 as the point where LED was more efficient and cheaper than CFL but I was off by a year or so. Hilariously that paper got me a C- because it was only tangentially related to the teacher's pet subject and never saw the light of day.


Sensor gets more sensitive to light as you increase voltage. That's why you see all the noise as well.


Color vision at night.


Is this real? :-O


Their website is a thing of beauty. It's straight out of the Timecube school of design. https://www.x20.org/color-night-vision/



