Interesting how the three separate R/G/B images are taken independently a few ~seconds~ milliseconds apart, with a different color filter placed over the image sensor.
I assume this is to maximize resolution, since no Bayer interpolation [0] is needed to demosaic the output of a traditional image sensor that integrates the color filters onto the sensor pixels themselves. As these satellites are not intended to photograph things in motion, the color channel alignment artifacts seen here are a rare, small price to pay for vastly improved resolution and absence of demosaicing artifacts.
These are pushbroom linear sensors stacked up in the focal plane. The spectral channels are physically separated by a larger distance than neighboring pixels in a Bayer grid. There is a time delay between each channel sweeping over the same location that gets corrected when the final imagery is aligned. The moving plane at altitude violates the assumption of a static scene and exposes the scanning behavior.
The B-2 bomber photo isn't from a pushbroom sensor, because the plane would be distorted since it's moving relative to the camera. This would manifest as a shear along the motion vector. There is no shear in this photo, only offset spectral bands.
I've spent a number of years doing image orthorectification and satellite imagery post-processing, then also worked on Google Earth and Google Maps, so I can tell you precisely what happened. This is an image from a satellite that photographs different spectra at different times. There isn't enough parallax at typical orbital distances for this much displacement to come from multiple cameras. Looking at some parked cars, this looks like 20-30 cm per pixel, which is consistent with the newer Earth-imaging satellites.
Agree it wouldn't be parallax from multiple cameras on the same satellite, but the delay between images seems quite short for physical filter swapping (~20 ms per filter).
(Edit: Actually DLP does it in about that amount of time, nvm)
You can get height if you can find the shadow, and find the time that the photo was taken, because at any given time and date, you can compute sun position, and you know the precise lat/lng of the shadow. The shadow would need to be in the same photo, though. (actually, it's a bit to the northwest, just found it).
If you can find a vertical structure, like an antenna mast, you can measure shadow length and direction, and if you know the height of the antenna, you have enough information for some trigonometry.
Speed is harder. You'd need to know the time difference between exposures.
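Roughly, in Python with astropy (the timestamp and coordinates below are placeholders, not the actual capture metadata, so treat this as a sketch of the method rather than a real answer):

    # Height of the plane from the plane-to-shadow offset and the sun's position.
    # The capture time and location are made-up placeholders; you'd need the real
    # acquisition timestamp from the imagery provider for this to mean anything.
    from math import tan, radians

    import astropy.units as u
    from astropy.coordinates import AltAz, EarthLocation, get_sun
    from astropy.time import Time

    site = EarthLocation(lat=38.9 * u.deg, lon=-93.7 * u.deg)  # placeholder lat/lon
    when = Time("2021-07-15 17:10:00")                         # placeholder UTC capture time

    sun = get_sun(when).transform_to(AltAz(obstime=when, location=site))
    elevation = radians(sun.alt.deg)

    shadow_offset_m = 740.0  # horizontal plane-to-shadow distance measured on the map
    height_m = shadow_offset_m * tan(elevation)
    print(f"sun elevation {sun.alt.deg:.1f} deg -> plane roughly {height_m:.0f} m above ground")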
Thanks! It's conveniently hidden by a grassy area and some trees which throw shadows of their own. Actually, I should be doing something more useful at the moment...
We know that telephone poles stick up about 32 feet out of the ground. Measuring the shadow in Google Earth above, it's about 20 feet. Now it's a simple ratio.
Using Google Earth, I measured the distance from the plane to its shadow at about 2,430 feet; that would put it at about 3,900 ft, which is surprisingly low, but I don't know the height of that pole for sure. It's in the right ballpark, though.
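Spelled out (the 32-foot pole height is the assumed input; the two distances are the Google Earth measurements above):

    # Same sun angle for everything, so:
    # pole height / pole shadow = plane height / plane-to-shadow distance.
    pole_height_ft = 32.0        # assumed typical utility pole height
    pole_shadow_ft = 20.0        # measured in Google Earth
    plane_to_shadow_ft = 2430.0  # measured in Google Earth

    altitude_ft = plane_to_shadow_ft * (pole_height_ft / pole_shadow_ft)
    print(f"roughly {altitude_ft:.0f} ft above ground")  # ~3888 ft, i.e. about 3,900 ft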
> This one was probably on final approach during the summer when the winds are usually out of the south and was probably about 2,000 feet above the ground.
The B-2 is based out of Whiteman AFB near Warrensburg, MO, roughly 25 miles (40 km) south of where the plane is, so it is probably on its final approach to land or has just taken off.
> Basically a 1D sensor that acquires a 2D image because the camera itself is moving.
Or a static camera and a moving subject. This same style of camera is used for "photo finishes" in races. The camera is literally the finish line, and the first pixel to "cross the line" is the winner.
I haven't seen a CMOS-based CCD yet, and I don't _think_ microbolometers are CMOS-based (and I wouldn't be surprised if no one has publicly tried).
Happy to see some exotic examples, though.
Those always pique the hacker's curiosity.
Most CMOS imaging sensors in digital cameras do scan, but not like a push-broom sensor. The push-broom sensor only has a single line of pixels that physically moves (or that something moves across), like in a scanner or a photocopier. But as others have said, those sensors might also be fabricated with CMOS technology.
Whereas your normal digital camera has a fixed rectangular array of pixels, and the scanning is just reading them out progressively (what's called 'rolling shutter') because it's hard to read them all quickly enough at once. Some more expensive sensors do 'global shutter', which usually has better motion characteristics for video.
Exactly! I'm working on some stuff now to exploit this very effect; the small time difference between the bands can be used to find velocities of moving objects.
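The gist of it, with made-up numbers (the ground sample distance and the band-to-band delay below are assumptions for illustration, not real satellite metadata, and it assumes the bands have already been co-registered to the static ground):

    # Velocity of a moving object from the apparent offset between two spectral bands.
    import numpy as np

    gsd_m = 0.3          # assumed ground sample distance, m/pixel
    band_delay_s = 0.2   # assumed delay between the two band acquisitions, s
    offset_px = np.array([55.0, 20.0])  # measured band-to-band offset of the object (x, y), pixels

    velocity_mps = offset_px * gsd_m / band_delay_s
    speed_mps = float(np.hypot(*velocity_mps))
    print(f"velocity components {velocity_mps} m/s, speed about {speed_mps:.0f} m/s")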
That's not what I'm doing, but it seems possible you could watch refraction patterns and tell that there are submerged objects. You'd need to know the water depth without the object there, though, as there'd be no great way to tell whether the object is a submarine or a pile of sand. Also, most of these satellites pass over fairly rarely, so they wouldn't be super useful for tracking things that move. I'm mostly interested in finding the piles of sand :)
It depends on the satellite. Weather satellites in GEO do, because they aren't moving relative to the surface and they have time constraints for scanning the full Earth disc. Higher-resolution imaging satellites use linear sensors. However, they use TDI imaging, which at a low level makes them superficially like an area array with limited vertical resolution, but the light collection is still fundamentally different.
I don't think this is the case, as we don't see any rolling-shutter distortion. A linear sensor scans just like a shutter curtain, exhibiting the same distortion, yet the plane is proportioned correctly. Maybe the scan time is fast enough.
The Google Maps scale shows the plane as being pretty close to that length as well. I think the satellite is high enough up that altitude scaling for aircraft is going to be minimal.
Depends on the needed resolution. You can cover a lot more of the Earth, far more cheaply, with a satellite with several-meter-wide pixels.
When I used to work in remote sensing I remember that Bing Maps had a very distinct resolution transition from spacecraft to aircraft. Aircraft coverage maps were a small subset of the total earth land area, but most US towns had some coverage at that time (~2012).
As you zoomed in, you could tell when Bing switched capture methods, because the aircraft cameras were clearly off-nadir and you'd see off-angle perspective differences.
They only use aircraft around larger urban areas. Satellite imagery is used for rural areas. You can see the resolution difference by seeing how far you can zoom in.
If you zoom in on a road near the pinned location for this post, and then go to, say, downtown Kansas City and zoom in, you'll see a pretty significant difference.
Flying a plane around the planet sounds a lot more expensive financially and thermodynamically than just beaming down images from an object constantly falling into the views you want to see.
It's definitely a mix, depending on the area and the resolution. For urban places where they have 3D views and higher resolution, it's definitely planes. For a random place like this at low resolution, it's satellites.
The satellites are probably greater than 400 km altitude, or so, while the plane is probably 10 km. The plane is in the same rough order of magnitude as really tall mountains, which the satellite is presumably designed to compensate for, so I agree with your general assessment.
The shadow is 750 m away and the sun is about 15 degrees off noon, so the B-2's altitude is roughly 3 km. A cousin comment says the satellite altitude is 614 km, so I wouldn't expect parallax effects to be dominant. This is also evidenced by the fact that both the plane and the ground are in focus, and the plane measures 70 feet long in Google Maps, which is what it is specified to be in reality.
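That estimate, spelled out (reading "15 degrees off noon" as a solar zenith angle of about 15 degrees, which is roughly right for Missouri around midsummer noon):

    from math import tan, radians

    shadow_offset_m = 750.0  # plane-to-shadow distance from the map
    sun_zenith_deg = 15.0    # sun ~15 degrees off vertical (assumed)

    altitude_m = shadow_offset_m / tan(radians(sun_zenith_deg))
    print(f"roughly {altitude_m:.0f} m")  # ~2800 m, i.e. about 3 km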
You're absolutely correct, but as the aircraft is much closer to the ground than the imaging satellite I disregarded it. Spherical cows and all that :) I'm sure a military analyst would do the calculations!
It actually works out to be pretty close. A satellite flying ~17k mph at 250 miles altitude would largely have the same angular displacement from the ground as a plane moving 400 mph at 31,000 feet. Most satellites fly in an easterly direction, so the question would ultimately be what the inclination of the orbit is relative to the direction of flight.
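Quick sanity check on that (just unit conversions; the angular rate seen from directly below is roughly speed divided by altitude):

    # Angular rate (rad/s) of an overhead object as seen from the ground ~ speed / altitude.
    SEC_PER_HR = 3600
    FT_PER_MI = 5280

    sat_rate = (17_000 / SEC_PER_HR) / 250                   # 17,000 mph at 250 mi altitude
    plane_rate = (400 / SEC_PER_HR) / (31_000 / FT_PER_MI)   # 400 mph at 31,000 ft

    print(f"satellite ~{sat_rate:.4f} rad/s, plane ~{plane_rate:.4f} rad/s")  # both ~0.019 rad/s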
I think this plane is flying much lower and slower. The B-2s are based at Whiteman AFB, which is about 25 miles south of this photograph. The prevailing winds in this part of the country are from the south, so I suspect that it is about to turn onto the final leg of its approach for a landing to the south. I'd guess that it's no more than 8-10K feet and traveling at 200-300 knots max.
Definitely. The rule of thumb in a jet is to slow down if you're closer (in nautical miles) than 3x your altitude in thousands of feet. So at 8k feet you want to be at least 24 nautical miles (about 27 statute miles) away, or if closer you want to be slowing down (or descending). If you don't slow down before, you're going to have to do so in the descent, and typical jets can't really slow down much while descending.
Most imaging satellites are in sun-synchronous orbits, so in this case it was likely moving north or south and slightly retrograde, i.e. with a westward component.
I'd bet orbital motion is negligible here, because the distortion seems to the eye to be entirely in the direction of the plane's apparent motion vector, and I don't see any significant skew to it.
That's super interesting! I love this discussion. Would the satellite be that low though? 250 miles (400km) is still going to experience significant atmospheric drag. Fine for a cheap mass deployment like Starlink, but I'd expect an imaging satellite to be set up for a longer mission duration. Then again maybe the cost of additional mass for station-keeping is worth it for the imaging quality.
According to the image credit on Google Maps, it was taken by Maxar. Maxar appears to have a few satellites: looks like 5 in geostationary orbit (~35,700 km), 3 at about 400 km, and 1 at about 500 km.
Interesting, I would have made the opposite assumption ("the satellite is moving so much faster than everything else, we can estimate the interval from the satellite's speed alone"). It seems both speeds might have a similar impact, as per the sibling comment.
What an awful picture of the NRO Director. Surely they have plenty of experience in photography in less than ideal conditions. So why have this photo as the first thing I can see on their website?
The juxtaposition of the Ursa Major and Onyx/Vega patches almost gave me whiplash. Judging by the list of patches it seems like NRO is a fun place to work. Thanks for sharing!
Fun is maybe not quite the word I'd use. The juxtaposition of all the wording about "defense" and menacing-looking characters that would definitely be the villain and not the hero in any comic book or movie is... unsettling. Even if it's a self-aware joke about their public image (like the squid giving the whole Earth the hug of death), it's maybe gone a bit too far?
The gorilla holding the US flag is my favourite. And it might just be me, but the Nemesis badge has to be a sphincter, right? It reminds me of the Greendale flag in Community.
I like how they quit looking like patches and just turned into stickers. Even with a black budget, they still had to cut corners and reduce spending on incidentals too?
Click the timestamp for the comment, then click "favorite", then set a reminder in your calendar to check your favorited comments (link in your profile).
What do you mean by “copy/paste cache”? If you mean the clipboard history, there’s no such thing on macOS. If you mean the clipboard (= the last thing you copied), Siri can work with the Shortcuts app, which itself can read the clipboard.
Doing it this way not only allows higher resolution, but you can also use different types of filters for specialist applications: infrared, for example, or some kind of narrow band-pass filters, etc.
Nonsense. Landsat 8 bands 5, 6, and 7 are infrared bands (5 is NIR, and 6 & 7 are shortwave or SWIR), and bands 10 and 11 are lower-resolution thermal infrared (TIRS).
I think what they meant is that prior to the "digital farming revolution", if you called up a satellite/aerial imagery company and requested anything other than visible spectrum, you'd not only get some strange looks - they'd also alert the government.
I grabbed my red and blue glasses because this vaguely looked like a 3D anaglyph to me, and while it's not the clearest such image, it definitely appears as a cohesive object hovering above a flat map. The further I zoom out, the more it looks like it. An inadvertent byproduct of however this was photographed or stitched?
Is it possible this is a pushbroom sensor, where red, green, and blue sensors are moved along the path of the plane or satellite that's taking the image? My understanding is those have a higher effective resolution than frame-based sensors, which would make sense here.
Seems unlikely to me. With a pushbroom, I'd think you'd get characteristic squishing, stretching, or skewing of objects in motion, depending on whether the object was moving against, with, or perpendicular to the pushbroom path. In this case, with the imaging satellite probably in polar orbit and the plane flying east, you'd likely see the plane horizontally skewed as each horizontal row of pixels captured the plane further east.
Also came here to discuss this. Super interesting opportunity to reverse what they're doing. Yeah, I'd agree it's color filters, which would avoid duplicating lenses (probably the most expensive component) or dividing the light energy. Filters would give you the most photons per color per shot. I wonder if the filters are mechanical or electronic somehow. As another commenter calculated, it's ~10 ms between shots, which is not that impractical for moving physical filters around when you consider that modern consumer camera shutter speeds top out around 0.125 milliseconds (1/8000 s).
Your guess is exactly right, this type of imaging is pretty common with satellite imaging for the reasons you mentioned. Scott Manley talks about it in this video iirc https://youtu.be/Wah1DbFVFiY
I wonder if there are ways to get superior (compared to Bayer) color images from conventional raw-capable monochrome digital cameras (like Leica Q2 Monochrom) in static scenes using similar workflows of taking three separate shots.
Many cameras, especially medium format like Fuji GFX, can do "pixel shift" capture, where they use the sensor stabilization to shift the sensor by one pixel at a time to ensure red, green, and blue are each captured at all pixels, without needing demosaicing. The camera has to be mounted on a tripod for this to work - like any stacking requires, really.
Initially I thought pixel shift might be the way to go, but stumbling across [0] I got the impression it isn't necessarily as great as it sounds (and, I guess, neither would be any three-shot RGB-filter-swapping composite).
In fact, the first color photos in the world were taken this way in the early 1900s [1]: take three monochrome exposures onto black and white transparencies with R/G/B filters, and then project the three images together.
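If anyone wants to try this with a modern monochrome digital camera, the merge itself is the easy part once you have three registered grayscale frames (the file names below are hypothetical, and this assumes a tripod and a static scene so the frames line up; exposure matching and white balance are left out):

    # Stack three monochrome exposures (shot through R, G and B filters) into one color image.
    import numpy as np
    from PIL import Image

    red   = np.asarray(Image.open("shot_red_filter.tif").convert("L"))
    green = np.asarray(Image.open("shot_green_filter.tif").convert("L"))
    blue  = np.asarray(Image.open("shot_blue_filter.tif").convert("L"))

    rgb = np.dstack([red, green, blue])  # H x W x 3, one full-resolution sample per channel
    Image.fromarray(rgb).save("combined_rgb.tif")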
Interesting. Sadly, I believe the Photoshop approach would not produce a DNG file that you could normally interpret in a raw processor of your choice. Looks like pixel-shift cameras (which apparently still use Bayer sensors, just moving them around to debayer at capture time, so to speak) are the most practical option at the moment.
I’m curious if it is technically possible for a digital sensor to capture the entire light spectrum of the scene, without the need for RGB separation at any stage—similar to Lippmann plate[0].
Though they're not as good in most other dimensions as conventional Bayer-filter sensors (most Amazon reviews, e.g., say that it's impossible to photograph moving human subjects without daylight, iirc).
I looked at Foveon before, and got the impression that it still effectively relies on RGB separation/recombination, though in a more elegant way than Bayer sensors. (That said, Sigma’s doing some really cool stuff.)
It's standard when using reflective/catadioptric objectives to peek back at where the satellites are: https://www.astroshop.eu/filter-wheels-filter-sliders/starli...
This one has USB; the same style with manual operation via a toothed wheel is about 169 EUR at that shop.
Or consider rigging up a burnt-out DLP projector's color filter wheel, which comes with a motor and encoder that you could use to capture imagery through the filters it provides at much higher frame rates (a few hundred fps in the monochrome domain seems usual these days, AFAIK, for good consumer DLP projectors).
That sounds very interesting. I feel like a firmware/hardware automation combo that swaps RGB filters and takes exposures in sync could provide interesting workflows for owners of those monochrome digital cameras.
I wouldn't be surprised if these military bombing dudes not only took R, G, and B, but also near-IR, far-IR, UV, and various other things. If they did, it would make no sense to try to make a Bayer filter for all of that; rather, just have multiple filtered cameras pointed in the same direction.
Different wavelengths travel at different speeds through glass and air and focus differently, even with the same lens, when color photos are taken in a single frame.
[0] https://en.m.wikipedia.org/wiki/Bayer_filter