
Interesting how the three separate R/G/B images are taken independently a few ~seconds~ milliseconds apart, with a different color filter placed over the image sensor.

I assume this is to maximize resolution, since no Bayer interpolation [0] is needed to demosaic the output of a traditional image sensor that integrates the color filters onto the sensor pixels themselves. As these satellites are not intended to photograph things in motion, the color channel alignment artifacts seen here are a rare, small price to pay for vastly improved resolution and absence of demosaicing artifacts.

[0] https://en.m.wikipedia.org/wiki/Bayer_filter




These are pushbroom linear sensors stacked up in the focal plane. The spectral channels are physically separated by a larger distance than neighboring pixels in a Bayer grid, so there is a time delay between each channel sweeping over the same location, which gets corrected when the final imagery is aligned. The moving plane at altitude violates the assumption of a static scene and exposes the scanning behavior.
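
As a rough illustration of the timing involved, here is a back-of-the-envelope sketch (every number below is an assumption chosen for illustration, not a spec for any particular satellite):

  # Back-of-the-envelope inter-band delay for a pushbroom sensor.
  # All values are illustrative assumptions, not specs of any real instrument.
  ground_sample_distance_m = 0.3    # assumed size of one pixel on the ground
  ground_track_speed_m_s = 7000.0   # assumed speed of the sensor's ground trace
  band_separation_lines = 300       # assumed gap between band arrays, in line widths

  line_period_s = ground_sample_distance_m / ground_track_speed_m_s
  inter_band_delay_s = band_separation_lines * line_period_s
  print(f"line period ~ {line_period_s * 1e6:.0f} us, "
        f"band-to-band delay ~ {inter_band_delay_s * 1e3:.0f} ms")

With these made-up numbers you get a delay of roughly 13 ms, which lands in the same ~10-20 ms range people estimate further down the thread.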


The B2 bomber photo isn't from a pushbroom sensor, because the plane would be distorted since it's moving relative to the camera. This would manifest itself as a shear along the motion vector. There is no shear in this photo, only spectral band offsets.

I've spent a number of years doing image orthorectification and satellite imagery post-processing, and then worked on Google Earth and Google Maps, so I can tell you precisely what happened. This is a satellite image from a satellite which photographs different spectra at different times. There isn't enough parallax at typical orbital distances for this much displacement to come from multiple cameras. Looking at some parked cars, this looks like 20-30 cm resolution per pixel, which is consistent with the newer earth imaging satellites.

Google Earth Blog wrote a post about this a while ago (https://www.gearthblog.com/blog/archives/2015/03/planes-flig...)


Agree it wouldn’t be parallax from multiple cameras on the same satellite, but the delay between images seems quite short for physical filter swapping (~20 ms per filter).

(Edit: Actually DLP does it in about that amount of time, nvm)


Shame we don't know what the delay between spectra is; it would be fun to work out how fast the plane is traveling.


I've seen larger aberrations from satellite captures of planes. But I've no idea how to estimate height or speed by that.


You can get height if you can find the shadow and the time that the photo was taken, because for any given time and date you can compute the sun's position, and you know the precise lat/lng of the shadow. The shadow would need to be in the same photo, though. (Actually, it's a bit to the northwest, just found it.)

If you can find a vertical structure, like an antenna mast, you can measure shadow length and direction and if you know the height of the antenna, you have enough information for some trigonometry.

Speed is harder. You'd need to know the difference between exposures.


For those interested, here is the shadow: https://www.google.com/maps/place/39%C2%B001'28.8%22N+93%C2%...

Took me a while to find it, so thought I'd save someone else the scrolling/search cost.


Thanks! It's conveniently hidden by a grassy area and some trees which throw shadows of their own. Actually, I should be doing something more useful at the moment...


Aha! Here we go. Here's a telephone pole: https://earth.google.com/web/search/39.0303298,-93.58495/@39...

We know that telephone poles stick up about 32 feet above the ground. Measuring the shadow in Google Earth above, it's about 20 feet. Now, it's a simple ratio.

Using Earth, I measured the distance from the plane to its shadow at about 2,430 feet; that would put the plane at about 3,900 ft, which is surprisingly low, but I don't know the height of that pole for sure. It's the right ballpark, though.
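
Spelled out, the ratio argument looks something like this (a sketch using the rough measurements above; the 32-foot pole height is an assumption):

  import math

  # Same sun, same moment: height / shadow length is the same ratio for the pole
  # and for the plane. All inputs are the rough measurements from this thread.
  pole_height_ft = 32            # assumed typical utility pole height
  pole_shadow_ft = 20            # pole shadow length measured in Google Earth
  plane_shadow_offset_ft = 2430  # plane-to-shadow distance measured in Google Earth

  altitude_ft = plane_shadow_offset_ft * pole_height_ft / pole_shadow_ft
  sun_elevation_deg = math.degrees(math.atan(pole_height_ft / pole_shadow_ft))
  print(f"altitude ~ {altitude_ft:.0f} ft, implied sun elevation ~ {sun_elevation_deg:.0f} deg")
  # altitude ~ 3888 ft, sun elevation ~ 58 deg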


Not too low if landing? From another comment:

> This one was probably on final approach during the summer when the winds are usually out of the south and was probably about 2,000 feet above the ground.

https://news.ycombinator.com/item?id=29628167


Probably headed to Whiteman AFB. Here's one parked: https://www.google.com/maps/place/38%C2%B043'29.7%22N+93%C2%...


The B2 is based out of Whiteman AFB near Warrensburg, MO, roughly 25 miles or 40 km south of where the plane is, so it is probably on its final approach to land or has just taken off.


If it’s anything like space telescopes, there are more than 3 spectral filters that can be selected, which rules out a static Bayer-type filter system.


For anyone wondering what a pushbroom sensor is - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72....

Basically a 1D sensor that acquires a 2D image because the camera itself is moving.

In this case, I'd imagine that there are 3 such sensors for RGB.


FWIW, depending on the satellite (this one is probably WorldView-3), there are usually more like ~6-7 channels in the visible.

It makes for easier top-of-atmosphere correction, and can be useful for things other than pretty pictures.


> Basically a 1D sensor that acquires a 2D image because the camera itself is moving.

Or a static camera and a moving subject. This same style of camera is used for "photo finishes" in races. The camera is literally the finish line, and the first pixel to "cross the line" is the winner.


Also office scanners. Basically, it's still a 2D image, but instead of two spatial dimensions, it has a spatial dimension and a time dimension.


Is CMOS a type of pushbroom sensor?


CMOS is a manufacturing method for integrated circuits.

Any type of sensor could be CMOS.


I haven't seen a CMOS-based CCD yet, and I don't _think_ microbolometers are CMOS-based (and I wouldn't be surprised if no one has publicly tried). Happy to see some exotic examples, though. Those always pique the hacker curiosity.


Thanks for answering!


Most CMOS imaging sensors in digital cameras do scan, but not like a push-broom sensor. The push-broom sensor only has a single line of pixels that physically moves (or that something moves across), like in a scanner or a photocopier. But as others have said, those sensors might also be fabricated with CMOS technology.

Whereas your normal digital camera has a fixed rectangular array of pixels, and the scanning is just reading them progressively (what's called 'rolling shutter') because it's hard to read them all quickly enough at once. Some more expensive sensors do 'global shutter' which usually has better motion characteristics for video.


Is the R/G/B separation greater because these sensors are built like TDI CCD lines?

edit: Seems like pushbroom is the name for TDI when it's put in a satellite.


Exactly! I'm working on some stuff now to exploit this very effect: the small time difference between the bands can be used to find the velocities of moving objects.
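
In spirit it's just displacement over time, something like the sketch below (every number is made up for illustration; the inter-band delay would have to come from the sensor's metadata, and this ignores parallax from the object's altitude):

  # Sketch: ground velocity from the per-band displacement of a moving object.
  # All inputs are illustrative assumptions.
  gsd_m = 0.3              # assumed ground sample distance, metres per pixel
  band_dt_s = 0.010        # assumed delay between the two bands being compared
  offset_px = (4.0, 2.0)   # measured (row, col) shift of the object between bands

  vy = offset_px[0] * gsd_m / band_dt_s
  vx = offset_px[1] * gsd_m / band_dt_s
  speed_m_s = (vx ** 2 + vy ** 2) ** 0.5
  print(f"ground speed ~ {speed_m_s:.0f} m/s (~ {speed_m_s * 1.944:.0f} kt)")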


The same effect can be used to estimate cloud heights, and to identify aircraft flying above clouds.

Parallax based Cloud Detection for Sentinel-2 Analysis Ready Data Generation https://www.researchgate.net/publication/332999809_Parallax_...

Aircraft Detection above Clouds by Sentinel-2 MSI Parallax https://www.mdpi.com/2072-4292/13/15/3016


Is this at all analogous to how we used red/blue shift to determine the speed at which astronomical objects are moving towards/away from us?


No, that’s the Doppler effect. This is a time delay between when different colors were captured.


Ah gotcha. Makes sense. Thanks!


This is incredibly cool. What's the application (if it's possible to obscure whatever details you don't want to divulge)?


It's coastal physics work: you can infer near-shore conditions by watching wave propagation.


So... like where submarines are based on water disturbances?


That's not what I'm doing but it seems possible you could watch refraction patterns and tell that there are submerged objects, but you'd need to know the water depth without that object there, as there'd be no great way to tell if the object is a submarine or a pile of sand. Also, most of these satellites pass over fairly rarely so wouldn't be super useful for tracking things that move. I'm mostly interested in finding the piles of sand :)


So... does anyone have a tool handy to correct the frames?

Better yet... does anyone have the corrected image?
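
For the curious, here's roughly what such a tool could do: a brute-force realignment sketch (untested; assumes a saved screenshot of the area, and the crop coordinates are hypothetical - you'd crop tightly around the plane, since the already-aligned ground would otherwise pin the best shift at zero):

  # Sketch: realign the R and B channels to G by brute-force shift search.
  import numpy as np
  from PIL import Image

  img = np.asarray(Image.open("plane.png").convert("RGB"), dtype=float)
  crop = img[200:600, 300:800]   # hypothetical region around the plane

  def best_shift(ref, ch, max_shift=30):
      """Return the (dy, dx) shift of ch that best correlates with ref."""
      best, best_score = (0, 0), -np.inf
      for dy in range(-max_shift, max_shift + 1):
          for dx in range(-max_shift, max_shift + 1):
              score = np.sum(ref * np.roll(ch, (dy, dx), axis=(0, 1)))
              if score > best_score:
                  best, best_score = (dy, dx), score
      return best

  r, g, b = crop[..., 0], crop[..., 1], crop[..., 2]
  ref = g - g.mean()                       # zero-mean so flat areas don't dominate
  dr = best_shift(ref, r - r.mean())
  db = best_shift(ref, b - b.mean())
  aligned = np.stack([np.roll(r, dr, axis=(0, 1)), g, np.roll(b, db, axis=(0, 1))], axis=-1)
  Image.fromarray(aligned.clip(0, 255).astype(np.uint8)).save("plane_aligned.png")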


I must say there is quite some beauty in the choice and structure of words here.


Funnily enough, I thought exactly the same when reading the parent post ! Grateful to be in such well-spoken company :)


I think modern satellites have matrix sensors


It depends on the satellite. Weather satellites in GEO do, because they aren't moving relative to the surface and they have time constraints for scanning the full earth disc. Higher-resolution imaging satellites use linear sensors. However, they use TDI imaging, which at a low level makes them superficially like an area array with limited vertical resolution, but the light collection is still fundamentally different.


I don’t think this is the case, as we don’t see any rolling shutter distortion. The linear sensor scans just like a shutter curtain, exhibiting the same distortion. The plane is proportioned correctly. Maybe the scan time is fast enough.


My napkin calculation is that the three channels are ~10 ms apart, given the bomber is 69 feet nose-to-tail* and an assumed 400 mph velocity.

* http://www.dept.aoe.vt.edu/~mason/Mason_f/B2Spr09.pdf


The Google Maps scale shows the plane as being pretty close to that length as well. I think the satellite is high enough up that altitude scaling for aircraft is going to be minimal.

I see about a 10-foot separation between colors.

Time = distance / velocity.

400 mph is about 587 ft/s.

10 ft / 587 ft/s ≈ 17 ms
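
Or as a one-liner sketch (same inputs as above):

  # Same napkin math: inter-band delay from an assumed ground speed and the
  # ~10 ft color separation measured against the Google Maps scale.
  speed_fps = 400 * 5280 / 3600          # 400 mph ~ 587 ft/s (assumed speed)
  band_offset_ft = 10                    # measured separation between color channels
  print(f"{band_offset_ft / speed_fps * 1000:.0f} ms between bands")   # ~ 17 ms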


Do they actually use satellites for their "satellite" view? I always assumed that it was done by aircraft.


Depends on the needed resolution. You can cover a lot more earth, far more cheaply, with a satellite whose pixels are several meters wide.

When I used to work in remote sensing I remember that Bing Maps had a very distinct resolution transition from spacecraft to aircraft. Aircraft coverage maps were a small subset of the total earth land area, but most US towns had some coverage at that time (~2012).

As you zoomed in, you could tell when Bing switched capture methods, because the aircraft cameras were clearly off-nadir and you'd see off-angle perspective differences.


They only use aircraft around larger urban areas. Satellite imagery is used for rural areas. You can see the resolution difference by seeing how far you can zoom in.

If you zoom in on a road near the pinned location for this post, and then go to, say, downtown Kansas City and zoom in, you'll see a pretty significant difference.


Flying a plane around the planet sounds a lot more expensive financially and thermodynamically than just beaming down images from an object constantly falling into the views you want to see.


You are only interested in urban areas, which are less than 1% of the total earth surface.


It's also the case that often there are more legal regulations around aerial imagery compared to satellite imagery.


I looked it up. There are some satellite images, but a lot of it is done by aircraft.


Or you could just strap a camera to a few commercial airliners and have full coverage.


You'd only have full coverage of the jet routes. They don't just fly willy nilly across the globe.


It's definitely a mix, depending on the area and the resolution. For urban places where they have 3D views and higher resolution, it's definitely planes. For a random place like this at low resolution, it's satellites.


One might be able to calibrate this by using cars in the area close to the shot [1], assuming most cars are driving at the speed limit.

[1] https://www.google.com/maps/place/38%C2%B059'37.1%22N+93%C2%...


The cars aren't flying at altitude though. They're farther from the satellite.


How low are these satellites? I wouldn't expect the altitude of a plane to be any significant proportion of the altitude of a satellite.


The satellites are probably greater than 400 km altitude, or so, while the plane is probably 10 km. The plane is in the same rough order of magnitude as really tall mountains, which the satellite is presumably designed to compensate for, so I agree with your general assessment.


The Google Maps image copyright is for Maxar, which means the collector is likely WorldView-3, which orbits at a nominal 614 km above the Earth.

https://earth.esa.int/eogateway/missions/worldview-3


The shadow is 750 m away and the sun is about 15 degrees off zenith, so the B2 altitude is roughly 3 km. A cousin comment says the satellite altitude is 614 km, so I wouldn't expect parallax effects to be dominant. This is also evidenced by the fact that both the plane and the ground are in focus, and the plane measures 70 feet long in Google Maps, which is what it is specified to be in reality.
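
The geometry, written out (using the rough numbers above):

  import math

  # Altitude from the horizontal plane-to-shadow offset and the sun's zenith angle.
  # offset = altitude * tan(zenith angle)  =>  altitude = offset / tan(zenith angle)
  shadow_offset_m = 750        # rough measurement from the thread
  sun_off_zenith_deg = 15      # rough estimate from the thread
  altitude_m = shadow_offset_m / math.tan(math.radians(sun_off_zenith_deg))
  print(f"altitude ~ {altitude_m:.0f} m")   # ~ 2800 m, i.e. roughly 3 km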


That bomber is also likely quite low, since it's based out of an air force base in that area.


Sure, but when the word "calibrate" comes up, you want to account for even the not-very-significant factors.


I would assume that the plane or satellite moving affects this too; it's presumably being corrected, but there might be some parallax.


You're absolutely correct, but as the aircraft is much closer to the ground than the imaging satellite I disregarded it. Spherical cows and all that :) I'm sure a military analyst would do the calculations!


It actually works out to be pretty close. A satellite flying ~17k mph at 250 miles altitude would largely have the same angular displacement from the ground as a plane moving 400 mph at 31,000 feet. Most satellites fly in an easterly direction, so the question would ultimately be what the inclination of the orbit is relative to the direction of flight.
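
A quick sanity check of that claim (same numbers, consistent units; the angular rate at nadir is just speed over altitude):

  # Angular rate as seen from the ground, nadir/small-angle approximation: v / h.
  sat_speed_mph, sat_alt_mi = 17000, 250
  plane_speed_mph, plane_alt_mi = 400, 31000 / 5280

  print(sat_speed_mph / sat_alt_mi)       # ~ 68 rad/hour
  print(plane_speed_mph / plane_alt_mi)   # ~ 68 rad/hour -- remarkably close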


I think this plane is flying much lower and slower. The B2s are based at Whiteman AFB, which is about 25 miles south of this photograph. The prevailing winds in this part of the country are from the south, so I suspect that it is about to turn onto the final leg of its approach for a landing to the south. I’d guess that it’s no more than 8-10K’ and traveling at 200-300 knots max.


Definitely. The rule of thumb in a jet is to slow down if you're closer than 3x the altitude (altitude in thousands of feet, distance in nautical miles). So at 8k feet you want to be at least 24 nautical miles (27 statute miles) away, or if closer you want to be slowing down (or descending). If you don't slow down beforehand, you're going to have to do so in the descent, and typical jets can't really slow down much while descending.


I asked google maps to get directions from there to Whiteman AFB. Unfortunately, flight directions were not available.


And you are now on a list...


I've been on a list for so so long now that this one is just another. No big deal.


Hey welcome to the list, I've been on it since Primary School believe it or not, but only found out a few years ago!


How did you find out?


Most imaging satellites are in sun-synchronous orbits, so in this case the satellite is likely moving north or south and slightly retrograde, i.e. toward the west.

I'd bet orbital motion is negligible here because the distortion seems to the eye to be entirely in the direction of the plane's apparent motion vector and I don't see any significant skew to that.


That's super interesting! I love this discussion. Would the satellite be that low though? 250 miles (400km) is still going to experience significant atmospheric drag. Fine for a cheap mass deployment like Starlink, but I'd expect an imaging satellite to be set up for a longer mission duration. Then again maybe the cost of additional mass for station-keeping is worth it for the imaging quality.


You can find a list of satellites here: https://www.ucsusa.org/resources/satellite-database

According to the image credit on Google Maps it was taken by Maxar. Maxar appears to have a few satellites: looks like 5 in geostationary orbit (~35,700 km), 3 at about 400 km, and 1 at about 500 km.



Looks like you nailed it! ctrl+f pushbroom in the thread if you haven’t seen it.


Well, no, the speed of the satellite is irrelevant there; the background is not getting distorted at all.


Isn't the speed of the sat irrelevant as it should correct for this anyways for all images it takes?


yeah, exactly


Interesting, I would have made the opposite assumption ("the satellite is moving so much faster than everything else, we can estimate the interval from the satellite's speed alone"). It seems both speeds might have a similar impact, as per the sibling comment.


Can you calculate the time of day from the shadow's length as well?

https://www.google.com/maps/place/39%C2%B001'18.5%22N+93%C2%...


Interesting. A search for those gradients could help find all fast moving objects in the satellite images.


A guy did it with trucks for Sentinel-2 data. https://www.esa.int/Applications/Observing_the_Earth/Monitor...


Is there a dataset with all the close-up satellite images somewhere?



What an awful picture of the NRO Director. Surely they have plenty of experience in photography in less than ideal conditions. So why have this photo as the first thing I can see on their website?


It’s so bad it seems unlikely it isn’t a joke. I mean look at their mission patches: https://en.wikipedia.org/wiki/List_of_NRO_launches


The juxtaposition of the Ursa Major and Onyx/Vega patches almost gave me whiplash. Judging by the list of patches it seems like NRO is a fun place to work. Thanks for sharing!


Fun is maybe not quite the word I'd use. The juxtaposition of all the wording about "defense" and menacing-looking characters that would definitely be the villain and not the hero in any comic book or movie is... unsettling. Even if it's a self-aware joke about their public image (like the squid giving the whole Earth the hug of death), it's maybe gone a bit too far?


The gorilla holding the US flag is my favourite. And it might just be me, but the Nemesis badge has to be a sphincter, right? It reminds me of the Greendale flag in Community.


I like how they quit looking like patches and just turned into stickers. Even with a black budget, they still had to cut corners and reduce spending on incidentals too?


Mine is definitely L-66 https://en.wikipedia.org/wiki/File:NROL-66_Patch.png

Not a huge fan of the vector art badges though, I want some embroidery!


Plot twist - that was taken from geosynchronous orbit.


It's 2021, give them a break. Corporate photos have been non-professional, taken-by-the-wife-in-the-bathroom style for 18 months now.


Now that’s an interesting idea. Anyone care to spin this up?


Is there a way to subscribe to comments? I'd love to see the result of this idea but am not savvy enough to spin it up myself very easily


Click the timestamp for the comment, then click "favorite", then set a reminder in your calendar to check your favorited comments (link in your profile).


Can Siri work with what's in your copy/paste cache?


What do you mean by “copy/paste cache”? If you mean the clipboard history, there’s no such thing on macOS. If you mean the clipboard (= the last thing you copied), Siri can work with the Shortcuts app, which itself can read the clipboard.


I mean like "create a reminder to check link from my clipboard tomorrow"


That's a super interesting idea. I would love to see something like this.


/subscribe


Shadow of this object is also distorted: https://goo.gl/maps/VWwtt1PbLJVkbiBS9


Thank you. I was wondering where that was. Now, can someone calculate the plane's altitude?


The shadow is about 2,500 feet away. Judging from nearby houses, the sun is probably about 15 degrees off vertical, toward the southeast.

  2500 ft * sin( 15 ) = 650 ft
Someone should double-check my geometry though.


I think you need to divide by tan(15) instead, which would give 9330 ft.


Wouldn't that be the 3D distance between the shadow and the plane?

https://i.imgur.com/HWA1iZu.png

Edit: nope, you're totally right. Brain fart. It's TOA, not SOH.


Doing it this way allows not only higher resolution, but you can also use different types of filters for specialist applications: infrared, for example, or some kind of narrow band-pass filters, etc.


Unless you aren’t military and then no infrared for you


Nonsense. Landsat 8 bands 5, 6, and 7 are infrared bands (5 is NIR, and 6 & 7 are shortwave or SWIR), and bands 10 and 11 are lower-resolution thermal infrared (TIRS).

Source: http://www.usgs.gov/faqs/what-are-band-designations-landsat-...


I literally have a 720 nm long-pass infrared filter in front of me for use in photography.

Infrared imaging is not "military", it's public, common, and legal.


I think what they meant is that prior to the "digital farming revolution", if you called up a satellite/aerial imagery company and requested anything other than visible spectrum, you'd not only get some strange looks - they'd also alert the government.


Oh? Planet offers infrared imaging to farmers.


I grabbed my red and blue glasses because this vaguely looked like a 3D anaglyph to me, and while it's not the clearest such image, it definitely appears as a cohesive object hovering above a flat map. The further I zoom out, the more it looks like it. Inadvertent byproduct of however this was photographed or stitched?


Is it possible this is a pushbroom sensor, where red, green, and blue sensors are moved along the path of the plane or satellite that's taking the image? My understanding is those have a higher effective resolution than frame-based sensors, which would make sense here.


Seems unlikely to me. With pushbroom, I'd think you'd get characteristic squishing, stretching, or skewing of objects in motion depending on whether the object was moving against, with, or perpendicular to the pushbroom path. In this case, with the imaging satellite probably in polar orbit and the plane flying east, you'd likely see the plane horizontally skewed as each horizontal row of pixels captured it further east.


Also came here to discuss this. Super interesting opportunity to reverse what they're doing. Yeah, I'd agree it's color filters, which would avoid duplicating lenses (probably the most expensive component) or dividing the light energy. Filters would give you the most photons per color per shot. I wonder if the filters are mechanical or electronic somehow. As another commenter calculated, it's ~10 ms between shots, which is not that impractical for moving physical filters around when you consider that modern consumer camera shutter speeds max out around 0.125 milliseconds.


This is what we do all the time for astrophotography! We buy monochrome cameras without an IR filter and take 3 photos and recombine them.


Your guess is exactly right, this type of imaging is pretty common with satellite imaging for the reasons you mentioned. Scott Manley talks about it in this video iirc https://youtu.be/Wah1DbFVFiY


I wonder if there are ways to get superior (compared to Bayer) color images from conventional raw-capable monochrome digital cameras (like Leica Q2 Monochrom) in static scenes using similar workflows of taking three separate shots.


Many cameras, especially medium format like Fuji GFX, can do "pixel shift" capture, where they use the sensor stabilization to shift the sensor by one pixel at a time to ensure red, green, and blue are each captured at all pixels, without needing demosaicing. The camera has to be mounted on a tripod for this to work - like any stacking requires, really.


Initially I thought pixel shift might be the way to go, but stumbling across [0] I got an impression it isn’t necessarily as great as it sounds (and, I guess, neither would be any three-shot RGB-filter-swapping composite).

[0] https://www.dearsusan.net/2020/11/09/nyquist-mtf-pixel-shift...


Absolutely! [0]

In fact, the first color photos in the world were taken this way in the early 1900s [1]: take three monochrome exposures onto black and white transparencies with R/G/B filters, and then project the three images together.

[0] https://leicarumors.com/2020/01/29/color-photography-with-th...

[1] https://en.m.wikipedia.org/wiki/Sergey_Prokudin-Gorsky


Interesting. Sadly, I believe the Photoshop approach would not produce a DNG file that you could normally interpret in a raw processor of your choice. Looks like pixel shift cameras (which apparently still use Bayer sensors, just moving them around, to debayer at capture time so to speak) are the most practical option at the moment.

I’m curious if it is technically possible for a digital sensor to capture the entire light spectrum of the scene, without the need for RGB separation at any stage, similar to a Lippmann plate [0].

[0] https://en.m.wikipedia.org/wiki/Lippmann_plate


There are Sigma cameras with a Foveon sensor that can do it:

https://en.wikipedia.org/wiki/Foveon_X3_sensor

Though they're not as good in most other dimensions as conventional Bayer filter sensors (most Amazon reviews e.g. say that it's impossible to photograph moving human subjects without daylight iirc).


I looked at Foveon before, and got the impression that it still effectively relies on RGB separation/recombination, though in a more elegant way than Bayer sensors. (That said, Sigma’s doing some really cool stuff.)


Sergey Prokudin-Gorsky's work shows both the benefits and drawbacks of the filter-swapping approach.


[1909]


It's standard when using reflective/catadioptric objectives to peek back at where the satellites are: https://www.astroshop.eu/filter-wheels-filter-sliders/starli... This one has USB; the same style with manual operation via a toothed wheel is like 169 EUR at that shop. Or consider rigging up a burnt-out DLP projector's filter wheel, which comes with a motor and encoder that you could use to capture imagery with the filters they provide at much higher frame rates (a few hundred fps in the monochrome domain seems usual these days, AFAIK, for good consumer DLP projectors).


Love both ideas…


The first imaging driver that I wrote was for an HD frame store that had a camera with a spinning filter disk.


That sounds very interesting. I feel like firmware/hardware automation combo that swaps RGB filters and takes exposures in accord could provide interesting workflows for owners of those monochrome digital cameras.


I wouldn't be surprised if these military bombing dudes not only took R, G, B, but also near IR, far IR, UV, and various other things. If they did it would make no sense to try to make a Bayer filter for all of that and rather just have multiple filtered cameras pointed at the same direction.


Different wavelengths travel at different speeds through glass and air and focus differently, even through the same lens, when color photos are taken in a single frame.

https://en.m.wikipedia.org/wiki/Chromatic_aberration


True, but if that were the case here, it would also likely affect the ground.


You can use this difference in time to calculate the speed (although you also have to account for parallax for an aircraft)

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6651096/


There are plenty of RGB planes on Google Maps: https://duckduckgo.com/?q=rgb+plane+on+google+maps+!gi


Actually, the camera (assuming geostationary orbits are not used for imaging) is moving at speeds much, much faster than the plane.


That's a lot of words to rationalize away what is very clearly and obviously a cloaking device at work!!


Fun fact: this is called a Harris Shutter effect when it's done for art purposes.


First RED, then BLUE, and last but not least GREEN!



