If you do want something that can look inside a building and determine the positions of people and objects, look up WifiSLAM [0], which was acquired by Apple. It can use WiFi signals to map out the inside of a building and determine the location of objects inside. About a decade later, thanks to neural networks and other techniques, researchers can use WiFi to extract pose information: they can create a rough skeletal model of a person that moves as you move [1]. The 2020 enhancements are a decent upgrade [2].
Setting the obvious privacy implications aside, I'm surprised how good the animations look. I wonder if this could be developed further to enable cheap motion-capture solutions for film/game studios. If it works in real time then, coupled with a game engine, it could give you a direct in-camera look at what a CGI-heavy scene might look like, without requiring (too) fancy hardware and without scattering distracting jumpsuits, reflective tennis balls, and dot makeup all over the place.
The Mandalorian production comes to mind; they've been pushing hard in this direction.
"... It helps to think of it just as your brain’s interpretation of a two dimensional representation of the coherent sum of backscatter responses from electromagnetic waves..."
Well that clears it right up, thanks! Lol.
I'm pleased that they think a member of the general public, or even a journalist, is capable of seeing the difference between the left and right figures in the article.
I find it incredibly endearing. Obviously it's not ideal in terms of public communication, but there's something wonderfully earnest about the way a lot of domain experts just kind of forget that most people don't share their obsession, let alone their knowledge.
To be fair, he did point to another article on the company website for more details [1].
Since, according to Einstein, "everything should be made as simple as possible, but no simpler", I think he and his colleagues did pretty well, if we refer to the reduced-complexity equation provided here [2]. To be honest, this simplified coherence-estimation equation in words is literally the most complicated equation-in-words I have ever seen (page 34 in [2]). Imagine the detailed elaboration of the underlying mathematics, and bear in mind that this is for the more sophisticated multi-static radar:
Coherence Estimation = γ_SNR (radar equation system noise) × γ_Quant (quantization noise) × γ_Amb (ambiguities) × γ_Geo (baseline decorrelation) × γ_Az (Doppler decorrelation) × γ_Vol (volume decorrelation) × γ_Temp (temporal decorrelation) × γ_Proc (processing & coregistration errors)
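As a back-of-the-envelope illustration (my own sketch with made-up factor values, not numbers from the paper), the total coherence is just the product of those terms, so a handful of individually mild decorrelation sources compounds quickly:

    # Hypothetical decorrelation factors (illustrative values only).
    # Each gamma lies in [0, 1]; 1.0 means no loss from that source.
    factors = {
        "SNR":   0.95,  # radar equation system noise
        "Quant": 0.99,  # quantization noise
        "Amb":   0.98,  # ambiguities
        "Geo":   0.93,  # baseline decorrelation
        "Az":    0.97,  # Doppler decorrelation
        "Vol":   0.90,  # volume decorrelation
        "Temp":  0.85,  # temporal decorrelation
        "Proc":  0.98,  # processing & coregistration errors
    }

    total = 1.0
    for gamma in factors.values():
        total *= gamma

    print(f"total coherence ~ {total:.2f}")  # ~0.62: mild losses compound fast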
Well then, give me a real reason why it should not be possible. (Basically, all they say is why the appearance of penetrating buildings cannot be interpreted so easily.)
Yes, they claim the 'laws of physics' prevent that. They should specify which law, exactly. Maybe the SNR is too low? I know for a fact you can use radar to see through walls.
Radar is a technique, not a specific thing. Their radar operates in the X-band (9.5 GHz), which is known not to penetrate much. In fact, while C-band SAR (Sentinel-1) can partially penetrate forest canopies and the top few cm of soil, X-band waves can hardly do this and basically just provide surface scattering.
Well, that's splitting hairs a bit, isn't it? In that sense radio would be a technique, not a thing (there is long wave, short wave, VHF, etc.), or PCs (there is the 8086, the 80386, ...): with some you can run intensive algorithms in a given time, with others you can't. Or what about cars?
Synthetic Aperture Radar. What you're really seeing here is what happens when you try to put together a 3D image from an angled line scan from a moving scanner. There isn't enough information to do this perfectly, so you get artifacts on tall objects. You can see somewhat similar artifacts in Google Earth when looking down on tall objects for which they don't have full elevation data, like trees.
What's impressive is that they're getting 50cm resolution from orbit. Doing this from aircraft is nothing new; that's been going on for decades. But 50cm from a satellite? That's an achievement.
In some sense achieving high resolution imagery from orbit is simpler in that SAR requires motion compensation on the order of a wavelength. Planes are subject to all sorts of motion like turbulence whereas a satellite’s motion is often far more predictable. Closing the link budget can certainly be more challenging given longer ranges and size, weight, power, and cooling constraints for satellites though.
Reading that long article, where the writer didn't realize they should define the term, reminds me of reading in grad school. Please define the main term in your articles, people! Geeze.
Consider a simple radar as capable of locating objects in 3D spherical coordinates- range, azimuth, and elevation. Radar is typically really good at measuring distance, but because of the physics of wavelengths and antennas, it needs a relatively huge aperture to achieve good angular resolution.
The SAR technique creates a virtual or synthetic antenna by imaging coherently over a path that traces out the synthetic aperture.
Of course, time passes while the platform (a satellite in this case) is moving. SAR therefore requires precise spatial/temporal awareness to combine the returns from the ground into a coherent image.
It is similar to a long exposure in photography letting you see dim, faraway objects.
In both cases, if objects in the imaged scene are moving during the scanning period, they become blurred in the final result.
Radar is good at ranging. So you can locate things precisely in that dimension. It’s not great at cross-range though because a beam is typically wide. SAR gets around this by essentially using the flight path or orbit of a radar sensor to synthesize a longer antenna than you’ve got. You make some assumptions that can be violated to varying degrees that may degrade the image (you need coherence over the synthetic aperture, you assume things you’re imaging don’t move, etc.)
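To put rough numbers on why the synthetic aperture matters, here's a back-of-the-envelope sketch in Python (the wavelength, slant range, and antenna length are my assumptions, not Capella's published specs):

    # Cross-range (azimuth) resolution, real vs. synthetic aperture.
    wavelength = 0.031  # m, X-band (~9.6 GHz)
    R = 525e3           # m, assumed slant range from low orbit
    D = 3.5             # m, assumed physical antenna length

    # A real aperture's beamwidth is ~ lambda / D, so cross-range
    # resolution at range R is ~ R * lambda / D.
    print(R * wavelength / D)  # ~4650 m: useless for imaging

    # The beam footprint itself bounds the synthetic aperture length,
    # L ~ R * lambda / D, and resolution ~ R * lambda / (2 * L)
    # collapses to the classic strip-map limit of D / 2.
    L = R * wavelength / D
    print(R * wavelength / (2 * L))  # 1.75 m, i.e. D / 2, independent of range

Getting well below D/2, like the 50 cm figure discussed here, requires dwelling on the scene longer than strip-map geometry allows (spotlight mode), but the scaling above is the core idea.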
I was about to write this same thing. It is amazing to me how scientists/experts sometimes seem utterly incapable of leveling with an audience. I get that it's tough if there is a lot of ground to cover, but the quote you pulled is so telling. Those who are now convinced that the NSA is looking at them from space when they shower will probably gain absolutely nothing from this release.
Why does it seem that press releases and public statements like this tend to overstate things, to the point of fallacy, in order to make a definitive point?
If you took a SAR image of my home you would absolutely see how many vehicles are in my garage.
I don't think you are interpreting these images correctly (but the screenshots do not indicate the viewing direction, so it's not your fault). The objects that you see "through the roof" may be in front of the building, not inside it. Radar imaging is based on the return time of a chirp to the antenna. If two objects appear in the same pixel of the image, it means they are at the same distance from the antenna. For example, if the sensor is at a (common) elevation of 45 degrees, a 10 m vertical wall will seem to cover the 10 m of ground next to it, and any object there will be superimposed with the roof of the building.
EDIT: I have some professional experience with the interpretation of satellite SAR imagery.
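The parent's 45-degree example is easy to formalize. A minimal sketch (standard layover geometry, nothing specific to this sensor):

    import math

    def layover_shift(height_m, incidence_deg):
        # A point at height h has a shorter slant range than the ground
        # below it, so after the flat-ground projection it lands
        # h / tan(incidence) closer to the radar (incidence from vertical).
        return height_m / math.tan(math.radians(incidence_deg))

    print(layover_shift(10, 45))   # ~10 m: the wall top covers the ground next to it
    print(layover_shift(200, 45))  # ~200 m: a skyscraper roof lands far from its base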
Looking at the article, these images, and your description of what's happening, I get the impression that the imaging radar has no angular resolution; the only thing it captures is the time of flight of the radar signal from chirp to return, and this data is a very thin beam, 50 cm wide in this case.
So when the data is visualised, the perspective in the SAR image is actually 90° offset from the imaging direction. This is done because the collected data only contains distance, rather than distance and angle (which you would get with LiDAR).
So places where buildings look transparent are caused by the fact that the visualisation perspective is different from the imaging perspective. An image that appears to have been taken with a camera south of a target was actually imaged from the north (this is a simplification). So you will always get overlapping data, because the imaging and apparent visualisation perspectives differ. And the reason for not aligning the perspectives is that there isn't actually enough data to do so (no return or transmission angle is collected alongside the distance data).
It's important to note that I say "apparent" perspective. The imaging perspective doesn't actually change; I think the perceived perspective comes from our brains recognising shapes and computing a perspective angle. But this angle is actually incorrect, because the radar imaging process is completely different from how our eyes work.
For those still confused about this: try thinking about how the shadows are being cast. They're not from the sun; the only thing emitting radar signals to "illuminate" objects is the satellite sending out the radar chirps, which is also doing the imaging.
> So when the data is visualised, the perspective in the SAR image is actually 90°
True. The extreme (and useless) case would be when the satellite is exactly in the vertical direction; then the whole image would collapse into a single point. The closer to the horizon, the better the resolution (but then there are problems with occluding objects). As you say, there is a sort of "equivalence" between an optical image taken at an angle α from the vertical and a radar image taken at an angle α − 90°.
> I get the impression that the imaging radar has no angular resolution,
True. All the antenna can do is send a spherical wave and receive replicas of it. Conceptually, there is zero angular resolution. In practice the beam is somewhat directed (so as not to waste energy), but the high resolution you see does not come from that.
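That nadir-to-horizon tradeoff is just the slant-to-ground projection. A quick sketch (standard SAR geometry, not anything specific to this system):

    import math

    def ground_range_resolution(slant_res_m, incidence_deg):
        # incidence from vertical: 0 = straight down, 90 = at the horizon
        return slant_res_m / math.sin(math.radians(incidence_deg))

    for deg in (5, 30, 60, 85):
        print(deg, round(ground_range_resolution(0.5, deg), 2))
    # 5  -> 5.74 m: nearly straight down, the image "collapses"
    # 85 -> 0.5 m: grazing geometry keeps nearly the full slant resolution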
That's the same conclusion I'm coming to. In the golf course image I think the imager is actually where the sun appears to be based on shadows and perspective. The 'evidence' I'm looking at that the imager can see through the roof may just be in open view of the imager and we're just seeing retroreflections from the corners of the window.
So I guess they take a lateral sweep of the image and let the return fill in the other axis of the image?
> So I guess they take a lateral sweep of the image and let the return fill in the other axis of the image?
Exactly. Notice also that there is no "sun", these images may be taken at night or through the clouds, as far as you know. The "shadows" that you see are parts inaccessible from the radar beam itself.
(Sorry to drag this on, really appreciate your input)
Yep, it's starting to click. My guess would be that this (or the 180 degree opposite) is the flight path of the imager in the golf photo, basically opposite the perceived perspective of the photo:
So there isn't a sun, but it kind of looks like it because of how the signals accumulate or get shadowed.
This would also explain why we could see artifacts from the curved glass that appear to be 'through' the roof. The imager actually has direct line of sight to them from that flight path, and it's the layover effect from the roof/soffit that makes it appear that we are seeing through the roof.
> Yep, it's starting to click. My guess would be that this (or the 180 degree opposite) is the flight path of the imager in the golf photo, basically opposite the perceived perspective of the photo
Yes! Typically SAR images are annotated with the direction of flight; otherwise they are very difficult to interpret (unless there are trees, like here).
You are right that, even if the image is acquired from the "upward" direction, it looks like a regular image taken from "below". But this correspondence is not exact when there are occlusions (typically the vertical walls of buildings). The side of the building that is visible is the one closest to the antenna, not the one closest to your imaginary point of view. Thus the building looks as if it were transparent, because its wall is superimposed on the ground next to it.
They should open up a rate-limited retail API that lets us verify for ourselves what is visible in our life and what is not. Otherwise we have to outsource our thinking to others who may have a conflict of interest.
Many high-value commercial applications of ultra-high-resolution SAR only involve small areas of land and irregular revisit periods.
Ultimately satellite providers don't have any more obligation to subsidise access to their services to people wishing to verify for themselves that the product doesn't contradict the laws of physics than 5G providers or microchip providers.
Complex systems produce behavior that cannot be derived from first principles, which is why it is important to study the actual behavior of those systems. I studied E&M for four semesters at college, and it is far from clear to me that "the laws of physics" foreclose the possibility of this system unearthing new personal information about my patterns of life.
You might want to question your own confidence once in a while. Unless you independently predicted Rowhammer, Spectre, and Meltdown.
I have the local knowledge to tell you that there aren't parked vehicles in or near that part of the building. You can even see in aerial imagery that it's a lawn with walking paths and a prefab outbuilding. I think this SAR image may be from before the outbuilding was there; historical imagery shows that it used to just be an extension of the lawn. I think the bright objects are a distorted reflection from elements of the building facade, perhaps. I don't know; I'm squinting at the SAR image, having a hard time deciding whether it depicts the outbuilding or not, and it perhaps says something about the limitations of SAR images that it's not clear to the untrained eye whether or not an entire building is there. I think it's possible the building is there and is just made of materials with very low reflectivity; there are sort of "ghost lines" that seem to be in the right place for its edges.
Now take a look at this map image, and go back and forth to see if you feel that the X I marked is a reasonable approximation of the photographer's location.
Now take a look again at the SAR image and tell me if you can see any artifacts from the front of the building through the roof (hint, zoom in and look for the curved line of bright dots :) )
But now that I have a better understanding of the layover effect I think it might just be that the satellite is looking from where it looks like the sun is shining, and it's directly imaging that front window frame and brick wall. Need our SAR pro to comment :)
Interestingly it does seem to see right through the awning that's to the right of the roof.
There appear to be a couple of tables that are hidden from Google's satellite view but clearly visible to the bottom right of the building in the Ka-band SAR image.
That's odd, but you can see in the SAR photo that there isn't even an outline of the awning, nor the golf carts under it, nor the columns that hold the awning up, but there are trees which aren't there in the photos from the ground.
While I can't find the image online earlier than 2010, there's an IEEE research paper titled "TanDEM-X for High-Resolution SAR Interferometry" from 2007, so it certainly could predate the awning.
> Why does it seem that press releases and public statements like this tend to overstate things, to the point of fallacy, in order to make a definitive point?
While in my mind things were better with the Fairness Doctrine and Equal-Time Rule restrictions, I think we could solve this in the US with greater tax funding of federal, state, and local government media, where the government has no control over that media aside from regulation to promote fair and equal representation within it.
Basically, put PBS, state, and local programming into a streamed source similar to Netflix/Hulu/Prime, produce content with quality surpassing the other streaming services, and give the public airtime both via meritocratic decisions by a randomly selected electorate and via lottery and availability. The BBC might be a good model to emulate, too.
At that point, there would at least be an alternative to B.S. journalism and bad media behavior reinforced by viewership and advertising dollars.
> If you took a SAR image of my home you would absolutely see how many vehicles are in my garage.
Unless your garage was made of materials that are transparent to the frequency band of the radar or the collection geometry enabled the radar to image through a door or large windows, this is not true.
Look into the physics of how this technology works. If you do that, and take the time to grok it, you'll have the information to understand that this type of radar cannot penetrate any of the materials that buildings are made of.
SAR is a technique for building up a two-dimensional image by rasterizing one-dimensional signal-reflection data. Any type of radiant energy would presumably support SAR techniques, from sonar to X-rays, although the 'R' in SAR presumably limits it to the radio-frequency range (~30 Hz to 300 GHz). Some of those frequencies, as I mentioned originally, absolutely would see through buildings.
My point is that the headline and article are trying to make the case that 'SAR can't see through buildings', when it should be 'Why it looks like our satellites are seeing through buildings' because that is all that is explained.
Yes. And actually I wouldn't be surprised if their satellites can see through buildings (because the frequency band they use should allow it). Maybe they just need to adjust the color scale (or switch to logarithmic, if it isn't already). I mean, 10 GHz SAR is capable: https://www.kurzweilai.net/seeing-through-walls-in-real-time
(Of course the SNR might be too low from a satellite, but that is nowhere stated.)
There's that old video of a police helicopter tracking someone who shined (shone?) ^W^W pointed a green laser pointer at them, even through buildings. Was that SAR?
Exactly. "SAR" is a very broad term encompassing various ways to make an antenna seem larger than it is. That's about all you can say about it, generically speaking.
Impulse radar with range gating substitutes time resolution for spatial resolution, and has been used to 'look' through walls with varying degrees of success. I think you can even buy a studfinder that uses similar principles.
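The time-for-space substitution is just the two-way speed of light. A minimal sketch (the delays are assumed values, only to show the timing scales involved):

    C = 299_792_458.0  # speed of light, m/s

    def echo_range(delay_s):
        # Round-trip delay to one-way range: the pulse goes out and back.
        return C * delay_s / 2

    print(echo_range(67e-9))   # ~10 m: gate out everything past the far wall
    print(echo_range(0.2e-9))  # ~0.03 m: 3 cm range cells need ~0.2 ns timing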
Also, when you catch yourself writing things like this:
One of our radar scientists accurately
described the phenomenon (to reporters,
presumably): It helps to think of it
just as your brain’s interpretation of
a two dimensional representation of the
coherent sum of backscatter responses
from electromagnetic waves.
.... it's probably time to hire some marketing folks.
It helps to think of a monad as just a monoid in the category of endofunctors, with product × replaced by composition of endofunctors and unit set by the identity endofunctor
That has a range of 20 m and produces an image of an amorphous blob out of a stationary target. They also don't give the frequency or the power used. Doing this with a 100 kg satellite from 525 km away would be slightly more difficult.
I don’t understand this at all. Is there a side-by-side comparison of radar and visible wavelength somewhere? Or maybe a version of the geometric diagrams with example objects and the imagery they’d produce?
EDIT: Ok, after Googling this a little I think I get it. The skyscrapers are upside-down. (I think?) The radar is measuring slant-range distance, and due to the viewing angle, the tops of the skyscrapers are closer to the radar than the bases. So the tops of the buildings are closer to the bottom of the image.
Since the skyscrapers are in the “wrong” place in the image, they get blended with ground features that are in the “right” place.
You have the right idea. It can also help to think of it as a projection problem.
The image is displayed as if it were taken from a camera directly above the ground, but the radar is actually located somewhere else. To create the image, an assumption is made that everything is located at the same height. So when something like a skyscraper actually rises hundreds of meters above the ground, the radar detections from that object are projected to the wrong location in the image. The term to look up is "layover" (the extreme case of foreshortening).
This is similar to (but not exactly) what happens when aerial photography is used to make maps. Buildings in Google Maps get flattened, and the roofs are translated to someplace other than where they should be.
What also helps is to look for "shadows" in the SAR imagery. The shadow tells you where the radar is located. From that you can intuit what side of objects the radar is actually seeing.
A great example used in textbooks is this image of the Washington Monument. You can tell from the shadow that the radar is located to the north. So even though it looks like you're seeing the south side of the monument, the image is actually showing the north side. The north face of the monument has been projected onto the ground in the direction of the radar.
http://image.slidesharecdn.com/radar-2009-a18-synthetic-aper...
So looking at the Tokyo image again. The tree shadows are south of the trees, so the radar is located to the north. The tops of the skyscrapers are closer to the radar than the bottoms, so the tops of the skyscrapers will be shifted to the north towards the radar. However, because the radar is located to the north, what we see is actually the north side of the skyscrapers. The false perspective makes it seem like we should be seeing the south side of the skyscrapers, but that's an illusion.
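The shadow/layover duality in that monument image can be made concrete with a tiny sketch (the 45-degree incidence is my assumption; the monument's ~169 m height is the only real number here):

    import math

    def shadow_length(height_m, incidence_deg):
        # Radar shadow on flat ground stretches AWAY from the sensor.
        return height_m * math.tan(math.radians(incidence_deg))

    def layover_shift(height_m, incidence_deg):
        # The object's top is displaced TOWARD the sensor.
        return height_m / math.tan(math.radians(incidence_deg))

    h = 169.0  # Washington Monument, metres
    print(shadow_length(h, 45))  # ~169 m of shadow south of a north-looking radar
    print(layover_shift(h, 45))  # ~169 m of layover north, toward the radar

Shadow one way, layover the other: that's why the shadow direction immediately tells you which face of the object you're actually seeing.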
You can build your own SAR system pretty easily, but of course the really cool stuff results from being able to fly a plane or, in this case, have some satellites. The PDF for Experiment 3 here https://ocw.mit.edu/resources/res-ll-003-build-a-small-radar... shows an example of using a weak 2.4 GHz system on the ground, taking snapshots every two inches for 10 feet or so, facing down a road towards a warehouse. A visualization is overlaid on a satellite image to show that the system can at least detect front surfaces. The professor has also built other hobbyist high-frequency systems, including demonstrating through-wall capabilities for some of them. This PDF (others are on his site) has some good examples of what the environment looks like with a camera, and what you get when you process some SAR data with MATLAB into a 2D image: http://glcharvat.com/website%20pdfs/Charvat_MIT_Haystack_DIY...
That's a much better explanation than the OP article.
It's like an isometric view of a video game. (But not exactly, since the angles are different)
A tall thing and a lower thing near it are projected onto the same spot. (Usually in a video game the tall thing blocks the thing behind it, instead of blending with it.)
Blending the two makes the front-facing thing look transparent, when in reality you are seeing around one object (since the observation angle is not the same as the imaginary viewing angle of the image) and then drawing both objects in the same pixel.
Does anyone have a white paper about corner reflectors and X-band SAR?
For what it's worth, the state very likely needs a warrant to use technologies capable of looking inside structures [1]. Not that the law applies to them, but hypothetically.
Dunno, but if you look at some of the image-gallery photos here, there's a very bright specular reflection from a number of vehicles. Police radar is right in this frequency range, so it would make sense if they have a retroreflective effect built into the paint on the plate.
Do I believe that? Not really. The frequency is around 9.7 GHz, which is at the upper limit for ground- and wall-penetrating radar systems. Of course you cannot look through metal or conductive structures, but why not thin plastic tents or thin walls? For this we need to know the sensor's sensitivity. Of course the imaging gets a bit messier, since the imaging algorithm is based on the assumption that you only get one entry reflection (so basically it is 2D, or let's call it 2.5D, imaging). For full 3D you would also need to scan/move orthogonally to the first satellite direction.
There is a lot of marketing bullshit in the Capella articles.
For example, the Singapore image of buildings and other SAR detections is clearly overlaid on an optical image with trees and bushes and their respective shadows. Trees and bushes are invisible to X-band (10 GHz) SAR. It has to be an optical image underlying the radar data.
Look at it. The detections by SAR are bright white. Most of the image is grey-scale showing background items.
It is not advertised as such. As an interpretable image, perhaps this is an improvement over notoriously hard-to-understand monochrome SAR imagery. BUT! It is not described as such. Hence my subjective evaluation as "bullshit".
I know better than to worry about seeing through buildings. What I want to do is to explore the data, and see how the land rises and falls with the tides, the rain levels, temperature, etc.
There have to be all sorts of new and beautiful images and movies to be made from this data.
I look forward to getting my hands on it eventually.
Does anyone have any links to full-resolution examples from Capella?
All of the ones I have seen so far appear to be resized for use in their blog/press releases ... or are those images as high-res and as sharp as they get?
Wow, that was a terrible explanation. As far as I can figure out, it is only 1D SAR: the X axis is spatial, but the Y axis is actually time. If you fire radio waves at the ground at an angle, and the ground is flat, then you can easily map time to position. But buildings and hills mess up that mapping. I always assumed SAR used 2D arrays.
Again, really terrible explanation. It feels like they really didn't want to explain it at all.
Anecdote you may or may not find relevant: SAR maps are used regularly by fighter aircraft (And have been for decades) to generate high-precision coordinates (Absolute or relative) that can be used to target weapons. They do this from relatively far away. Most 4th gen+ fighter radars are capable of doing this in hardware and software.
The layover effect at the center of this article is due to the map being of skyscrapers.
I've heard of C-band SAR and X-band SAR. Is X-band higher resolution? I understand X-band sees through fewer things, but is it fundamentally better resolution or something? If not, wouldn't a longer wavelength be better for seeing through and resolving objects?
Besides seeing through clouds and at night, you can often correlate images taken at different times and detect tiny movements (of fractions of a millimeter). This technique is called "interferometry". You can observe minute movements of large swaths of terrain; this is impossible with optical images.
As a relevant commercial example, you can see patterns of subsidence which indicate a sinkhole is about to form near your mine. You're probably only looking at a relatively small area but it's unlikely to be evident from optical images.
As an indication of how sensitive it is, you can use SAR interferometry to see where Crossrail tunnels have been bored under London, even though the elevation change of the buildings and roads on top is too small to have had any impact on their structural integrity. The caveat is that the level of sensitivity you get when looking at smooth, undisturbed surfaces like the roofs of buildings isn't matched when you're looking at fields with plant growth and soil movement.
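The fractional-millimetre sensitivity is plausible because displacement shows up as interferometric phase, and phase can be measured to a small fraction of a cycle. A sketch with an assumed X-band wavelength:

    import math

    WAVELENGTH = 0.031  # m, X-band (~9.6 GHz); an assumption for illustration

    def los_displacement(phase_rad):
        # Moving d toward the sensor changes the round-trip path by 2d,
        # i.e. a phase shift of 4 * pi * d / lambda.
        return phase_rad * WAVELENGTH / (4 * math.pi)

    # Even a 10-degree phase shift, easily measurable against a stable
    # scatterer, corresponds to sub-millimetre line-of-sight motion:
    print(los_displacement(math.radians(10)))  # ~0.00043 m, i.e. ~0.43 mm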
> looking at smooth, undisturbed surfaces like the roofs of buildings
A flat roof (with texture finer than the wavelength) will act like a mirror, reflecting the beam away from the sensor, and thus appear invisible. What you see are precisely the tiny "corner" reflectors formed by large enough gravel, rocks, and building structures. In a mountainous area, as long as you have some rocks and gravel here and there, you are still able to find correlations, and thus interference. Of course, if all you have is wet mud and vegetation there's nothing to do, but that is not often the case. In a region of special interest, like near a mine, people often install ad hoc corner reflectors at calibrated positions.
It can see through buildings, it can do some pretty precise depth estimates,...
It's not a replacement for visual range (or infrared, for that matter) but an additional sensor. It's usually used to create things like elevation maps. So all those altitude measurements you see in Google Earth might be at least partially SAR-derived.
While it is true that some implementations of SAR can image through walls at short distances (tens of meters), SAR is a very broad term and those systems are very different from the imaging systems in the article.
The blow-up picture doesn’t make sense either - hardly any difference in zoom and the little circle contains only a small part of the image in the big circle.
Do they understand how these diagrams are supposed to work?
It will be scary the day they can track every human movement at all times, but I think it is coming. On one hand, I think we would finally know what happened to Michael Dunahee, a child who went missing when I was a kid, by being able to rewind the video of what happened. But it also wasn't that long ago that Canada punished a man for being gay, so this is a very dangerous technology if it can track humans. Privacy is critical to a free society.
Sure, a phone can do a lot of that, but someone who plans on burying a body may consider the tracking and leave it at home. If a satellite were tracking everyone, at all times, even in heavy weather, and all anyone had to do was rewind the footage, there wouldn't be a lot of crimes we couldn't solve. At first it would probably be used only to catch high-profile killers, because they won't want it known that they can track anyone; such targets won't get much of a trial anyway. Then other countries will use the same technology to track down dissidents. After a while, once many countries are using it and being tracked becomes normal and accepted, everyday crimes can be prosecuted: no more rolling stops at stop signs; come to a full stop or an automated ticket gets sent to you. Petty crimes like vandalism will be tracked down and people arrested. Of course, then there will be private entities that launch their own satellites and begin tracking people. Want to know where your spouse goes? No problem, for a monthly fee. Track your teen? Sure. Safety monitoring of old grandma would be a nice one; now she won't wander. Is that customer driving in from a nice neighborhood? Well then, I'll push the penthouse suite and extra amenities, since they can afford it. Those are just a few of the ideas I thought of.
[0] https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=wifi...
[1] http://rfpose.csail.mit.edu/
[2] https://cse.buffalo.edu/~lusu/papers/MobiCom2020.pdf