Judging by the video they achieve 83% success rate with a pool of 8 people in what looks like perfect conditions (only one person, no perturbation of any kind etc...). The video also states, rather suspiciously IMO, that "This scene on the other side is shown solely for presentation and was not used for identification". Why not show the actual footage of the experiment?
>The lab has tested their new technology on 1,488 WiFi-video pairs, drawn from a pool of eight people, and in three different behind-wall areas, and achieved an overall accuracy of 84% in correctly identifying the person behind the wall.
What does it mean exactly? They only need one short video and one short wifi capture to get 84% success rate? That's what the video seems to imply but I find that very hard to believe. Or maybe it's just because it's fairly easy to distinguish among 8 people (especially if they have significantly different body types) and it won't work quite as well at large. I can identify my girlfriend's footsteps in the staircase with remarkable accuracy but I can guarantee you that it won't scale to the general population.
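To put a rough number on that scaling worry, here's a back-of-envelope model of my own (not from the paper): assume closed-set identification among N people succeeds only if the true person beats each of the other N-1 candidates independently, each with some pairwise accuracy p. Then overall accuracy is roughly p^(N-1), and you can back out what 84% among 8 people would imply for a bigger pool:

```python
# Toy model (my assumption, not the authors'): identifying one person out of
# a pool of n means winning n-1 independent pairwise comparisons, each with
# pairwise accuracy p, so overall accuracy ~ p ** (n - 1).

def pairwise_from_overall(overall: float, n: int) -> float:
    """Back out the implied pairwise accuracy from an overall rate."""
    return overall ** (1.0 / (n - 1))

def overall_from_pairwise(p: float, n: int) -> float:
    """Predict overall closed-set accuracy for a pool of n people."""
    return p ** (n - 1)

p = pairwise_from_overall(0.84, 8)
print(round(p, 3))                               # ~0.975 per pairwise comparison
print(round(overall_from_pairwise(p, 100), 3))   # ~0.085 for a pool of 100
```

Under that (very crude) independence assumption, 84% on 8 people is consistent with ~97.5% pairwise discrimination, which would collapse to under 10% on a pool of 100. It's only a sketch, but it makes the "won't scale" intuition concrete.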
Maybe it works better than I give it credit for but they need to bring up better evidence IMO.
>What does it mean exactly? They only need one short video and one short wifi capture to get 84% success rate?
The key aspect here is that it works by matching up a person's style of walking (gait), so the person needs to be moving not only in the wifi data capture, but also in the video they are matching against.
They use the video to extract a 3D mesh model of that person walking. Then they model a wifi signal upon that mesh and produce a signature for that person, which they can match up against what the wifi receiver is actually picking up.
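The matching step can be sketched in a few lines. Everything here is a hypothetical stand-in: in the real system the per-person template comes from simulating the WiFi channel over the video-derived mesh, whereas below each person is just a sinusoid with a made-up gait cadence, and the "observed" capture is the true person's template plus noise:

```python
import math
import random
import statistics

random.seed(0)

def template(cadence_hz, n=256, dt=0.05):
    """Stand-in for a simulated per-person WiFi gait signature."""
    return [math.sin(2 * math.pi * cadence_hz * i * dt) for i in range(n)]

def pearson(a, b):
    """Pearson correlation between two equal-length signals."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

people = {"A": 0.9, "B": 1.1, "C": 1.4}  # gait cadence in Hz (made-up values)
templates = {name: template(c) for name, c in people.items()}

# "Observed" WiFi capture: person B walking, plus measurement noise.
observed = [s + random.gauss(0, 0.3) for s in templates["B"]]

# Closed-set identification: pick the template that correlates best.
best = max(templates, key=lambda name: pearson(observed, templates[name]))
print(best)
```

With the fixed seed this recovers "B"; the point is only that the final step is a best-match search over simulated signatures, not image reconstruction.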
Whilst measuring somebody's way of walking has been done many times before, this adds another way of doing it, but without the need for a camera feed at the scene.
So with this approach - no they can't produce a video image of the scene behind the wall from the wifi signal(s).
Remember - individuals have many unique traits, many of which we haven't tapped into yet. As far as walking styles go, gait recognition has been done for years; this just creates another way of measuring it and matching it up with other forms of data, video in this case. Other things that could be exploited this way include sound - a contact mic array would let you (through a wall) measure a person's heartbeat to some degree and match that up to an individual. Privacy-wise, this may well gain traction compared to other forms of surveillance.
> "This scene on the other side is shown solely for presentation and was not used for identification". Why not show the actual footage of the experiment?
My understanding is that the video on the left (the one showing the other side of the wall) is just there to illustrate what the wifi setup senses, and it was not used for identification, since the whole point is to identify without (optically) seeing what's on the other side.
> For instance, consider a scenario in which law enforcement has a video footage of a robbery. They suspect that the robber is hiding inside a house. Can a pair of WiFi transceivers outside the house determine if the person inside the house is the same as the one in the robbery video? Questions such as this have motivated this new technology.
Goodbye 4th Amendment.
Would be cool if these intelligent folks put their efforts towards improving privacy.
I don’t see any reason the existence of spy tech would damage the 4th amendment. We already have spy tech (way better spy tech than this, btw), and the amendment still works to prevent (at least some) unreasonable search & seizure.
This is a neat application of WiFi signals, but the accuracy is nowhere near good enough for this to reliably ID random people.
I’m way more worried about online corporate surveillance that actually can ID me reliably with 100% accuracy and get a lot more info about me besides where I am, than I am of government surveillance of my walking gait through walls.
Not to mention the posited scenario has nothing to do with the 4th amendment. If you reasonably believe a suspect is hiding in a house such that surveillance is a decent approach, you could go kick down the door. Choosing to verify first with wifi is a gentler approach.
"Since the police did not have a warrant when they used the device, which was not commonly available to the public, the search was presumptively unreasonable"
As tech marches on, "not commonly available to the public" becomes a smaller space.
Separately, I'd hazard a guess that they still use FLIR in an indiscriminate way, and just conjure up a little parallel construction when needed.
If I'm not mistaken the court crafted their opinion to apply "at least" to technology "not in general public use", and was somewhat ambiguous as to more common devices.
I share your concern for what will become of privacy as exotic new surveillance techniques become widespread. The legal treatment of expectation of privacy seems tied to whatever is the societal norm of the age, and that mark has shifted dramatically in my lifetime.
I don't get this attitude. If something is technologically/scientifically feasible, it will get invented, and it will get perfected and it will get used. Military drones, human cloning, designer babies, you name it. Taking the high road just means someone else will do it instead.
No. It is a race to develop both sides, the detection and the countermeasures against that detection. The "other side" will develop similar tech on their end, and the competition is what advances everyone. Practically everything technological we have is due to war/defense research.
While I see your point, I have a hard time imagining designer babies will be socially acceptable in the West within this lifetime. I think it's reasonable to believe the usage and research of technology is bounded by societal norms to a significant extent. When it comes to adversarial technology, however (and designer babies are arguably not in that group), I can agree that it's imperative to understand the tech so that it can be defended against.
I know that empirically, in terms of what society thinks, you are probably correct, but what is wrong with designer babies? Isn't it the only way we can keep improving and developing the species (instead of making it worse by artificially keeping people with genetic diseases alive and reproducing), while also avoiding the more horrendous parts of natural evolution and selection like genocide, race wars, or idiocracy-style trends of selecting by the dumb metric of sheer number of kids?
> Taking the high road just means someone else will do it instead.
Someone will do it, but someone will also do the counter-thing to it. In this example someone might research how to prevent people from being identified through walls, and I would hope that those people are the more skilled ones.
True, maybe someone will make these things, but that doesn’t mean one has to approve of it, nor be the one that builds it.
Bad things still happen even when they're outlawed, but that doesn't mean we just throw up our hands, proclaim that there's nothing anyone can do about anything, and dismantle the entire legal system.
The same could be argued about much worse ideas, ideas where people historically shied away and tried to control at least a bit of the context into which those ideas fell.
The whole development of nuclear weapons is a story of the when, the how fast, and the "should I", and it would certainly have looked different if its protagonists had all based their decisions on the ethical rationalisation you propose here. Who knows, maybe even the Germans would have gotten nuclear weapons?
The hard question obviously is: where do you draw the line? If you invented the paperclip and it was being used in the Holocaust's bureaucracy, blaming you would be a bit far-fetched. If you invented the gas chambers of the KZs, whose only purpose was to mass-kill prisoners, not feeling ethically responsible would be totally mad. In between you have Wernher von Braun with his V2 rocket, which isn't exactly easy to judge: he enabled space travel, yet the bombs killed many British civilians quasi-randomly and out of the blue, and the number of slave workers killed during construction was even higher than the number the rocket itself killed.
In between these three examples there are infinite shades of ethical dilemma, while your argument paints this as a binary question about whether something could or should exist. And if the answer to the "could" is true, who cares about the answer to the "should", right?
I propose something else. If you value your own intellect, you should abstain from ideas that could be but shouldn't, and instead attack things that could be and should be, or maybe even things that should be but couldn't by the thinking of today.
We engineers, scientists, thinkers and artists give birth to things that become reality, but it is our individual decision whether we do.
You will rarely find a writer telling you he had to write that dehumanizing, dangerous text because if he hadn't done it, somebody else would have. We can go even further and paint an example where an old lady collapses at a bus stop. Now you could have someone robbing her as she lies there ("If I hadn't done it, someone else would've"). Or you could have somebody giving first aid and calling the ambulance ("If I hadn't done it, someone else would've").
The statement “someone else would’ve done it” is not really an ethical reason to do anything. The question you should ask yourself is: in what kind of society do I want to live and do my actions contribute to it and if so at what cost? What if everybody acted like I did?
> The question you should ask yourself is: in what kind of society do I want to live and do my actions contribute to it and if so at what cost? What if everybody acted like I did?
But then of course this is the wrong question to ask. This is nothing more than a complicated version of a multi-party, repeated Prisoner's Dilemma. Except that you don't know your negative payoff when an opponent chooses to defect, and it may well be minus infinity (you get holocausted or nuked back into the Paleolithic). And defection by any single party gets you the negative payoff. So if you only ask the question you suggest and ignore the rest of the options, you are doomed.
I'm afraid that the world's politicians (most of them at least) are completely sane, and all this "race to the bottom" stuff going on is just the result of the game theory & life being what it is.
This is true if you think in an egoistical game-theory mode, where the other can only be an enemy, where only one party can win, and where the idea that you win automatically when the other loses is accepted. In reality both can win or both can lose as well (and they regularly do). So if both sides can predict a race to the bottom, wouldn't the better approach be one that tries to dissolve the binary situation (which is, btw, a very American thing in itself, because political compromise is far more common in nations without two-party systems)?
I think context is key, and we shouldn't theorize conflicts in everyday life the same way as all-out nuclear war or other existential situations. Maybe that is just the naive idea of a guy who benefits from the wealth and peace produced by a Europe which, for the first time in centuries, doesn't feel the need to kill each other for (in hindsight) more or less meaningless reasons.
In the sense that it would give the wealthy an unreasonable advantage to stay wealthy forever and mean an end to social mobility, which would in turn lead to a feudal social class system, yes.
The thing is though, money does that already if your society doesn't have strong inheritance taxation, so I'm not sure it's really such a problem.
You kinda have to develop the privacy-invasive technology to even know what to be defending against – especially since some of those pursuing privacy-invasive tech will be doing so in secret.
So publicly sharing this research is an effort towards improving privacy, by letting other researchers & the public know what the risks are.
So basically gait analysis using WiFi power levels. At present requires making the user walk through a specific choke point to measure the gait, then can compare any other captured video to compare. Pretty neat!
When you put it that way, it actually seems like it would be hard and messy to deploy at any scale.
The thing about a lot of machine learning tools is they can get better and better over a period of a decade and yet still remain undeployable - see self-driving cars.
I mean, the hypothetical suspect situation would likely be extremely messy, and getting a clean scan to deploy this seems implausible. General-purpose surveillance would probably just use cameras intended for the purpose.
Who knew the ministry of silly walks[1] would be a privacy measure in the future?
In all seriousness, the accuracy of this is really low - 83% among a tiny pool of individuals. It's really interesting to be sure, but such low confidence would make it extremely unlikely to be used in the context of justice/law enforcement (then again, maybe not [2]).
I'm giving serious thought to throwing down money on copper paint to make a faraday cage undercoat next time I feel like a different color scheme. People today are subject to surveillance capabilities that George Orwell and the Stasi could not even have dreamt of.
Key word here being "capabilities". The Stasi did a lot worse with a lot less, because the authoritarian and intolerant ideology that brought the Stasi about was itself a lot worse.
If you're so worried about surveillance that you're willing to turn your house into a Faraday cage, why would you have an active cell phone in your house (or provide a microcell that "they" can use for their own listening devices)?
For those wondering the bottom line, it is interesting preliminary research on a small pool and I expect will be quite interesting as it progresses.
"The lab has tested their new technology on 1,488 WiFi-video pairs, drawn from a pool of eight people, and in three different behind-wall areas, and achieved an overall accuracy of 84% in correctly identifying the person behind the wall."
I think they're referencing that 1488 is used to represent neo-nazi beliefs. I don't know if they're trying to imply that the people who did the study are neo-nazis and selected that number of configurations and trials as an in-joke.
It certainly seems like an odd coincidence. Perhaps I'm jaundiced from years of monitoring the far right, but they do like such jokes, and it's not like 1,488 is a round or otherwise aesthetically interesting number.
And you think some researchers at UC Santa Barbara, under a professor named "Yasamin Mostofi", are far-right and deliberately snuck in a white-supremacist reference? I have a feeling most of the researchers aren't even white... "Chitra Karanam", "Belal Korany", and "Herbert Cai" are the three Ph.D students. Well.
Of course, even if all people involved were white, and even if it was a deliberate reference, as long as there's nothing other than the number, no one should get in trouble for that. Under no circumstances should researchers be worrying about the numbers they come up with. If you're doing 420 trials at 69 degrees Fahrenheit, so be it.
Cf. this thread: "Six paediatric health‐care professionals were recruited to swallow a Lego head. Previous gastrointestinal surgery, inability to ingest foreign objects and aversion to searching through faecal matter were all exclusion criteria. Pre‐ingestion bowel habit was standardised by the Stool Hardness and Transit (SHAT) score. Participants ingested a Lego head, and the time taken for the object to be found in the participants stool was recorded. The primary outcome was the Found and Retrieved Time (FART) score." https://news.ycombinator.com/item?id=18519899
They don't extract a 3D mesh from the signal -- this doesn't literally "see through walls".
They use the 3D mesh to model how they think the signal would be modified by the person behind the wall, then they compare the received signal against their model to see if it's a match.
Couldn't this be amped up by having at least 3 sets of Tx and Rx to get locations of moving objects? That's the dream application for LEOs, not gait recognition, for which you would seldom have data.
Cool project, but it's hard for me to see applications here.
> Can a pair of WiFi transceivers outside the house determine if the person inside the house is the same as the one in the robbery video? Questions such as this have motivated this new technology
We have windows, tiny little cameras that go through walls, and we can ping cell-phones. Correct me if I'm wrong, but wouldn't all of those work better than this incredibly convoluted method?
In a practical sense, yes, there are innumerable easier methods for determining the presence of a person within a building.
On a legal basis, however, in some states it's illegal to even look over a privacy fence without a search warrant. I imagine this approach is less accounted for by lawmakers of the past.
There are lots of applications -- how about a motion sensor that tells you when there's a person on the other side of your door? And since it's modeled on humans, it's immune to false triggers from cars or animals.
And since it could use your existing home WiFi as the source signal, the sensor would be receive-only, so it could be very battery efficient and last a long time on a battery.
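A receive-only presence detector along these lines could be very simple. This is just a sketch of the idea, not anything from the paper: a person moving through a room perturbs the multipath environment, so the received signal strength of an existing WiFi beacon starts fluctuating, and a crude detector just watches the rolling variability of RSSI samples. The window size and threshold below are made-up numbers:

```python
import statistics
from collections import deque

class MotionDetector:
    """Flags motion when recent RSSI readings fluctuate more than usual."""

    def __init__(self, window=20, threshold_db=2.0):
        self.samples = deque(maxlen=window)
        self.threshold_db = threshold_db  # illustrative value, would need tuning

    def update(self, rssi_db):
        """Feed one RSSI reading (dBm); return True if motion is suspected."""
        self.samples.append(rssi_db)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet
        return statistics.stdev(self.samples) > self.threshold_db

quiet = [-60.0, -60.2, -59.8, -60.1] * 5          # empty room: stable RSSI
moving = [-60.0, -55.0, -63.0, -52.0, -66.0] * 4  # person walking: big swings

det = MotionDetector()
print(any(det.update(s) for s in quiet))   # False: variation well under threshold
det = MotionDetector()
print(any(det.update(s) for s in moving))  # True: multipath fading kicks stdev up
```

A real sensor would need to tune the window and threshold against furniture, pets, and neighbouring networks, but the receive-only structure is what makes the battery argument plausible.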
> We have windows, tiny little cameras that go through walls, and we can ping cell-phones. Correct me if I'm wrong, but wouldn't all of those work better than this incredibly convoluted method?
This is research. Currently it's unreliable (83% is super low) and convoluted, but with more research and investments, it is possible that they would be able to make it reliable and practical.
This is quite fascinating at first glance, but if it does become a viable method for law enforcement to use, the worry is that an 82% or 83% or 89% confidence score still leaves huge room for doubt. Actions taken on that basis could be uncalled for. If the justice system gets accustomed to it and relies on such confidence levels without looking for additional corroborating evidence, it would be disastrous.
That's basically a mesh opaque to visible wavelengths.
The effect would be that you can't observe limb or limb-part movement inside the skirt, only how the limbs move and deform the skirt volume. I think that's still enough, given that gait recognition works on people in burkas and other well-covering garments.
If this is possible using only a pair of WiFi transmitters and decibel information, what will be possible with U1/Ultra-wideband spectrum use coming to iPhones?
Can't wait for this technology to make it into the hands of the US military, where it will be used to justify sending a few missiles someone's way. Thank you for your service.