Fooling surveillance cameras: adversarial patches to attack person detection (arxiv.org)
148 points by hardmaru on April 22, 2019 | 23 comments



Example video from the PDF: https://www.youtube.com/watch?v=MIbFvK2S9g8

Looks like it would work well on a t-shirt.


Looking at the photos in the paper, that's the first ML-camouflage pattern that actually seems practical. I could see someone wearing a shirt with this "adversarial patch" looking totally unremarkable in public.

Other types of ML-camouflage, like https://cvdazzle.com/, make the wearer stand out far too much to actually be useful as anything but an art project.


There are other methods of occlusion and obfuscation to fool facial recognition (FR), varying in range of application and in the corresponding level of paranoia. The adversarial patch seems to fall somewhere along the lines of clothing that has images of faces or eyes on it to confuse FR, until the algorithms eventually get better at recognising and countering them.

I don't know what became of the 'infra-red baseball cap' discussed in this article. In my opinion, it not only seems like a different approach, but also feels innocuous compared to some of the other methods.

https://www.wired.co.uk/article/avoid-facial-recognition-sof...


I wonder, though: the fact that they chose to attack a general-purpose object detector seems to imply that face detection still works.


The "average" surveillance camera often does not have sufficient pixels-on-target/pixels-per-foot to do face detection for the majority of persons in the scene. The trend towards higher resolution cameras (eg: from an average resolution of ~3MP today towards 4K resolution) will of course help with face detection somewhat, but you also have angle of view concerns, and the fact that people don't always walk towards the camera for a good face shot.

Various companies have been implementing appearance search functionality that lets you search for similar appearances across multiple cameras to find the scene/image where you got a facial shot of a person of interest. Preventing the majority of cameras from classifying someone as a "person" object would significantly disrupt that workflow.


Perhaps something like AdaBoost could find matches using a weak classifier on hundreds of low-resolution images (a few seconds of video) at different angles, assuming you can put together a similar data set for training... basically trading spatial information content for temporal.
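
Something like this, maybe, with OpenCV's stock HOG people detector standing in for the weak classifier and a simple vote across frames standing in for a trained boosted ensemble (all of it my assumption, just to illustrate the spatial-for-temporal trade):

    import cv2

    # Weak per-frame classifier: OpenCV's stock HOG + linear SVM people detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def person_votes(video_path, max_frames=120):
        """Fraction of frames in which the weak detector fires at least once."""
        cap = cv2.VideoCapture(video_path)
        votes = frames = 0
        while frames < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
            votes += int(len(rects) > 0)
            frames += 1
        cap.release()
        return votes / max(frames, 1)

    # Each low-resolution frame is an unreliable vote, but a few seconds of
    # video pools enough of them to flag a person the patch hides per-frame
    # (file name is made up):
    # if person_votes("cam07_clip.mp4") > 0.5: raise_alert()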


The fact that the patch has to be placed around the waist seems to indicate that the shirt/pants transition is important to the algorithm (which makes sense, since training data full of normally dressed people is almost certainly going to contain a lot of pictures of people wearing a shirt that contrasts with their pants). I wonder whether denim + denim, a coverall, or a dress makes person detection harder.


The demonstration video was hilarious.

You cannot stop the camera from being there or from aiming at you, so realistically the best option is to rely on a timeless principle: modifying the input will change the output, since the latter always depends on the former.

I can see this is going to be another cat-and-mouse game in the near future. People are going to constantly come up with new creative ideas to circumvent machine learning, since it can only reliably identify data resembling what it has already seen.
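
For a sense of what one round of that game looks like from the attacker's side: the patch in this paper is trained by gradient descent to push down the detector's person/objectness score, with extra terms to keep it smooth and printable. A minimal sketch of that loop, with a tiny random conv net standing in for the YOLOv2 detector the authors actually attack, a fixed paste location instead of their per-person warping, and arbitrary loss weights:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-in "person detector": a tiny random conv net whose scalar output
    # plays the role of the person/objectness score (the paper attacks YOLOv2).
    detector = nn.Sequential(
        nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
        nn.Conv2d(8, 1, 5, stride=4),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

    def apply_patch(images, patch):
        # Paste at a fixed spot; the paper warps the patch onto each person.
        out = images.clone()
        out[:, :, 80:140, 80:140] = patch
        return out

    def total_variation(p):
        # Smoothness prior so the patch survives printing and camera blur.
        return (p[:, 1:, :] - p[:, :-1, :]).abs().mean() + \
               (p[:, :, 1:] - p[:, :, :-1]).abs().mean()

    patch = torch.rand(3, 60, 60, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.03)

    for step in range(200):
        images = torch.rand(8, 3, 224, 224)   # stand-in for photos of people
        patched = apply_patch(images, patch.clamp(0, 1))
        score = detector(patched).mean()      # mean "person" confidence
        loss = score + 2.5 * total_variation(patch.clamp(0, 1))
        opt.zero_grad()
        loss.backward()                       # gradients flow into the patch
        opt.step()

The defender's counter-move is the mirror image, e.g. adding patched examples to the training set, which is exactly the cat-and-mouse loop described above.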


Colorful eyewear can be used too.

https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf

It looks realistic and plausible, and can be 3D printed (the authors of the paper even provide a sample design).


This triggered interest in the FR research community, then died out as algorithms were tweaked to defeat the attack. Now it's used as a litmus test to identify FR systems that haven't kept up.


This is not the first attempt at this. Anti-camera makeup has been a thing for a while. Painting a square on your nose or wearing a flesh-colored eyepatch is often enough to fool facial-rec systems that measure distances between features. Standing on a skateboard or moonwalking can sometimes trick a system into calling you a non-person. Putting a box over your head, Dalek-style, works too.

None of these defeat old-fashioned motion detection + follow-up by human eyes. Security guards aren't going away anytime soon.


Of course security guards aren't going anywhere anytime soon. Rather, the new technology will make finding and acting on incidents quicker and more efficient.

I can't see it being too difficult to identify tactics being used and mark them as suspicious.


> I can't see it being too difficult to identify tactics being used and mark them as suspicious.

I'm sure you're right. But remember what every security measure is trying to achieve: not to make you immune to attack (that's impossible), but to increase the cost of the attack.

This sort of thing would accomplish that by requiring human involvement.


William Gibson had something like this in one of his later books (Zero History, I believe). As I recall, it was simply called the Ugly Shirt.



What about adversarial patches to make the camera ID a different targeted person? Could be used to frame someone.


CV Dazzle, some earlier work in the same vein: https://ahprojects.com/cvdazzle/


Eyeglass frames with eyes on them are a wonderful way to defeat FR.


You can purchase a t-shirt from cloakwear.co


A Burqa[1] sounds like a safer bet (if you aren't carrying a cellphone).

1. https://duckduckgo.com/?q=burqa&t=ffab&iar=images&iax=images...


Half the IP cams out there are on a Wi-Fi network that isn't a dedicated VLAN, so an attacker could just DDoS them or set up a broadcast feedback loop to take them offline.


A method that doesn't put you in legal jeopardy seems more desirable.


I am merely stating a vulnerability, not recommending an action.



