Testing Robustness Against Unforeseen Adversaries (openai.com)
18 points by QuitterStrip on Aug 23, 2019 | 1 comment

For better or worse, this helps in building neural networks that are less vulnerable (or not vulnerable at all?) to deception via e.g. fake eyeglasses (https://dl.acm.org/citation.cfm?id=2978392) and adversarial stickers (https://arxiv.org/abs/1712.09665).

From the post:

"We’ve developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training [such as Elastic, Fog, Gabor, and Snow]. Our method yields a new metric, UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks."

Paper: https://arxiv.org/abs/1908.08016
Code: https://github.com/ddkang/advex-uar
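
If I'm reading the paper right, UAR is roughly the evaluated model's accuracy under the unforeseen attack divided by the accuracy of models adversarially trained against that same attack, summed over several calibrated distortion sizes. A minimal sketch (function names and example numbers are mine, not the authors'):

    # Minimal sketch of the UAR computation as I understand it from the
    # paper (arXiv:1908.08016); names and numbers here are illustrative --
    # see https://github.com/ddkang/advex-uar for the authors' actual code.
    def uar(model_accs, adv_trained_accs):
        # model_accs: accuracy of the evaluated model against attack A at
        #   each calibrated distortion size eps_1..eps_k.
        # adv_trained_accs: accuracy of models adversarially trained
        #   against A at the same distortion sizes ("ATA" in the paper).
        assert len(model_accs) == len(adv_trained_accs)
        return 100.0 * sum(model_accs) / sum(adv_trained_accs)

    # A model that nearly matches attack-specific adversarial training
    # at every distortion size scores close to 100:
    print(uar([80.1, 72.5, 60.3, 45.0, 30.2, 15.8],
              [84.0, 76.1, 64.9, 50.2, 35.7, 20.0]))  # ~92

So a high UAR against, say, Snow means the defense held up about as well as a model that was explicitly trained against Snow. Happy to be corrected on the details.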

If anyone with relevant expertise is willing to share thoughts on this, please do.

For further reading, this just in: "Robust Machine Learning Algorithms and Systems for Detection and Mitigation of Adversarial Attacks and Anomalies: Proceedings of a Workshop" (2019), an open-access publication from the National Academies of Sciences, Engineering, and Medicine: https://doi.org/10.17226/25534
