"We’ve developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training [such as Elastic, Fog, Gabor, and Snow]. Our method yields a new metric, UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks."
If anyone w/relevant expertise is willing to share thoughts on this: please do.
For further reading: this just in -> "Robust Machine Learning Algorithms and Systems for Detection and Mitigation of Adversarial Attacks and Anomalies: Proceedings of a Workshop" (2019) an open-access publication from the National Academies of Sciences, Engineering, and Medicine: https://doi.org/10.17226/25534.
From the post:
"We’ve developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training [such as Elastic, Fog, Gabor, and Snow]. Our method yields a new metric, UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks."
Paper: https://arxiv.org/abs/1908.08016
Code: https://github.com/ddkang/advex-uar
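For readers curious about the mechanics: below is a minimal sketch of how a UAR-style score could be computed, assuming the definition given in the paper (the evaluated model's accuracies under an attack at several calibrated distortion sizes, normalized by the accuracies of models adversarially trained directly against that attack at the same sizes). The function name, arguments, and numbers here are illustrative, not the actual advex-uar API; see the paper and repo for the real calibration values.

```python
# Hypothetical sketch of a UAR-style score (not the advex-uar API).
# Based on the definition described in https://arxiv.org/abs/1908.08016:
# UAR = 100 * sum(model accuracy at each calibrated distortion size)
#           / sum(adversarial-training accuracy at the same sizes).

def uar(model_accuracies, calibration_accuracies):
    """Compare a model's accuracy under an unforeseen attack against
    attack-specific adversarially trained baselines ("ATA" in the paper),
    summed over the calibrated distortion sizes epsilon_1..epsilon_n.
    """
    assert len(model_accuracies) == len(calibration_accuracies)
    return 100.0 * sum(model_accuracies) / sum(calibration_accuracies)

# Example with made-up numbers: a model evaluated against the Fog attack
# at six distortion sizes, versus adversarially trained baselines.
fog_acc = [82.1, 70.4, 55.0, 38.2, 21.7, 9.3]   # evaluated model
fog_ata = [88.5, 80.1, 68.9, 52.3, 35.6, 20.2]  # attack-specific baselines
print(f"UAR(Fog) = {uar(fog_acc, fog_ata):.1f}")
```

A score near 100 would mean the model matches adversarial training tailored to that specific attack, which is the bar the metric sets for "unforeseen" robustness.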
If anyone with relevant expertise is willing to share thoughts on this, please do.
For further reading, also just published: "Robust Machine Learning Algorithms and Systems for Detection and Mitigation of Adversarial Attacks and Anomalies: Proceedings of a Workshop" (2019), an open-access publication from the National Academies of Sciences, Engineering, and Medicine: https://doi.org/10.17226/25534.