Huh, interesting. In the ML and CS theory literature I’ve only ever seen “adversarial example” used to mean an input explicitly designed to produce a specific unexpected output, not just a worst-case output that isn’t what you want.
Do you have an example use of this sense that I could look at to update myself?