Hacker News

I don't want to tell people what to take away from it, but my own view is that the contribution of the paper is the framing of the problem and a preliminary demonstration that you can achieve the goal of defeating an NN via an adversarial training formulation. In a lot of ways, I view our work as creating a challenge problem for the AI & crypto communities. It shows an intriguing possibility that is far from practical, and itself represents a stepping stone in work on, e.g., training DNNs to learn to communicate.
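To make the shape of that adversarial formulation concrete, here is a deliberately tiny, hypothetical sketch, nothing like the paper's actual architecture: the "networks" are single scalar weights and the gradients are written by hand. Alice emits a ciphertext c = p + wa*k from plaintext p and shared key k; Bob decodes using the key; Eve decodes from c alone. Eve descends her own reconstruction error, while Alice and Bob descend Bob's error minus Eve's.

```python
import random

def adversarial_crypto_toy(steps=3000, lr=0.01, seed=0):
    """Toy alternating adversarial training (an illustrative sketch,
    not the paper's setup). p, k are random +/-1 'bits'."""
    rnd = random.Random(seed)
    wa, wb, we = 0.5, 0.0, 0.0        # Alice, Bob, Eve parameters
    for _ in range(steps):
        p = rnd.choice([-1.0, 1.0])   # plaintext bit
        k = rnd.choice([-1.0, 1.0])   # shared secret key bit
        c = p + wa * k                # Alice's "ciphertext"
        # Eve's turn: descend (we*c - p)^2 with respect to we.
        we -= lr * 2 * (we * c - p) * c
        # Alice & Bob's turn: descend bob_err^2 - eve_err^2.
        bob_err = (c - wb * k) - p    # Bob decodes with the key
        eve_err = we * c - p          # Eve decodes without it
        wb -= lr * 2 * bob_err * (-k)
        wa -= lr * (2 * bob_err * k - 2 * eve_err * we * k)
    return wa, wb, we
```

Over training, Alice amplifies the key term so that Eve's best linear guess degrades toward chance, while Bob, who can subtract the key exactly, stays accurate. Real runs of the full setup are far less tame, hence the training-failure rate mentioned below.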

Can this be extended to generate cryptanalysis-resistant schemes? Can we create a formulation that can learn to use discrete operators such as modular arithmetic? (The latter reflects one of my personal complaints about the state of DNNs -- it's very hard to have discrete "blocks" of functionality if that functionality isn't differentiable.) Can we identify mechanisms that can train networks such as these more robustly? (The current formulation fails to train roughly half of the time; this is a fairly common problem in adversarial training of DNNs today.)
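On the difficulty of training through discrete operators: one common workaround (not something this paper proposes) is the straight-through estimator, which applies the discrete op on the forward pass but pretends it was the identity when backpropagating. A minimal sketch in plain Python with a hand-written gradient, using round() as the discrete block:

```python
def fit_through_round(w, x, target, lr=0.1, steps=100):
    """Minimize (round(w*x) - target)^2 by gradient descent.

    round() has zero derivative almost everywhere, so the true
    gradient would never move w. The straight-through estimator
    treats round() as the identity on the backward pass, letting
    a useful (if biased) gradient signal reach w.
    """
    for _ in range(steps):
        y = round(w * x)              # discrete forward pass
        grad = 2 * (y - target) * x   # backward: d round(u)/du taken as 1
        w -= lr * grad
    return w
```

With the true (zero) gradient, w would never update; with the straight-through trick it settles at a value whose rounded output hits the target. The estimator is biased, and it is only one of several tricks for discrete blocks (continuous relaxations such as Gumbel-softmax are another).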




NNs/ANNs/DNNs/xNNs have already been shown [0] to be easily fooled.

[0] https://arxiv.org/pdf/1412.1897.pdf



