
This article is not good; I encourage reading the paper it's based on instead: https://arxiv.org/pdf/1905.10615.pdf

“In some ways, adversarial policies are more worrying than attacks on supervised learning models, because reinforcement learning policies govern an AI’s overall behavior. If a driverless car misclassifies input from its camera, it could fall back on other sensors, for example.” TIL fail-safe components 1) are ubiquitous, 2) work, and 3) are only an option for supervised learning components.

“A supervised learning model, trained to classify images, say, is tested on a different data set from the one it was trained on to ensure that it has not simply memorized a particular bunch of images. But with reinforcement learning, models are typically trained and tested in the same environment.” First, an RL environment is not equivalent to a supervised learning data set. Second, the train/validate/test paradigm is not thrown out in RL research; it's why DeepMind put their StarCraft agent, AlphaStar, on the public ladder.
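To make the contrast concrete, here's a minimal sketch of the held-out evaluation that's standard in supervised learning (toy data; the scikit-learn calls are the real API, everything else is illustrative):

    # Standard supervised-learning hygiene: score on data the model never saw.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, random_state=0)  # stand-in data set
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))  # generalization, not memorization

Ladder play is the RL counterpart: the agent is evaluated against opponents it never trained against.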

“The good news is that adversarial policies may be easier to defend against than other adversarial attacks.” This sentence refers to Gleave et al. adversarially training their agents. Adversarial training is, of course, also conducted routinely in supervised learning.
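For comparison, here is a minimal sketch of what that looks like in the supervised setting (FGSM-style adversarial training); the tiny model and random batches are stand-ins, only the torch calls are the real API:

    # Adversarial training sketch: generate worst-case perturbations of each
    # batch, then take the gradient step on those perturbed inputs.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    eps = 0.1  # perturbation budget

    for step in range(100):
        x = torch.randn(64, 10)               # stand-in batch
        y = torch.randint(0, 2, (64,))
        x.requires_grad_(True)
        loss_fn(model(x), y).backward()       # gradients w.r.t. the inputs
        x_adv = (x + eps * x.grad.sign()).detach()  # FGSM: perturb to increase loss
        opt.zero_grad()                       # drop grads from the probe pass
        loss_fn(model(x_adv), y).backward()   # train on the adversarial examples
        opt.step()

Gleave et al. do the RL analogue: they fine-tune the victim policy against the adversarial opponent rather than against perturbed inputs.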




Ok, we've changed the URL to the paper from https://www.technologyreview.com/s/615299/reinforcement-lear.... Thanks!



