Hacker News
Hacking Neural Networks: A Short Introduction (github.com/kayzaks)
178 points by jonbaer on Nov 17, 2019 | 7 comments



If you like this kind of thing, I wrote a little document summarizing the security and privacy weaknesses of neural networks during grad school a couple years ago: https://matt.life/papers/security_privacy_neural_networks.pd...

I _think_ most of the content in there is still relevant/valid, but the field moves so fast (dozens of papers appear every day on average) that it's actually one of the main reasons I switched my emphasis from ML to Internet security...


Nice read, thanks.

Yeah, I’ve always had a weird feeling about whether I should have pivoted to deep fakes, cryptonets, or ML for security applications, ever since I was a few months into my PhD.

Think I’ve found me a good non-image niche though (thank Christ for the no-free-lunch theorem).


A very cool survey! I'll make sure to add a reference in the next commit.


Super cool, thanks!


I think this sort of thing is handy for reinforcing to people that neural nets are still just code, and should be treated as such. That means testing and reliability standards built into deployment. The last exercise in particular rams it home really well - you don't need to do adversarial attacks by means of inferring gradients and all that jazz when you can just spot a buffer overflow. This is a great repo, which I (as ML lead) am going to make my co-workers read.
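For anyone curious what the "inferring gradients" style of attack looks like, here's a minimal FGSM-style sketch (my own toy example, not from the repo - the weights, input, and epsilon are all made up for illustration): it nudges an input along the sign of the loss gradient until a tiny logistic classifier flips its prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, y, eps):
    """Fast Gradient Sign Method against a logistic classifier.
    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x) - y) * w; stepping along its sign raises the loss."""
    grad_x = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])   # toy "model" weights (made up)
x = np.array([1.0, 0.5])    # clean input, classified as 1

x_adv = fgsm(x, w, y=1.0, eps=1.0)

print(sigmoid(w @ x) > 0.5)      # True  (clean input: class 1)
print(sigmoid(w @ x_adv) > 0.5)  # False (adversarial input: class 0)
```

Even this two-weight toy shows the point of the comment above: the gradient route takes real machinery, while a memory-safety bug in the surrounding code gets you the same misclassification for free.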


It has come. Guess the trick is whether you understand your own neural network well enough to hack it and defend it yourself.


NeuralOverflow... sweet hacker name.



