I _think_ most of the content in there is still relevant/valid, but the field moves so fast (with dozens of papers written every day on average, it's actually one of the main reasons I switched my emphasis from ML to Internet security)...
Yeah, I’ve always had a weird feeling about whether I should have pivoted to deepfakes, cryptonets, or ML for security applications ever since I was a few months into my PhD.
Think I’ve found me a good non-image niche though (thank Christ for the no-free-lunch theorem).
I think this sort of thing is handy for reinforcing to people that neural nets are still just code, and should be treated as such. That means testing and reliability standards built into deployment. The last exercise in particular rams it home really well: you don't need to mount adversarial attacks by inferring gradients and all that jazz when you can just spot a buffer overflow. This is a great repo, and as ML Lead I'm going to make my co-workers read it.