Wow. I badly want to try this out with music, but I've taken little more than baby steps with neural networks in the past: am I stuck waiting for someone else to reimplement the stuff in the paper?

IIRC someone published an OSS implementation of the deep dreaming image synthesis paper fairly quickly...

Re-implementation will be hard. Several people (including me) have been working on related architectures, but WaveNet has a few extra tricks that seem to make all the difference, on top of what I assume is "monster scale training, tons of data".

The core ideas here can be seen in PixelRNN and PixelCNN, and there are discussions and implementations of the basic concepts out there [0][1]. Not to mention that conditioning is very interesting / tricky in this model, at least as I read it. I am sure there are many ways to do it wrong, and getting it right is crucial to high-quality results in conditional synthesis (there is a rough sketch of the gated, conditioned unit after the links below).

[0] https://github.com/tensorflow/magenta/blob/master/magenta/re...

[1] https://github.com/igul222/pixel_rnn/blob/master/pixel_rnn.p...
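To make that a bit more concrete, here is a rough NumPy sketch of the two pieces I mean: a causal, dilated 1-D convolution (so each output sample only sees past samples) and the tanh/sigmoid gated unit with a global conditioning vector mixed in. The function names, shapes, and the toy example are my own assumptions for illustration, not the paper's reference code, and it is forward-pass only, no training loop.

    # Minimal sketch, assuming random weights and a tiny toy signal.
    import numpy as np

    def causal_dilated_conv1d(x, w, dilation=1):
        """x: (time, in_ch), w: (kernel, in_ch, out_ch).
        Left-pads so output[t] depends only on x[<= t]."""
        kernel, in_ch, out_ch = w.shape
        pad = (kernel - 1) * dilation
        x_pad = np.concatenate([np.zeros((pad, in_ch)), x], axis=0)
        T = x.shape[0]
        out = np.zeros((T, out_ch))
        for t in range(T):
            for k in range(kernel):
                # taps at t, t - dilation, t - 2*dilation, ...
                out[t] += x_pad[t + pad - k * dilation] @ w[kernel - 1 - k]
        return out

    def gated_unit(x, w_filter, w_gate, h=None, v_filter=None, v_gate=None,
                   dilation=1):
        """tanh/sigmoid gate in the PixelCNN/WaveNet style; h is an optional
        global conditioning vector, projected and added at every timestep."""
        f = causal_dilated_conv1d(x, w_filter, dilation)
        g = causal_dilated_conv1d(x, w_gate, dilation)
        if h is not None:
            f = f + h @ v_filter  # same conditioning bias for all timesteps
            g = g + h @ v_gate
        return np.tanh(f) * (1.0 / (1.0 + np.exp(-g)))

    # toy shapes: 16 timesteps, 4 channels, kernel size 2, dilation 4
    rng = np.random.RandomState(0)
    x = rng.randn(16, 4)
    out = gated_unit(x,
                     w_filter=rng.randn(2, 4, 8),
                     w_gate=rng.randn(2, 4, 8),
                     h=rng.randn(3),          # e.g. a speaker embedding
                     v_filter=rng.randn(3, 8),
                     v_gate=rng.randn(3, 8),
                     dilation=4)
    print(out.shape)  # (16, 8)

Stacking layers like this with dilations 1, 2, 4, 8, ... is what buys the long receptive field over raw audio; the conditioning projection is the part I suspect is easy to get subtly wrong.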


Is there any usable example code out there I can play with? I don't care if it sounds noisy and weird; it's all grist for the sampler anyway.
