Hi, yup, that's true: we keep the filter order fixed. For the experiments in the paper, the time-varying coefficients are generated by a neural network that is trained end-to-end to generate audio like the training set (conditioned on high-level controls such as pitch and loudness).
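Roughly, the network outputs one set of filter coefficients per frame from the conditioning signals. A toy sketch in JAX (not the actual decoder from the paper; the tiny MLP, the names, and the sizes here are just for illustration):

    import jax
    import jax.numpy as jnp

    def coeff_net(params, pitch, loudness):
        # Map per-frame (pitch, loudness) to one filter coefficient set per frame.
        h = jnp.stack([pitch, loudness], axis=-1)       # [n_frames, 2]
        h = jnp.tanh(h @ params["w1"] + params["b1"])   # [n_frames, hidden]
        return h @ params["w2"] + params["b2"]          # [n_frames, filter_order]

    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    hidden, order = 64, 65
    params = {
        "w1": 0.1 * jax.random.normal(k1, (2, hidden)), "b1": jnp.zeros(hidden),
        "w2": 0.1 * jax.random.normal(k2, (hidden, order)), "b2": jnp.zeros(order),
    }
    coeffs = coeff_net(params, 440.0 * jnp.ones(100), jnp.zeros(100))

Since the filtering itself is differentiable, the loss on the output audio backprops all the way into params.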

I agree that IIRs are a great avenue for future study, also with time-varying coefficients. I've played around a bit with them, but they are harder to train efficiently with current autodiff software and GPUs/TPUs. I think they may require writing a custom CUDA kernel, but I'm hopeful for things like JAX's scan operation.
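For anyone curious, a time-varying IIR is basically a scan over samples, and JAX differentiates through lax.scan out of the box. A toy first-order example (my own sketch, not code from the paper; stability constraints, biquads, etc. left out):

    import jax
    import jax.numpy as jnp

    def time_varying_iir(x, a, b):
        # y[n] = b[n] * x[n] + a[n] * y[n-1], with per-sample coefficients a, b.
        def step(y_prev, inp):
            x_n, a_n, b_n = inp
            y_n = b_n * x_n + a_n * y_prev
            return y_n, y_n                             # (carry, output)
        _, y = jax.lax.scan(step, jnp.zeros((), x.dtype), (x, a, b))
        return y

    T = 16000
    x = jax.random.normal(jax.random.PRNGKey(0), (T,))
    a = jnp.full((T,), 0.5)                             # in practice, predicted by a network
    b = jnp.full((T,), 0.1)
    grads = jax.grad(lambda a_: jnp.mean(time_varying_iir(x, a_, b) ** 2))(a)

The per-sample recurrence is exactly what makes this awkward to parallelize on GPU/TPU, which is where the custom-kernel comment comes from.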



