It used a similar technique: a note-by-note Markov chain over MIDI data to generate music resembling an initial piece of training data. The difference with his model is that it's trained on only a single piece at a time. This leads to significantly more coherent music, but at the cost of making the output effectively a variation on the original piece.
The biggest challenge in this kind of work is trying to get an overall structure for the entire song. In talks at the university, Engels has described the output of his model as that of a "distracted jazz pianist"—the moment-to-moment melodies are coherent but the song lacks overall form and direction.
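To make the note-by-note idea concrete, here's a minimal sketch of a first-order Markov chain over MIDI note numbers. This is my own illustration of the general approach, not Engels's actual model; the toy training sequence and function names are made up, and a real version would extract notes (and durations) from a MIDI file with a parser such as mido.

```python
# Minimal sketch of a note-by-note (first-order) Markov chain.
# Illustrative only -- not Engels's model. In practice the note sequence
# would come from parsing a MIDI file rather than a hard-coded list.
import random
from collections import defaultdict

def build_transitions(notes):
    """Count which notes follow each note in the training sequence."""
    transitions = defaultdict(list)
    for current, nxt in zip(notes, notes[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length):
    """Random-walk the transition table to produce a new note sequence."""
    sequence = [start]
    for _ in range(length - 1):
        choices = transitions.get(sequence[-1])
        if not choices:  # dead end: no observed successor for this note
            break
        sequence.append(random.choice(choices))
    return sequence

# Toy training data: MIDI note numbers for a short C-major phrase.
training_notes = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72]
table = build_transitions(training_notes)
print(generate(table, start=60, length=16))
```

Because the chain conditions only on the previous note, the output stays locally plausible while drifting globally, which is exactly the lack of overall form described above.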
It may actually be easier to model "coherence" by applying natural language processing to the MIDI data before rendering. Getting coherence features that hold across an entire piece is very hard. Definitely worth exploring (and I need to learn more music theory).