If you trained a model for this, how would you avoid having to run the entire training process again every time you needed to add another song?
I wonder if there's a way to build an embeddings model for this kind of thing, such that you can calculate an embedding vector for each new song without needing to fully retrain.
You'd just have the network generate fingerprints for any given song, similar to how facial recognition is done.
Siamese networks are what you want: two identical stacks of layers (one cached in this case) which act as the fingerprint encoders, and then the final layers do the similarity matching.
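A minimal sketch of that Siamese setup in PyTorch, under placeholder assumptions: the input is some fixed-size audio feature vector (say, a flattened 128-bin spectrogram), the layer sizes are arbitrary, and the "similarity matching" is reduced to cosine similarity rather than a learned head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SongEncoder(nn.Module):
    """Shared 'fingerprint' network: maps audio features to an embedding."""
    def __init__(self, in_dim=128, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x):
        # L2-normalize so cosine similarity is just a dot product
        return F.normalize(self.net(x), dim=-1)

encoder = SongEncoder()

# Both branches share the same weights -- that's the "identical pair".
# New songs only need one forward pass; no retraining required.
song_a = torch.randn(1, 128)  # placeholder feature vectors
song_b = torch.randn(1, 128)
emb_a, emb_b = encoder(song_a), encoder(song_b)

# Similarity "head": plain cosine similarity here. A real model would
# train the encoder with a contrastive or triplet loss so that songs
# by the same artist land close together in embedding space.
similarity = (emb_a * emb_b).sum(dim=-1)
print(similarity.item())
```

The point of this structure is exactly the non-retraining property asked about above: once the encoder is trained, cataloguing a new song is just computing and caching its embedding.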
People who are highly skilled at this can be easily stumped. Sure, it might work for artists who are more focused (Taylor Swift), and it might pick out some interesting guest appearances (Eddie Van Halen on "Beat It"), but when you get multi-talented performers who change everything about what they do, they don't fit a "model". The most current example would be André 3000's latest release.
Um, yeah, you won't be able to model artists who don't follow a model (especially when it's done deliberately). As you say, that is true of humans and computers alike. But it's not the problem anyone cares about, and not what the parent comment intended.
Certainly a well-trained model could achieve incredible accuracy with vocals alone. It would be able to identify Lady Gaga regardless of whether she is singing a new art-pop track or an old standard with Tony Bennett.
We could have a debate about the consistency of Gaga or Taylor Swift and the profit motive (and we could go all the way back to composers of the classical period with this).
I could also point you to Diplo, who, as a "producer", is responsible for diverse sounds with his name directly on them, and then side projects like Major Lazer or M.I.A.'s "Paper Planes" that have his hallmarks but aren't "musically" linked. How about the collected work of Richard D. James? I'm not so sure that all the parts fit together outside the whole of them.
Stewart Copeland was the drummer for The Police, a very distinct and pop sound. Are we going to be able to use ML to take those works and correlate them to his film scores? How about his opera? Dave Grohl, Phil Collins, Sheila E., more drummers who became singers: what is the context for ML finding those connections (or people)?
John Cage's 4'33" is gonna be an interesting dilemma.
Do you think the player-piano "Black Hole Sun" and "C.R.E.A.M." covers from Westworld are picked up as stylized choices by Ramin Djawadi, and would it link those to the soundtrack of Game of Thrones?
Even with all the details it's sometimes hard to believe what talented people can do and how diverse their output can be!
But if it's not, that would be extremely impressive! Determinism/free will reduces to Shazam!?
What's the training data to predict new song titles? heh
Check out this reply from Claude 2:
>predict the next 3 new song titles from artist Taylor Swift
1. Last Dance with You - A reflective ballad about finding closure after a breakup.
2. Never Getting Back Together - A pop tune emphasizing that the same mistakes won't be made twice in a relationship.
3. 22 Was My Prime - A lighthearted look back on her early 20s as carefree years that can't be replicated.
Whenever music is mentioned in conjunction with technology, one artist seems to always, in a very literal sense, pop up like a zombie in a B-movie... Taylor Swift. No idea who this person is or what they do, but they appear everywhere, all at once.