It's funny how a bunch of these streams have become communities in and of themselves, with people coming in and chatting about life, work, heartbreak, etc., while recommending new artists to check out (especially if you're a fan of indie music, these live chats are gold). Most of them are also actively moderated and have very little tolerance for bullshit or harassment, so they end up being a great place to hang out. I usually pair RainyMood [1] with one of these streams while working; it has a very calming effect and is a great multiplier for my productivity.
Truly reminds me of the IRC groups I hung out in a decade ago; this is the closest thing to a Web 1.0 forum-like environment built around music.
From my little slice of the amateur producer internet, there's also dramatic controversy around the genre: how much sampling you can slap together and still call it your own piece, excessive vinyl effects, arguments about track lengths and loops, what it even means to be lo-fi. It's fun to watch and results in great music! It's also why lofihiphopbeatstostudyto has pretty much infinite material to work through.
Near the start of the corona lockdown, Will Smith published 1½ hours of chill/lo-fi hip-hop by various artists on his channel: https://www.youtube.com/watch?v=rA56B4JyTgI. He even did a Will Smith/Fresh Prince-themed take on ChilledCow's aesthetic. Pretty good stuff.
I enjoyed the Switched on Pop episode on Lo-Fi. I don't listen to it a lot, and I didn't know much about its origins, but I'm a big fan of J Dilla and Madlib, who they cite as major influences on the genre.
Spotify chill-hop/"music to study to" playlists are popular, and people listen to them repeatedly and for long stretches of time, which makes them lucrative for streaming revenue.
I know someone who runs a chill-hop "record label". Their most popular songs get onto big Spotify chill-hop playlists and have ~10 million Spotify streams, which equates to around $50-70k USD. Most of their releases earn a fraction of this, but it's worth it for the odd hit.
My guess is they give artists exposure in exchange for a share of the profits through their channels. I don't know if they can monetize the free stream on YouTube, but they also release on paid streaming services and sell albums on Bandcamp.
I can readily admit to looking for random 1-hour videos of lo-fi music on YouTube that really do sound like what I'm able to "create" with this player.
While this is cool, I have a general question for people here:
How do other people here handle TensorFlow as a Python dependency? People use it like it's any other dependency, but it pulls in an untold number of transitive dependencies, creates conflicts every now and then, is just massive, and for most of its 1.x existence was constantly breaking some PEP or other, causing problems with third-party tools. It's kind of poison to custom Dockerfiles. How do you handle this situation? This isn't really my responsibility, but whenever I deal with ML engineers doing TensorFlow stuff I feel like I'm immediately also a plumber.
Python virtualenv.
You create one new virtual environment for each project, and you throw it away and rebuild it whenever a problem appears.
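As a rough sketch of that workflow in Python (the .venv path, the POSIX bin/ layout, and the TensorFlow version pin are just assumptions for illustration):

    # Disposable per-project env with TensorFlow pinned inside it.
    # Assumes a POSIX layout (.venv/bin/...); the version pin is only an example.
    import subprocess
    import venv

    # clear=True lets you rebuild the env from scratch when something breaks
    venv.EnvBuilder(with_pip=True, clear=True).create(".venv")
    subprocess.check_call([".venv/bin/pip", "install", "tensorflow==2.3.0"])
    subprocess.check_call([".venv/bin/python", "-c",
                           "import tensorflow as tf; print(tf.__version__)"])

In practice most people just run python -m venv from the shell; the point is that TensorFlow and its dependency tree never touch the global environment.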
I tend to use Anaconda where possible; it makes such dependencies easier to manage (but still not trivial... though you can switch CUDA runtimes with Anaconda, the actual driver is a system-wide dependency, of course).
On my MacBook, I create a separate conda (Anaconda) environment for TensorFlow.
A few years ago I bought a System76 laptop with a 1080 GPU. I rely on the System76 update system to keep Python + CUDA + TensorFlow all up to date. I manually install PyTorch for when I need it. This works well because I only use this laptop for deep learning.
Have you dealt with this in production? One of our data scientists suggested pulling in TF as a dependency of a mainline prod service just yesterday, and there were some suspicions from the ops people that it could cause trouble, but nothing concrete. Would appreciate your perspective on that if you don't mind!
Virtual envs are very practical for this, as they don't mess with the global Python env.
Also, for inference/production, I just export my models to ONNX format, which is easier to work with thanks to the available runtimes: ONNX Runtime for C++ or ONNX.js.
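For what that export/inference flow can look like, here's a minimal sketch assuming a Keras model plus the tf2onnx and onnxruntime packages (the model path, input shape, and names are made up for illustration):

    # Export a Keras model to ONNX, then run it with ONNX Runtime.
    # Assumes tf2onnx and onnxruntime are installed; paths/shapes are illustrative.
    import numpy as np
    import tensorflow as tf
    import tf2onnx
    import onnxruntime as ort

    model = tf.keras.models.load_model("my_model")  # hypothetical SavedModel directory
    spec = (tf.TensorSpec((None, 64), tf.float32, name="input"),)
    tf2onnx.convert.from_keras(model, input_signature=spec, output_path="model.onnx")

    sess = ort.InferenceSession("model.onnx")
    input_name = sess.get_inputs()[0].name
    outputs = sess.run(None, {input_name: np.zeros((1, 64), dtype=np.float32)})
    print(outputs[0].shape)

The same .onnx file can then be loaded by the other runtimes (ONNX Runtime's C++ API or ONNX.js) without dragging TensorFlow into the serving environment.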
Does the Magenta project used here have models for musical instrument identification, or can it be used to create one? Does anyone have better suggestions for identifying musical instruments from audio? I'm trying to solve the need gap 'Display musical instruments used in a song' posted on my problem validation platform [1].
Vibert is a badass! I met him through a friend last year, and for every project he releases he has a trove of awesome demos lying around. Great work dude, excited to see what you do next.
Every time I see one of these projects, it gets me more and more excited about magenta/TF.js. I feel like we're just scratching the surface with these kinds of projects, and they're already awesome.
As someone who works on model serving infra, there are so many tricky problems teams run into around running inference that—at least in theory/in part—seem like they could be elegantly circumvented by running certain models directly in the browser. I really hope to see more development in this space in the coming years.
The most famous such channel that I know of is ChilledCow: https://www.youtube.com/watch?v=5qap5aO4i9A
But there are others such as:
SynthWave: https://www.youtube.com/watch?v=eVcMequS9vE https://www.youtube.com/watch?v=p-Jdm0H-A9k
NeoChill: https://www.youtube.com/watch?v=kx63aT4UvDI
And many more. Great background music for coding.