To me the exciting thing about automatic music composition isn't royalty-free prerecorded music, but the potential for real-time composition. Imagine a video game where your actions instantly affect the music, not just by splicing together prerecorded clips but actually by having new music composed and performed in real time. Or, somewhat more fancifully, imagine a set of headphones that play a continuous soundtrack for your life based on what's happening around you.
A friend of mine did the music/soundscape and worked with the sound engineers to create layer upon layer of discrete musical loops and sound effects, which they then blended together in many different ways.
This is a much better path to follow; the royalty-free music market is so limited, and I see interesting tech going to waste. If you want more market data on this, it is available.
yeah, that's a really freaking good idea. One gets bored of hearing the same music over and over again in a video game... and having something play to the right ambient mood (battle, wandering, etc.) would be really cool.
Dynamic doesn't necessarily mean "random". A game with dynamic music could still have a set of common themes. Good composers can blend themes together and make variations on a theme for any mood. There's no fundamental reason dynamic music couldn't be every bit as good, musically, as pre-composed music.
Proteus (video game) has procedural mixing of a pre-set score based on which objects the player is near. Not much in terms of actual composition (harmony etc.), I suppose, but it's something that keeps nagging at me (so to speak) - trying to work out whether more than that is feasibly doable presently (and not only in the future).
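A minimal sketch of what that kind of proximity-based mixing could look like (the layer names, positions, and linear falloff are all made up; I have no idea how Proteus actually implements it):

    import math

    # Hypothetical always-playing stems; only their volumes change with position.
    LAYERS = {"drone": (0.0, 0.0), "birds": (10.0, 5.0), "statues": (30.0, -20.0)}
    AUDIBLE_RADIUS = 15.0  # distance at which a layer has faded to silence

    def layer_gains(player_x, player_y):
        """Return a 0..1 gain for each layer based on distance to its sound source."""
        gains = {}
        for name, (x, y) in LAYERS.items():
            dist = math.hypot(player_x - x, player_y - y)
            gains[name] = max(0.0, 1.0 - dist / AUDIBLE_RADIUS)
        return gains

    print(layer_gains(12.0, 3.0))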
This is a business I know well. Jukedeck is an example of how founders and investors do not conduct appropriate market research. There is limited market demand for low-cost royalty-free music for videos. One could argue there is an oversupply of royalty-free music relative to buyers. The quality is not good enough to disrupt the billion-dollar production music industry, which is top-heavy: a relatively small number of creators at the top get the majority of the money, and the rest compete for the little that is left. Jukedeck has raised enough money ($3 million) to be around for a few years if they control their burn rate. But Jukedeck in its current form is just another music startup destined for the Deadpool. (http://techcrunch.com/tag/deadpool/)
Wow, that was harsh. If startups were assigned to the Deadpool based solely on whether their 1.0 products were ready to immediately disrupt billion dollar industries, there wouldn't be _any_ successful startups out there.
Personally, when I see a team of people who are passionate about what they are doing create something new and interesting, I like to cheer them on.
I think the technology is interesting and cool. Even if it isn't marketable in its current form (what 1.0 product is?), I wish them the best in iterating on it until it is.
It's realistic. Nothing at all against Jukedeck, nice work and so on, but these automatic composition tools are also available to any other musicians out there, and (as you may have noticed) there's an oversupply of musicians who are demonstrably more passionate than any startup entrepreneur because most of them are willing to keep plugging away at their musical endeavor for its own sake rather than because they have any hope of becoming rich or even making an adequate living. Jukedeck is doomed for the same reason that a startup offering robotic table waiters to restaurants is doomed; there's an oversupply of humans willing to do a much better job for relatively little money.
What's harsh is burning through 3 million dollars while you're trying to figure out what's wrong. If my statements lead to a major change that saves the company, that is a good thing.
3 million dollars is nothing. It took a million dollars to make a single very low-quality video game, circa 2008. Salaries are expensive.
The investment model is set up to let founders "burn through" money while they explore new approaches to old industries. The investors don't really care that the money is lost. To be an investor, you have to assume 9 out of 10 of your investments will be write-offs.
So if you're not defending the investors' money, and if the founders are happy doing this, then why are you intentionally being harsh? Let them do their thing. Yeah they might fail, but so what? It's the only path to success.
It's not really productive to try to save people or companies via internet comments. You're more likely to demoralize them than to change their minds. Unfortunately, demoralization is often someone's hidden motive.
I have actually helped a company where our initial interaction was via an internet comment, so don't make assumptions. Stating market reality is not being harsh; there is an abundant supply of royalty-free music available at a very low cost or for free. If you're demoralized because someone states the facts of a competitive market, then that is very unfortunate and you're not really fit to be an entrepreneur in a highly competitive market. From experience, working on a product that is failing is very demoralizing. I actually would like to see them do something novel and great with their technology.
Moreover, there are great ideas in this thread that can actually help the company.
> I actually would like to see them do something novel and great with their technology.
As a developer, occasional FPS gamer, and musician, I'd like to see them tackle adaptive generative music that is actually convincing. I want music that takes cues from the gaming environment without obvious loop splicing points and without feeling mechanical.
While it's not easy to hear, I got one such piece of advice once on a company I started, and in true entrepreneurial spirit ignored it and kept iterating away. It was 100% spot on.
I wish people were more frank about what they think of start-ups, and I wish there was an acceptable way of doing that without insulting people.
In Italy, for example, it's the exact opposite of the Silicon Valley mindset where everybody will say "cool!". You will likely get criticisms like "it's nice but will hardly work" and so forth. This has the effect of discouraging people a lot. However, as you said, the reverse is also dangerous. There must be a middle ground where it's possible to get balanced criticism that makes founders aware of the risks of a given business.
Not only balanced, but constructive and well thought out. Saying "this sucks" or "I can't believe anyone invested in this" is useless. The above criticism, while not very constructive, is still grounded in facts (which I can't judge since I don't know the field well enough) and goes beyond "this won't work".
You're not considering a handful of other variables here:
(1) Convenience - There's limited market demand for royalty-free music that you have to manually edit and sort through yourself. By virtue of making it simple enough for consumers (vs. professionals) to use, the market got a lot bigger. By way of comparison, you could have said that there was limited market demand for mobile apps prior to the App Store. But the convenience the App Store offered with respect to finding and installing apps made all the difference.
(2) Licensing - This is proof of concept for the overall technology. You could view Jukedeck as a play to simply generate the sort of interest you need to get larger players interested in licensing tech, rather than an end-game in and of itself. E.g. you could imagine Apple licensing the tech for use in iMovie.
(3) Legal risk - It's one thing to say some random music track on the Internet is royalty free. It's quite another to prove it. There's no easy way to guarantee that the supposed original author of a track didn't copy it from someone else. This might seem like an edge case, but per the above points, if Jukedeck licensed their tech to companies like Apple, those legal issues become much bigger deals. Algorithmic music simplifies the legal question quite a bit.
On the other side, if the generated tracks are 100% original, this tool could be of invaluable help for music producers looking for quick inspiration. As an amateur music producer I'd be willing to pay for this if MIDI export is added.
That's a very interesting target, although it's doubtful it will meet the needs of their VC investors. It is an interesting project, but it is aimed at the wrong target market.
They just need to make the numbers work (and, as you pointed out, do some customer development). The music industry is pretty big and has many niches. I'm 100% sure many companies would LOVE to add AI composing capabilities to their DAWs/VSTis/AUs... including Apple in Logic Pro. They could try to license the technology.
If you're aiming for the musician market, I'm 100% sure you're wrong.
There's some basic algo-comp in Logic and Ableton already. But generally, musicians really hate the idea of having a machine writing all of their music for them.
This even applies to musicians who only work with loops, and to DJs who only work with complete tracks.
I think there are business models for algo-comp, but this isn't the best one. I'm not sure JukeDeck is necessarily heading for the deadpool, but it's maybe 50:50 at best.
Also, the production values on good stock music are very high - higher than here. Try https://www.shockwave-sound.com/ for some examples.
I think JukeDeck isn't quite up to the lowest quality tracks on there, and it's a very long way short of the better tracks.
> If you're aiming for the musician market, I'm 100% sure you're wrong.
Then why have at least 3 musicians in this thread requested MIDI export?
> Musicians really hate the idea of having a machine writing all of their music for them.
You don't get the point. I'm not saying that all your songs should be composed by a NN. This is valuable as a starting point (phrases, chords, chord progressions, melodies, etc). Then you build on top of that (or transpose accordingly). You know, there are those days as a musician when things just don't sound great and you need a spark.
I've been producing music since I was 12 years old, back in MS-DOS. I know the market pretty well and I have producer friends who own recording studios and write songs for professional bands (rock, metal, etc.). And I'm 100% sure they'd buy the concept, because they've previously asked me for related tooling.
The technology is the key here, and could be used as another tool.
Firstly, I didn't mention FL Studio at all. Why do you attribute a notion to me without any sign of me thinking so?
Secondly, there's a clear distinction between taking cues and reusing a harmony or a melody from another source. People have always done it and will do it more than ever, if they have the right tools.
Of course, it only matters if we still think about music as more of an art form than a production of non-material goods. Hope we still do.
I didn't attribute it to you, but your comment was analogous to this common criticism of some popular artists today, and reminded me of them.
Similar ways of diminishing other people's work are used everywhere, but are typically a result of being more or less elitist. "Not a real X if you didn't do it in way Y or use tool Z."
For some, the process is more important than the outcome (or just as), but I think it's wrong to impose such opinions on everybody else. Just my opinion, of course.
I think this is the perfect example of a "failure of imagination".
GoPro is a $2.5B company based around low-cost equipment for amateur film makers. They produce film editors that let you edit your films around pre-licensed soundtracks.
Let me say that again - you make your film around the soundtrack they supply.
The opportunity for Jukedeck isn't competing with the thousands of individual composers 1-on-1. It is building a platform that generates soundtracks automatically, and selling the platform as software both to end users and by sub-licensing it to companies like GoPro.
But that's how most editors work anyway, even Hollywood ones - if they don't have a score to work with they take temp music from some film they already like and later commission something similar. I'm not sure why you think this is such a radical concept.
At the moment the per-unit cost of music per film is constant.
But it's not. There's considerable spread in the price of music depending on the quality, specificity, popularity, and licensability* of what you want to buy.
* up to a particular number of copies made, or with restrictions on sublicensing or other contractual factors.
You make a new film (which includes the entire range of things from Hollywood films, to pro-Youtubers to films you share with a couple of people).
For music you can
1) Get custom music written specifically for your film. This is usually expensive (unless you do it yourself, or know someone to do it for you) but has the advantage (if done well) of matching exactly what you want for your film
2) Find an existing piece of music and license it. This has variable costs (or maybe free), but won't be written specifically for your film and possibly will also be used elsewhere.
3) Use "template" music included in whatever software tool you are using.
AI-generated music has the advantage of (1) but the cost structure of (3).
It'll be a long time before it is useful for Hollywood films. But for Youtubers it is a completely different story.
nah, AI music falls more into case (2) than (1). You're fooling yourself if you think three genres and a couple of variations are equivalent to 'custom music written specifically for your film'.
We have had this problem, even with commercially-purchased royalty-free music. The idea of getting generated music that will never trip "Content ID" is nice, since having our customers' tracks pulled is an un-solvable problem for our team and it's really painful from a support perspective.
Plus, the idea that I've paid for legit copyright use and I still have to deal with takedown BS is doubly frustrating.
I would use this for my personal videos. I've had videos taken down from Facebook before because, even though I was just sharing it with friends, I used a track that was protected by copyright.
Being able to construct a track with the right feel without worrying about copyright would be amazing for a simple video editor or on something like Instagram.
Sure, but most people want to use popular music; as a result, the most popular video editor for Instagram has licensed popular music from the majors. This is not a novel business.
Interesting. That doesn't scale though, because you can't upload the same clip to YouTube without it being flagged. Having rights to backing music is not great as a differentiator for a social network.
"Just sharing with friends" is still copyright infringement. The owner of the copyright--not you--gets to decide how the work is reproduced and distributed, unless you have a defense like fair use. (I'm not saying this is always a good thing; it's just how the law works.)
Yeah it is, but in the scheme of things it's pretty benign. I couldn't purchase the rights to use the song even if I wanted to, and I'm only using a portion of the track, so I'm not actually denying them income.
Yeah, but you're probably not willing to pay enough to make it profitable if your videos are for personal entertainment only rather than aiming at any kind of significant audience.
From the data I've found Pandora pays around 0.2 cents per stream. I'd be more than willing to pay that (or even 10x as much) to use a track on a video that a couple of hundred friends might see.
I really don't agree. I like the interface and it's simple. I make indie games, but I suck at music creation. As of today I'm a paying member, just bought some menu music.
The point of this isn't having infinite royalty-free music, it's the technology. The royalty-free music part is just an easily implemented, temporary application that gives it a business purpose while they try to build up the technology into something great. There are a lot more applications for good music generation than royalty-free music.
According to their pricing page ( https://www.jukedeck.com/pricing ), you can use the track free of royalty, but if you want to actually own it, you have to pay $199.
But this falls kind of in a gray area. If the AI created the tracks, why does the company own the copyright (and thus, have the right to sell it)? In December 2014, the United States Copyright Office stated that works created by a non-human are not subject to U.S. copyright (see: https://en.wikipedia.org/wiki/Monkey_selfie#Copyright_issues ). So, in theory, AI could also own copyright.
Moreover, do they actually check every newly generated track to make sure it's not too similar to previously sold tracks?
This isn't a legal opinion and shouldn't be taken as legal advice.
Interestingly, I think New Zealand law does have this case covered. There's special mention of "computer-generated" works under Section 5[1] of the Copyright Act which states (as I read it) that the person who wrote the computer program to generate the audio is the author of the audio.
EDIT: Had another read and thought that actually it could also be the person that inputted their choices into the computer program that undertook "the arrangements necessary for the creation of the work"... Copyright is a murky section of law.
No, because you didn't create those other files, your enumerated bits won't have the right color ;)
"Bits don't have Colour; computer scientists, like computers, are Colour-blind. That is not a mistake or deficiency on our part: rather, we have worked hard to become so. Colour-blindness on the part of computer scientists helps us understand the fact that computers are also Colour-blind, and we need to be intimately familiar with that fact in order to do our jobs.
The trouble is, human beings are not in general Colour-blind. The law is not Colour-blind. It makes a difference not only what bits you have, but where they came from. [...] The law sees Colour.
Suppose you publish an article that happens to contain a sentence identical to one from this article, like "The law sees Colour." That's just four words, all of them common, and it might well occur by random chance. Maybe you were thinking about similar ideas to mine and happened to put the words together in a similar way. If so, fine. But maybe you wrote "your" article by cutting and pasting from "mine" - in that case, the words have the Colour that obligates you to follow quotation procedures and worry about "derivative work" status under copyright law and so on. Exactly the same words - represented on a computer by the same bits - can vary in Colour and have differing consequences. When you use those words without quotation marks, either you're an author or a plagiarist depending on where you got them, even though they are the same words. It matters where the bits came from." - from http://ansuz.sooke.bc.ca/entry/23
Basically, non-recorded metadata about how a sequence of bits was created matters too. Not just the bits themselves.
Now you've simultaneously copyright infringed all the worlds works by sharing them on the internet, AND generated all other works, making it impossible for anyone else to ever not infringe on your work.
I remember someone created a P2P file sharing system that used 'munges' of files to create blocks of data by themselves that have no meaning (a bit like http://monolith.sourceforge.net/). These blocks were then transferred around the network, and people could claim "I'm not transferring files, I'm transferring meaningless blocks of data".
Sure, but (and I'm not a lawyer), intent of law is just as important as the literal meaning of law. Cases play out this intent and add to the corpus of knowledge as case law. So if you did write a program to generate all the data in the world, I'd imagine people would look at your intent, rather than just what you literally did.
You'd run out of disk space before you even generated "Hello, World". Assuming ASCII only, that's 96 bits. In order to save every possible character combination you'd need about 9903 yottabytes [1] of storage.
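A quick back-of-the-envelope check of that figure, interpreting it (as the estimate seems to) as one bit of bookkeeping per combination and decimal yottabytes - just a sketch of the arithmetic, not a claim about the original source:

    bits = 8 * len("Hello, World")       # 96 bits, as stated above
    combinations = 2 ** bits             # ~7.9e28 distinct 96-bit strings
    yottabytes = combinations / 8 / 1e24
    print(f"{yottabytes:.1f} YB")        # ~9903.5 YB, matching the estimate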
If you went through and sorted those which were pleasant from those which weren't, you would own all those you sorted. Good luck getting even one picture that looked anything more than static or pure color.
Here's an—in my opinion—almost exact equivalence: a font is a little program in a nearly-Turing-complete language. Nobody really cares what fonts end up on your computer. But if you create an image (e.g. an advertisement) that contains pixels manipulated by the font-program you ran on your own computer, then you have to pay to license that use of the font.
Within the realm of audio production, there's an even closer equivalence: soundfonts, which are also little nearly-Turing-complete programs. Playing a MIDI file "through" a soundfont? No problem. Putting that track on a CD and selling it? Nope, need to license the font.
Really, you could think of procedural music generation as a really complex and "stateful" soundfont program that has a nonlinear relationship with its input.
But in your examples, the derivative product depends on creative input that was put into the original product. A music-generating AI can be replaced by another AI created by different people on the same principles, so that it produces similar music. Binaries produced by gcc don't bear the GPL license, and Adobe cannot dictate the license on pictures created in Photoshop.
> Binaries produced by gcc don't bear the GPL license, and Adobe cannot dictate the license on pictures created in Photoshop.
I don't think there's anything legally stopping either behavior; it's just that neither GNU nor Adobe are in a market position where anyone would put up with that sort of behavior.
US law in this area is interesting. There is no "sweat of the brow" copyright in the US. This was clearly established in Feist vs. Rural Telephone, the Supreme Court decision that it was permissible to copy data from telephone books into a database. The Constitution gives Congress the power "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries", and the Supreme Court says Congress can't go beyond that in intellectual property law. The US, therefore, has no database copyright, although some countries do.
This was followed by the famous Bridgeman vs. Corel, which established that taking a picture of a work doesn't create a new copyright. Thus, pictures of public domain works are public domain. Despite much huffing and puffing and disinformation by the museum community, that's now settled law. Nobody has gone to court to try to overturn it. (The National Portrait Gallery (UK) threatened to sue Wikipedia, then backed down once they realized they would lose.) There's also Meshwerks vs Toyota; a 3D scan of a physical object doesn't create a new copyright in the 3D scan. That's an appellate decision and reaffirmed Bridgeman.
So, for copyright in the US, there must be an Author. (This can be a corporation, but that comes under the law on work-for-hire; the individuals involved are the initial authors but the rights accrue to the employer.) You can make a strong argument that under US law, works created by computers are not copyrightable.
> You can make a strong argument that under US law, works created by computers are not copyrightable.
I'm genuinely not sure what "works created by computers" means.
I don't know exactly how Jukedeck works, but it seems self evident that it is the product of a huge number of creative decisions - which sound samples to include or generate, which sounds can be combined in a pleasing way, which melodic patterns are appropriate for various genres, how melodic patterns are modulated over time, etc. etc.
It's not as if the Jukedeck team created a general purpose AI and said "go make some music". They designed a system that can generate a limited (though large) range of music based on their own sense of creativity, taste and style. I argue that the music generated by such a system is clearly creative expression and thus subject to copyright to the same degree as traditional creative expression.
I do not think "sweat of the brow" decisions are relevant here. The lack of "sweat of the brow" copyright means that simply performing labor without a creative element does not qualify the product of that labor for copyright protection. There must be a creative spark present in the generation or transformation of the work.
The fact that, with Jukedeck, the creative spark happens at the time of producing and editing the code rather than at the time of the code generating the music is not relevant in my opinion.
Digital tools are used all the time by artists to create works that they own. Doesn't this count as a more complex tool to generate digital works that they own?
Kind of like a studio letting you use their equipment under a prior agreement that anything you generate they own until you purchase it.
Their marketing copy puts it into a murky area, too. "You have control over your music, so you can create tracks that do exactly what you want." That certainly implies you're the creator, in the same way that an artist who uses tools in Photoshop to make a picture owns the copyright on the picture, rather than Adobe (or perhaps more pertinently, someone who uses Garageband loops to construct a song owns the copyright on the song, not Apple). So I agree - the 'pay this much and it assigns you the copyright' seems like a bit of a dubious claim...
"If the AI created the tracks, why does the company own the copyright (and thus, have the right to sell it)?"
The same reason artists like Autechre deserve to own the copyright to music they created using algorithms. They wrote the software that made the song; it's just a somewhat more abstracted version of composing, isn't it?
> If the AI created the tracks, why does the company own the copyright
Uh, do you think there's a sentient being here? It's just a relatively simple algorithm (at least compared to sci-fi-style AI), the same kind of thing that decides how to auto-fix colors in photos or do a transform in Photoshop.
Dynamic music generation has been a thing since at least the 80s. I think I had an Apple// program that did this.
>works created by a non-human are not subject to U.S. copyright
I'd like to see you prove to any court that some silly music generator is on equal footing with a living, intelligent animal. I can't imagine you not being laughed out of the courtroom.
The program wouldn't own the copyright, but that doesn't necessarily mean that the company which made the program would own the copyright. It could end up public domain.
E.g. the recently infamous selfie by a monkey was ruled public domain, having been produced by a non-human.
I don't get that monkey selfie ruling. If the photographer deliberately gave the monkey a camera to take pictures, shouldn't the photographer be the owner of it? If the monkey isn't a legal entity, why is it considered the creator? If I dropped my camera on the floor and the button got pressed, is the floor now the creator of the image? If I set the automatic-timer function on my camera, is the timer mechanism now the creator of the image?
Doesn't hold up. Library of Babel and the Universal Slideshow would be able to claim copyright on all photographs and literature because all photographs and all literature that ever will be or ever was is contained within their corpus/gallery.
Every picture you've ever taken, the Universal Slideshow already contains that picture. The picture of your birth, every memorable moment of your life, and every possible variation of your death.
Everything is already created. In regards to the Library of Babel, you can even search the library to verify this. (There is an extended Library that contains entire novels to be searched - the web version is a bit smaller/limited to pages.)
It's already there it just needs to be found. That's the entire point!
This is a false claim. The Library of Babel is far from complete; the site itself only claims completion up to 3200-character texts. And that's limited to lower-case alphabetic letters, periods, commas, and spaces. So there is still a vast number of possible texts that haven't yet been created.
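Even within those constraints the space is astronomically large, which is worth keeping in mind; a quick count of the distinct pages that alphabet allows (assuming the site's 29 symbols and 3200-character pages):

    symbols = 26 + 3            # lower-case letters plus space, comma, period
    page_length = 3200          # characters per page on the site
    pages = symbols ** page_length
    print(f"about 10^{len(str(pages)) - 1} distinct pages")   # on the order of 10^4679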
Irrelevant. It includes numerous works within those constraints that would ordinarily be subject to copyright. I tried it with a variety of short poems I know, both in and out of copyright, so there's more than enough material here for a test case, even though it doesn't include all possible material.
The Library of Babel that is online has those limits, yes. Which is why I explicitly mentioned I wasn't referring to the website.
>(There is an extended Library that contains entire novels to be searched - the web version is a bit smaller/limited to pages.)
The creator has variations of the code that aren't online. For example, restricting it to lower-case alphabetic letters was a choice made to "remain true to the original concept", but a base64 variation that allows for capital letters is possible. Furthermore, searching for entire books rather than being limited to 3,200 characters is also possible. Other languages would be possible too, but would require further variations of the code.
It's just a matter of getting this to work on the web for a site that sees 30-40k daily visitors; the algorithm isn't fast enough to meet those demands.
Rate limited in posts. My reply to the below is as follows:
Your claims only hold for the web version. I was not speaking of the web version. So you are explicitly wrong. There is a version of the library that is not limited to those characters, that is not limited to lowercase Latin alphabet, and is not limited to being searched 3,200 characters at a time. So what are you contending by making such statements?
Yes. The web version is held to those limits. Which is why I explicitly mentioned I was not referring to the web version. Your claims only hold against the web version. So I'm not seeing the point you are trying to make here.
E2:
I hate how HN rate limits like this. :)
I'll concede that. Does ASCII art of a Chinese character represent the same information as the Chinese character? That's stretching things so I'm not making that as a counterpoint - but more of a thought experiment.
I'm not challenging the claim that it contains a lot of texts; I'm challenging the claim that "everything has already been created and just needs to be found." That claim is explicitly false, regardless of the version of the Library of Babel you are using.
Edit: Reply to your reply:
One inherent limitation of the current code is that it has to assume some encoding. Currently, there is no encoding that contains all known glyphs (e.g. the Prince symbol, uncommon kanji, etc.), so there will be texts that cannot currently be generated, regardless of how much you increase the character set.
Edit2: Even if one were to allow ASCII art to represent a character, you then have the problem of how to distinguish between ASCII art substitution and actual ASCII art. Consider "Densha Otoko," which basically consists of message board posts that often contain ASCII art.
There is no point in the process where a human puts a creative stamp on the result. Your chiptunes did. This is effectively a fancy PRNG tuned to create appealing patterns. Copyright doesn't apply to non-creative works.
It sounds like one of those recurrent ANN things. On the scale of seconds, it's quite good. But there's no higher-level structure. For "folk", this is obvious; for ambient, you'd probably never notice.
The goal for this technology is to beat the two guys who write most popular music.[1] Another few years.
Your electronic offerings are OK. While I'm skeptical about your business model, as it stands, one possible pivot would be to pitch it at musicians as a dynamic arrangement/compositional tool.
I would very much like to try it with MIDI output, as I personally enjoy tweaking synths and effects more than the compositional tasks of harmonizing melodies etc.
This is incredibly cool! Really well done. The entire experience with creating and listening to the tracks feels really good. I'm going to be using this for my next product demo video.
It's crazy how good the tracks sound.
One thing that would be nice is the ability (like some others have stated) to tweak the generated sound. For example, there might be a lull at 0:24 but given my video it would be best for the lull to be at 0:52 instead. Would be cool if the layers/pieces were moveable a bit to make it perfectly fit the content it's paired with.
I agree, I'd love to be able to tweak tracks after they're generated.
Also I noticed in long tracks, like 5min+, that it gets a little repetitive, like the algorithm is trying to pad it out. At least in the few I created, could be a fluke.
It does sometimes get a little repetitive - that's something we're working on improving. This is one of the reasons we've limited tracks to 5 minutes long for now.
Feature Request: Allow playback of the track that has a row for each of the instruments that you can mute/unmute to isolate. Then allow regeneration of a specific instrument within the track.
I think it would be pretty cool to have this as a streaming service. Let me listen to it as it is generated, and then slice out pieces if I really like them.
I don't personally like the music - reminds me of Band in a Box - but the execution is good. Wonder what the stack is: an LSTM creates the MIDI tracks, and a Linux-based DAW with pre-set instrument/effect chains bounces that into an MP3?
There should be a fair bit of variation - the music is composed note by note and chord by chord, which means there are loads of different possibilities. But increasing the variation is another thing we're working on!
If I take the "buy the copyright" option, does it mean that the AI won't ever produce the same song again? And how do you handle not occasionally producing duplicate songs when someone else enters similar settings?
I was wondering, what would happen if the algorithm created an existing riff and used it in a composition? Gotye used a riff from "Luiz Bonfá - Seville" in "Somebody That I Used to Know" and it cost him $1M.
Short answer:
I'm sorry, I'm against the tech/business approach of this project.
Notes:
I'm an ambient electronic music maker (and a software maker too). I've seen a lot of similar projects since Brian Eno's generative music work, and I have also been interested in making algorithmic music (using some AI / artificial neural network schemes) with the amazing SuperCollider for a while.
I never achieved interesting results in terms of the "deep energy" that gets embedded when music is made by "special" humans (so-called "artists").
Full stop.
Experimenting with soundscape CREATION in recent years, I went back to what I call (along with Steve Roach) the "analogue approach": sonic seeds as analogue waves (electro-acoustic) -> (digital) elaborations made by a human artist. No MIDI. No samples used as-is. No presets.
I would call the musical secret a case of human intelligence: the "search and discovery" of the unknown.
Because, and this is the point, music is the discovery of mystery.
BTW, I do not want to get into the royalty-free / real-time composition topic. It has been discussed for so many years among electronic music communities!
Good music is like scientific invention: it comes from "singularities".
That said, an interesting point for me is that artificial intelligence could help musicians make music. OK, but this is another story, another vision of what music is,
for humans,
for machines
When I first read the headline this is what I was hoping for. Auto generated, upbeat, electronic music, that continuously changes, which I can listen to it all day while at work. I need music playing all day, and these days I've been trying to find electronic music that is non-memorable yet upbeat so that it's motivating but not distracting.
Instead I got a 5 minute long song. For my needs, I'll stick with soundcloud "Dj sets" which seem to go on for a few hours each.
That's a really good idea, and something we'd love to provide. We've only set the 5-minute maximum in v1.0 - there's no reason we couldn't give people access to more extended music. Hopefully when we do you might be able to use it at work!
For context - I'm building autonomous Escape Rooms that react to the player's actions. We currently have several long tracks that fit various sections of the games, and we switch between them at story junctures, by playing loud sound effects that mask the transitions.
I would pay in the order of thousands of $ (pounds in your case...) for this system running locally in my network, and in what I would call 'continuous mode'. By which I mean, I give it initial conditions and the track just evolves continuously forever[1].
Ideally, I could then send additional configuration messages and have it evolve further. e.g. Start with ambient/sparse. After a few minutes, I send a command to transition to ambient/sci-fi. Then later, to electronic/aggressive, with a seamless transition between. Even better if I can have a command that is essentially "react to this message", and it does something like a cymbal clash or whatever is appropriate for the settings.
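Purely to illustrate the kind of control surface I mean (every address, field name, and value here is made up; Python just for sketching):

    import json, socket

    # Hypothetical generator box on the local network listening for control messages.
    GENERATOR_ADDR = ("192.168.1.50", 9000)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send(msg):
        sock.sendto(json.dumps(msg).encode(), GENERATOR_ADDR)

    send({"cmd": "start", "mood": "ambient", "density": "sparse"})
    # ...a few minutes later, as the players reach the next story juncture:
    send({"cmd": "transition", "mood": "ambient", "style": "sci-fi", "seconds": 30})
    send({"cmd": "transition", "mood": "electronic", "style": "aggressive", "seconds": 10})
    # One-shot "react to this" message - cymbal crash or whatever fits the current settings:
    send({"cmd": "accent"})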
I've done improvised theatre shows before with a jazz band that did all this for us, and it was AMAZING! Being able to do it with an automated system would be a game-changer for my business. I've been considering writing it myself, but it would cost at least tens of thousands of dollars worth of my time, and our current solution is sufficiently adequate that I'm not ready to do so yet.
[1] Ironically, given your current business model, I'd actually have almost zero interest in retaining the music after it's been played.
On a positive note, I really enjoyed all the tracks I did make!
Thanks so much Schwolop - glad you like the tracks you made! Do you have a site for what you're doing? I'd love to get in touch about a potential collaboration.
Since we're B2B and know who our customers are, we have only a very sparse site at www.cubescape.com.au, but you can email tom.allen at that domain to get in touch. Cheers.
The output is interesting... decent enough for an ambient background track! An interesting Version 2 would be a way to construct a song for a particular video (perhaps ingesting the Adobe Premiere project file)... Making sure the downbeats are synchronized to visual cuts, being able to switch moods when the video calls for it, and perhaps even tweaking the video timing to match the rhythm (a Premiere plugin could allow the editor to specify an approximate end time for a clip, and Jukedeck could tweak the length to be an exact number of beats).
Thanks - really interesting idea. We can already make the exact duration and number of beats work, so it would be a question of lining that up with various points in the video.
You could ask for a naming convention in Premiere -- so that various clips or chapters are tagged with #dark or #exciting in their title, and then Jukedeck would play the appropriate type of music at the right time. If you can't get it lined up perfectly, you should transition the music before the video cut, not after.
The start time for a clip is much more important than the end time, so you should adjust the cutoff time for the clip by shrinking it from the end (never expanding it, as clips are often truncated when the footage becomes unusable, i.e. when someone walked in front of the camera, etc.)
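The length tweak itself is simple arithmetic once the tempo is known; a sketch (hypothetical helper, shrink-only as described above):

    def snap_clip_to_beats(duration_s, bpm):
        """Shrink a clip so it ends on a whole beat; never lengthen it."""
        beat_s = 60.0 / bpm
        beats = int(duration_s / beat_s)   # floor, so the clip only ever gets shorter
        return beats * beat_s

    print(snap_clip_to_beats(12.7, 120))   # -> 12.5 (25 beats at 0.5 s per beat)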
I can imagine a Premiere plugin with a stored credit card, and you guys doing a brisk business that way. You could even then become a marketplace for royalty music, have a store where people can upload their music tagged with moods, and video creators could see royalty-free options from Jukedeck or royalty options from third parties.
Places you can add unique IP are: the ability to extract a mood from the video clip itself (most likely through color grading, the speed of pans, the frequency of cuts, and whether it's tripod or handheld), and the ability to automatically extract a mood from 3rd-party music, so I can automatically see the pop songs that are best fits for my video.
Find a way to do this without having to upload the actual video file, because that will kill the experience. Work-in-progress video files are often multi-gigabyte.
I believe the Shred Video app does something similar. You select an audio and a video file, and it splices the video content to match the beats of the audio.
This could replace my Spotify subscription altogether. It's exciting to hear a song that I and a piece of software have made for the first time, and maybe I am the only one who will ever listen to it.
Michael from Vsauce talked about 'will we ever run out of new music?' https://www.youtube.com/watch?v=DAcjV60RnRw
Really cool product! Lots of good feedback on the page so I'm sure that's exciting.
A lot of people are mentioning video games, but another interesting use case for real-time streaming would be for running. There's tons of data on my smartphone that can help inform where the music might want to go next. Would definitely be cool to have my own personal band synced with my body's running rhythm.
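A trivial sketch of the sort of mapping I have in mind (made-up thresholds, and assuming the phone can report step cadence):

    def target_bpm(steps_per_minute, low=90, high=180):
        """Pick a music tempo near the runner's cadence, halving/doubling into range."""
        bpm = float(steps_per_minute)
        while bpm < low:
            bpm *= 2
        while bpm > high:
            bpm /= 2
        return bpm

    print(target_bpm(165))   # cadence already usable -> 165 BPM
    print(target_bpm(70))    # slow jog -> 140 BPM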
Thanks @Dwolb - great idea, and this is actually one of the things we're thinking about. We'll be announcing new products as and when they're released on our mailing list!
This is fantastic, could we get longer tracks? I want to use this for generating music in our corporate phone system, right now you get a randomly chosen ambient track, and this could let us provide unique music for anyone who has the [dis?]pleasure of being stuck in our phone system. Maybe even a "press * to listen to a new song" option.
As a musician who often plays just by myself, it'd be interesting if I could record something, upload it somewhere, and have it add accompanying music. So, say, a guitar track, with an "AI" that could add drums & bass that fit the timing.
What @m1sta said... as a starting point for music production. Royalty-free, 100% original music is a huge selling point if you know where it's most needed.
The market for a tool like this is decently sized, especially for electronic music. You could also bake the technology into a VSTi or AU (although a webapp a la Beatport might be much better), or even license the technology to big players in the music production space, like Apple, Yamaha, Sony, Steinberg, etc.
Not OP but I second this request. For a composer generated music can be a fantastic starting place. I'd love to be contacted if/when such a feature was added.
No need to search soundcloud or google for background music for your youtube vlog or intros. It's kind of nice that you can even slice to size right off of the bat. This does seem like a time saver for that market. Either way cool idea!
The music it generates in Electronic - Aggressive - Drum and bass - 130-135 bpm could replace my Spotify subscription. Pretty good stuff, and when you get tired of a track, you just generate a new one that you can download. Awesome!
Honestly, I spent a good amount of time on it trying to think how to create a video that matched the music I was 'creating.' I also saw myself creating a long song just to play in the background. Cool stuff.
Thanks @colmvp. If you do make any videos using the music you make, send them our way! And if you're interested in creating long songs, how long would you like them to be?
tl;dr: Frequencies that are related by some simple ratios are defined as "intervals" which are organized into scales. A handful of those make up the vast majority of popular western music and "sound good together".
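To make "simple ratios" concrete, here is a just-intonation major scale built from A = 440 Hz (one common set of ratios; equal temperament approximates these):

    A4 = 440.0  # Hz
    # Just-intonation ratios for a major scale: unison, 2nd, 3rd, 4th, 5th, 6th, 7th, octave.
    RATIOS = [(1, 1), (9, 8), (5, 4), (4, 3), (3, 2), (5, 3), (15, 8), (2, 1)]

    for num, den in RATIOS:
        print(f"{num}/{den} -> {A4 * num / den:.1f} Hz")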
The rules are different for different styles, and they're hard to get right, because musical structures are holistic, and multi-layered. The domain of interest is at least as long as a phrase, and can be as long as the entire piece - which means that potentially every note is related to every other, and the data structures you use have to understand how and why.
I'm finding the tunes the least successful element here. They sound like a fairly naive Markov sequence/chord spine mashup, with none of the character of really good tunes.
The sounds/production are much better. The drum parts are midway between the two.
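By "Markov sequence/chord spine mashup" I mean something like this kind of toy - a hypothetical illustration only, nothing to do with how Jukedeck actually works:

    import random

    # A fixed "chord spine" in C major and the scale degrees allowed over each chord.
    CHORDS = [("C", [0, 2, 4]), ("F", [0, 3, 5]), ("G", [1, 4, 6]), ("C", [0, 2, 4])]
    SCALE = ["C", "D", "E", "F", "G", "A", "B"]

    def naive_melody(notes_per_chord=4, seed=0):
        random.seed(seed)
        melody, prev = [], 0
        for _, degrees in CHORDS:
            for _ in range(notes_per_chord):
                # "Markov" step: favour a chord tone close to the previous note, with noise.
                prev = min(degrees, key=lambda d: abs(d - prev) + random.random())
                melody.append(SCALE[prev])
        return melody

    print(naive_melody())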
> I find an interesting parallel between 1/2/4/8 repeating beats/parts in songs, and binary...
There's more to it than that.
Good algo-comp is PhD-level work. Kemal Ebcioglu, who made one of the best efforts to date, had to add a lot of hand-rolled heuristics to an expert system that was trying to produce classical counterpoint. He was last seen working for IBM research on compilers and architectures for parallelism - a very smart guy.
It's easy to underestimate how hard the problem is.
My background is in music theory, and, as others have commented here, this is a question that there's no clear, agreed-upon answer to yet. Music theory can tell us what composers tend to do - which chords tend to follow each other, which notes to use when - but it's not as good at telling us why those chords / notes sound good together.
In building Jukedeck, we're applying our own theories about what makes good music. And, like any musical composition, it's an experiment!
re: what determines what sounds good, in some sense you can read up on music theory for this (scales / keys). in a more basic sense ("why is this sad and this happy?") it's actually an open and actively researched question in the physics of music. it's anyone's guess, really, why certain melodies are pleasant. is it a physical/structural/formal feature of the sound itself? is it a feature of our psychology or neurological makeup? who knows.
This is so awesome. Does anyone remember Algomusic? It was a PD program for Amiga from the early 90s that did generative rave tunes. This reminds me a lot of that, in a good way. Great job!
If I were doing this, I'd be interested in finding ways to find out more about what people like and don't like in the generated songs and then use that to improve the algorithm.
Will look forward to drilling down into your samples, once the site stops getting slammed. In the meantime, you may want to replace the live auto-generation demo with genuinely random (i.e. not upvoted or otherwise curated) pre-generated samples - that is to say, fairly representative of your automatic generative process, even though they happen to be pre-recorded.
Every second is precious to a first-time user of your site. And as we know, "Any sufficiently advanced technology is indistinguishable from a rigged demo."
Thanks for this - just to check, which demo are you talking about? Do you mean when you create a track straight from the homepage? Because if so, that's not curated!
Yeah, the demo on the front page where you get to choose from a couple of parameters. From a UX standpoint, there's no reason those samples need to be live-generated. A random pick from a large number of pre-generated samples (i.e. a large-ish pool of static samples, for each parameterized setting) will have the same effect, and give the user a much quicker response.
Ah - understood. The generation process is actually exactly the same on the homepage as on the dashboard, so what you're hearing is a genuine, non-curated example of our music!
That's right - sorry, we've obviously got to improve our copy! The quota is only applied to downloads - you can create as many tracks as you like and listen to them for free.
We want both non-musicians and musicians to be able to use Jukedeck, and this affects the kinds of controls we present to the user, which are quite different from in Noatikl. And we also offer use of the site, and a bunch of music, for free!
Is Jukedeck looking into real-time composition?