Ableton Live 11 (ableton.com)
198 points by alphadelphi on Nov 11, 2020 | hide | past | favorite | 266 comments



As a UI and UX nerd, one thing I really love about the digital music production scene is that software companies writing DAWs and VSTs know their audience and strive to give power users all the tools they need to be as productive as possible. Knobs, curves, programming (Max for Live), integration with external tools.

In other fields of software development, by contrast, we dumb everything down to the lowest common denominator, and power users can't be at their most productive because we're scared of complexity and build around the casual, computer-illiterate user.

Imagine what Ableton would look like if they were only optimising for the user that wants to put a couple of pre-made loops together and call it a song.


You'd end up with GarageBand. It all depends on the target audience. If you have a piece of software meant to be used by experts, it should be powerful enough that it doesn't restrict them.

I disagree that software has a tendency to dumb everything down to the lowest common denominator. Simplifying and automating things is the purpose of software in the first place, but there's a huge amount of software out there that is as powerful as it needs to be. To name a few examples: Blender, DaVinci Resolve, almost everything Adobe makes, and JetBrains' IDEs.


It's actually amazing how consistent the pattern is for professional creative apps. I did a study of market leaders across various industries[1] (based on the best data I could find), and every single one supports some form of extension. From a quick eyeball glance, I'd guess every single one of the apps I looked at also supports some form of scripting.

(This is only tangentially relevant to this thread, but the iOS App Store doesn't allow extensions or scriptable apps, and I believe those rules will single-handedly ensure the iPad only ever has a marginal impact on creative fields until they are removed.)

1: https://blog.robenkleene.com/2019/08/07/apples-app-stores-ha...


Anyone who works in a specialized field (or hobbyists) can tell you the iPad will never be a "pro" device for any serious work outside of art/digital painting. iOS is simply too limiting, and Apple intentionally neuters the App Store to protect the lowest common denominator from shooting themselves in the foot with malware.


Since music making is art I guess I won’t disagree.

There’s a large ecosystem of iOS music making apps and the iPad is actually an excellent interface for some applications, e.g. a kaoss pad.

For instance have a look at what Korg and Moog have available in the app store. Or check out drambo.


I think you could have a really good iOS DAW, and I think Apple will release Logic for the iPad Pro at some point. But yes, some of the iOS architecture is a bit weird in terms of app/plugin/file silos.

I think the "no windows" and sandboxed apps/files paradigms are very limiting for pro work.

And you really need lots of customisable keyboard shortcuts for most pro work; most iOS apps are extremely lacking in this department.


FL Studio mobile is already quite capable, and I'm sure Logic for iPad would be great. But most DAWs make heavy use of plugins/VSTs to provide extensibility to the environment, and I highly doubt that will be supported. Why would you use an iPad over a MBP for that kind of thing? I can't imagine the touch-screen gives any sort of real advantage.


iOS does support audio/synth plugins in the form of AUv3, it's a little bit clunky but works.

I work primarily with Logic (been using it for 20 years) but the form factor of an iPad is pretty nice - it fits perfectly on music stands and laptops don't. If MacBook screens could tilt 180 degrees, all the way flat, you might be able to use them on music stands too - but they can't.

If you are both playing and recording a real instrument at the same time, having the iPad on the piano/music stand in front of you is very useful. Many people use Logic Remote on an iPad on the stand, and Logic on a MBP for the actual recording at the same time. But I'm not sure why you should need two devices.

Touch screen is nice for some things too, like mixing or quickly toggling things.


I wonder how iOS Logic Pro would handle plugins.


The same way as iOS Garageband handles AUv3 now, I assume.

AUv3 plugins are meant to be able to run on both macOS and iOS in the near future (this has caused some concern in the plugin dev community because of different pricing models). On iOS, though, they are apps you install from the App Store, because everything that executes code on iOS needs to be a sandboxed app.


Okay, I do see why it would affect pricing then if they are forced to be in the App store. I always thought that they would be handled the same way as desktop.

In any case, I am still waiting for Logic Pro or Ableton to be ported to the iPad.


I have been using Ableton Live for music production for over a decade, and I can say that if it were ported to work on my iPad, I’d use it every day.


Or Android.

Actually, I was impressed with what people do in GarageBand on iOS. Unfortunately I haven't found anything comparable for Android. Only ridiculous $20/month apps.


Blame Google. Android has a craptastic audio stack and only Google can fix it.

Google would have to rip the heart out of Android and fix it properly for real-time programming, like OS X did way back around 10.2.

However, don't get me wrong, iOS isn't great--it's just at least possible. The fact that you can't actually send a defined packet to the audio system on iOS is infuriating--you just hand over a bag of bytes and don't know how many until the call ("HAH! HAH! Your device decided to prioritize something in the OS and your latency just quadrupled--sucks to be you.").

However, at least it's possible on iOS. On Android, not so much.


> Android has a craptastic audio stack

Didn't know. Looks like I'm a little closer to replacing my Android with an iPhone.


Do you use your phone to create music?


I liked the videos on the "iSongs" YouTube channel and wanted to play with GarageBand to try some ideas without a full-blown DAW. I'm just starting and looking for what works for me.


What advantages do you think it offers over using a MBP?


1) The launch grid (I forget its proper name) is an increasingly common pattern in touch-based music apps; I want to touch the Ableton one, especially in live performance.

2) The form factor of an iPad makes more sense in my music setup, in that I don't need to haul a podium or standing desk on stage as well as my instruments.

3) In producing tracks, most of my time is spent twiddling knobs, adjusting levels, etc., rather than dealing with external audio interfaces or doing fine-grained, fiddly things with a mouse. But now iOS supports a mouse, so I could even do that...


You should check out bitwig studio on a Surface tablet! They have had an optimized Multitouch UX mode for a few years now, and it looks pretty good.


IIRC Bitwig was actually featured in the Surface launch event (I think it was the very first generation of the Surface tablet, I might be wrong though).


One advantage is that recording drum pad sessions would be more convenient on an iPad (you could just tap out a beat right on the screen). On the other hand, there's something to be said for high quality midi controllers that allow you to do the same.

For instance, an iPad would be a great place to experiment and get an idea down, and then you could bring the idea to your other, more professionally oriented tools.


> everything Adobe makes

When Lightroom moved from v6 (IIRC) to Creative Cloud, a stunning amount of functionality seems to have been lost for the sake of making it idiot-friendly.


I was really let down when Adobe After Effects removed the ability to randomize "effect groups" (for lack of a better name, basically a group of layered effects that creates a single, easily recognizable effect, such as "grainy movie film").

I spent so many hours playing with that thing. Don't get me wrong, I still do, but now I have to make all those tweaks manually instead of rolling the dice and hoping for a gem. Sadly, I don't find nearly as many gems as I used to when I was able to roll the dice.


I haven't used AE myself, but I use many of the other Adobe products and I know that many (perhaps all or close to all) of them have scripting support, so I decided to do a quick Google search for AE scripting guides.

A couple of results came up: one is an official PDF about scripting AE CS6, the other a website about AE scripting. I assume most of the stuff from the AE CS6 PDF applies to AE CC as well.

https://blogs.adobe.com/wp-content/blogs.dir/48/files/2012/0...

https://docs.aenhancers.com

From the PDF:

> When you use Adobe After Effects, you create projects, compositions, and render queue items along with all of the elements that they contain: footage, images, solids, layers, masks, effects, and properties. Each of these items, in scripting terms, is an object. This guide describes the ExtendScript objects that have been defined for After Effects projects.

It might be worthwhile to look into whether scripting could be used to randomize effects and effect parameters; my immediate guess is that it would be possible. I didn't look into it closely, so if it turns out to be a dead end I apologise, but I figured I'd mention it in case it's useful to you. For the same reason, I can't guess how much work it would take to get the type of functionality that you want.
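To make the idea concrete: AE's scripting language is ExtendScript (JavaScript-based), but the dice-rolling concept itself is just sampling each parameter from a range. Here's that idea sketched in Python, with entirely hypothetical effect names and ranges (in ExtendScript you'd iterate over real effect properties and call setValue on each):

```python
import random

# Hypothetical parameter ranges for a "grainy movie film" effect group.
# These names and bounds are made up for illustration only.
EFFECT_RANGES = {
    "grain_amount": (0.0, 1.0),
    "flicker_rate": (1.0, 24.0),
    "scratch_density": (0.0, 0.5),
}

def roll_the_dice(ranges, rng=random):
    """Pick a random value within each parameter's allowed range."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

settings = roll_the_dice(EFFECT_RANGES)
```

Each "roll" gives you a full set of tweaked parameters to audition, which is essentially what the old randomize feature did for you.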


Lightroom v6 is still available as Lightroom Classic. Lightroom CC was a rewrite, and they are definitely playing catch up.


No, you'd end up with Fruity Loops


That would have been a fair comparison 10 years ago. These days FL Studio is no less advanced than Ableton in some regards (the mixer, for example).


As a long-time (15+ yrs) user of both: FLS has better ergonomics in a lot of places, especially the piano roll (sliding MIDI notes) and the arrangement view (automation as a standalone clip). Plus the razor tool, my god...

FLS doesn't have a Max for Live equivalent, but I'd guess 90% of people don't use MFL. The Edison recording workflow is hot trash IMO, or else I'd pretty readily move back to FL.


> the razor tool

I really, really enjoy using Ableton and have stuck with it for years now, but one of the first things that struck me when I swapped from FL Studio was how stupid it was that Ableton's piano roll lacked this fairly basic yet crucial tool.


Absolutely love this about Ableton and that’s the reason it’s my main tool.

As a creator it allows me to flow and shift quickly between different modes of production. It emphasizes speed and intuition, and it allows me to execute powerful moves in far fewer steps than any other DAW I use (Pro Tools, Logic, Cubase).

With the announcement of track comping (long overdue) it resolves my biggest pain point.

Particularly interesting is its newly integrated tempo-follow feature, which allows the DAW to follow the tempo of an external audio source, such as a live drummer. This bridges the performance gap between live bands with tempos that fluctuate and electronic instruments that typically adhere to a rigid clock source. It goes a long way towards resolving a long-standing tension between live musicians and sequenced electronic music. I'm thrilled to see Ableton doubling down on its essence as a performance tool and cannot wait to try it out.


Sounds like a great feature. Elsewhere in this thread I asked for advice on how I could take two mp3 recordings and synchronize their tempos and beats, but couldn't get a very clear answer, except that it is a "very basic task".

Reading your comment about this new feature it seems obvious to me now that if the tempos in those mp3 recordings vary independently, like they do in much recorded music, it would be impossible to sync the tracks without this new feature. Right?


Not quite. I think we're discussing two different things. You're trying to synchronize two pre-recorded songs. That feature has been around since the beginning and is fairly trivial to do once you learn some of Ableton's core concepts. That can be demonstrated in countless Youtube videos.

What I'm describing is composing music on the fly that is locked to a master tempo and clock (like most electronic music). Jamming with another musician and staying in sync is easy if they're also using drum machines or other hardware that can clock off the same source. What Ableton's new Tempo Follow feature does is allow the DAW to listen to the audio of live musicians (say, a jazz quartet) and then automatically speed up and slow down in real time when they choose to. Now, when the band pushes the tempo, my sequenced instruments won't lag behind the beat and get out of sync.
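Ableton hasn't published how Tempo Follow works internally, but the basic idea of tracking a live tempo can be sketched as estimating BPM from recent inter-onset intervals. A deliberately simplified illustration (not Ableton's algorithm):

```python
from statistics import median

def estimate_bpm(onset_times_sec, window=8):
    """Estimate tempo from the median gap between recent onsets.

    onset_times_sec: ascending timestamps of detected beats/hits.
    Uses the median so one rushed or dropped hit doesn't yank the clock.
    Returns None until there are at least two onsets to compare.
    """
    recent = onset_times_sec[-window:]
    if len(recent) < 2:
        return None
    gaps = [b - a for a, b in zip(recent, recent[1:])]
    return 60.0 / median(gaps)

# A drummer starting at 120 BPM (0.5 s gaps) and pushing the tempo:
beats = [0.0, 0.5, 1.0, 1.5, 1.98, 2.46, 2.94]
live_bpm = estimate_bpm(beats)  # drifts above 120 as the drummer pushes
```

A real implementation would need onset detection from raw audio first, plus smoothing so the sequencer's clock doesn't jitter, but the feedback loop is the same shape: listen, re-estimate, adjust the master tempo.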

--------

I think what you're trying to do is synchronize the tempos of two songs that both push and pull their tempos a bit. That involves compromise and maybe some automation. Let's use an example of two different kinds of songs which we'll call 1) rigid songs and 2) loose songs.

Rigid songs are like house music or techno. They're clocked exactly to a given tempo that doesn't waver throughout the song unless the composer automates it. This is common in electronic and modern pop music. These are the easiest to synchronize because they don't have any tempo variations and require far less work to tempo match. Loose songs are songs that have subtle or dramatic tempo changes: Jazz, old rock and roll records, or classical music (which usually has a lot of rubato).

What I believe you're asking about is trying to sync two loose songs, or a loose song and a rigid song. The short answer is that you'll likely have to compromise a bit. Let's say a Led Zeppelin song hovers around 80bpm. At some points the band slows down to 78bpm and at other times pushes the tempo up to 84bpm before ultimately settling back to a mean tempo of 80bpm. If you wanted to synchronize the Zeppelin song with a rigid song, you could set the Segment BPM (what you're designating as the average tempo in Clip View) to 80bpm for the Zeppelin song and then use warp markers to snap the song to the 80bpm grid in the parts where the band pushes or pulls against the tempo and deviates from the grid.
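In code terms, warping to a grid amounts to computing a per-segment stretch ratio between each pair of detected beats and the rigid grid. A toy sketch of that arithmetic (not how Ableton implements it):

```python
def warp_ratios(beat_times_sec, target_bpm):
    """For each beat-to-beat segment, compute how much to time-stretch it
    so it lands exactly on the target grid.

    Ratio > 1 means the band was ahead of the grid (segment too short,
    stretch it longer); ratio < 1 means they were dragging behind.
    """
    grid = 60.0 / target_bpm  # seconds per beat at the target tempo
    gaps = [b - a for a, b in zip(beat_times_sec, beat_times_sec[1:])]
    return [grid / g for g in gaps]

# Beat 2 arrives late (1.2 s instead of 1.0 s at 120 BPM):
ratios = warp_ratios([0.0, 0.5, 1.2], target_bpm=120)
```

This is exactly the quantizing trade-off described above: every ratio that isn't 1.0 is a place where the groove gets flattened toward the grid.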

At this point you might point out that manually aligning the audio file strictly to the grid compromises the emotional excitement and groove of the Zeppelin song by quantizing the tempo. You'd be absolutely right. It really comes down to what your intention is and how much you're willing to sacrifice that quality for the sake of making two songs synchronize. Preserving that feel then becomes more of an art and involves careful adjustment of the warp markers and using your ear to gauge the results. Often the more musically exciting choice is one that doesn't adhere strictly to a grid.

It also comes down to the content of the songs. Without audio examples it's hard to be too specific about what choices should be made.

Does that answer your question?


Yes, thanks very much. So to do the tempo-syncing, you say I need to "use warp markers to snap the song to the 80bpm grid ...". I would have hoped this could be done automatically (perhaps with the new feature in the new version).

If it can synchronize with live musicians, would it not be as easy (or even easier) for it to synchronize with a recording of live-musicians?


My impression is that's only true in consumer software and web development.

CAD, rendering/animation, high end photo editing, and a lot of other higher end, professional software are script-able, provide libraries and APIs, and work closely with their end-users to give them what they want.


Yeah I was thinking about Photoshop and Lightroom, which I also use, as other examples of what OP said.

Photoshop is scriptable/extendable and Lightroom has the more intuitive GUI. Apple started the Lightroom paradigm with Aperture, which was kind of the GarageBand of photo editing software until Lightroom caught up in ease of use.


Have you checked out bitwig? I love the help overlay. Explains everything while you can turn the knobs.

Same for their graphical programming thing ("Grid", I think). A nice hybrid between skeuomorphic and digital, with e.g. stylized wires hanging down in gravity but becoming transparent when crossing components.

Also super nice that most "visualizations" are also interactive (see the direct manipulation of the envelope at the end of the following vid)

https://www.youtube.com/watch?v=DWjYedds-RQ


Long-time Ableton user here. I just rebuilt my studio and decided to use Bitwig as my DAW. I'm still coming up to speed with it, but it's winning me over. Being able to have multiple projects open is nice.


I am a Bitwig user, really love that they support Linux. Their platform is very stable and the modulations are imo the differentiating factor.

Ableton is a beast; I haven't used it as much, but I know that much.


It really depends on your product and your intended audience. Ableton is not built with the intention that any average Joe can jump in and be productive from day one. Compare that to Garage Band, which is dumbed down because it has a different audience.

I think with virtually any software out there, there are tools that cater to both audiences.


I remember using Ableton Live v3 back in the day. Compared to MOTU Digital Performer (which had plugins that could put windows anywhere) and Reason (which took skeuomorphism about as far as you could possibly go), the minimalism and modality of its UI seemed restrictive. But probably very sensible for live performance. Feels like they prefigured mobile user interfaces by some years.


I was blown away when Propellerhead's Reason introduced wire patching, fully working on Windows XP!! This is one of the oldest videos I could find quickly: https://youtu.be/IC1ejK2chLg. Just to put the timing in perspective: this feature pre-dates Facebook!


I don't think people do that because they're scared, they do it because "power users" don't build unicorns in most verticals.


Interestingly, DAWs have always had flat UI designs, something that many HN commentators seem to frown upon.


Reason is quite possibly the king of skeuomorphic design, across all software categories and all time.

https://www.reasonstudios.com


For good and ill. Mostly good (I really love it), but treating knobs as weird faders is one place the metaphor breaks and it took me a long time to get used to it.


Yeah, I'm with you. The UI has a sort of fun appeal, but in practice it's not actually that usable when you know what you want to do and want to move quickly.


I don't believe that this is true. FL Studio had a bit of a dated 00's design for a while. Cubase, Reason, and Pro Tools were also not flat either. I'm not sure that "always" is accurate.

Though, Ableton has been flat since 1.0 it seems.


DAWs, maybe, but the VSTs that live in symbiosis with them have had (and a lot of them still do) the most skeuomorphic designs ever, replicating the look and feel of audio hardware.


Always?

I think it's quite the contrary. Look at the history of Logic, Cubase, Sonar, etc. For most of their time they didn't have flat UI. Ableton is the exception.


I remember the first time I saw Reason's cable system; it was so pleasant to use. The software makes you happy. Music software not only does something, it does it in style. I really appreciate the details given to each little thing.


I felt this way briefly until I tried to use it and then I started to understand that physical metaphors have their limits. Ableton seemed like a much smarter piece of UX design.


> Knobs

Can someone explain to me the point of a GUI knob?

Is it just the analogy to physical knobs on mixers that makes people comfortable with them in GUIs?


Knobs and sliders allow users to leverage their spatial and manipulation abilities to specify a value from a continuous range.

You could also ask users to just type in a number, but that interrupts the flow and pushes some other data out of their short-term memory. Composers and audio engineers would probably prefer to keep that part of their brain free to focus on other things.

Sliders can also be laid out in an array (as on a mixing console) to give at a glance a graphical representation of relative levels. Another example is on the classic "two turntable" DJ mixing setup where a horizontally-oriented slider is used to control the balance between left and right.

Knobs' advantage is they are particularly space-efficient.

So you tend to see sliders used for the primary controls in situations where it's useful to compare them side-by-side, and everything else as a knob.


To add to this, the precise value of a control isn't relevant in most mixing and sound design operations, only its value-within-range. Most of the decision-making work is done by ear, not by number. The knobs (and sliders) give you a quick reference to "can I get a bit more out of this".

Virtual knobs (and sliders) invite experimentation where a numeric control or text box doesn’t (so much). You can play with them. The dragging action gives you a little bit of tactile feedback.


Very relevant point. The end result of any individual sound is entirely dependent upon the other sounds accompanying it within the same physical space.

Just as a file isn't only a file, but an entity in a filesystem, a sound is not just a sound, but an entity in a sound environment.


They also serve as a kind of mnemonic device. To see the current state of things, the user does not have to (1) read a bunch of numbers or (2) remember the current state. Instead, they can look at the image (full of knobs and sliders) and utilize the much greater visual processing capacity of the human brain.


Unlike a slider, a GUI knob's visual feedback and skeuomorphic comfort comes at the cost of ergonomic interactivity.

The whole point of physical knobs is to leverage a person's thumb and forefinger (or tennis muscle if it's a huge knob) to twist precisely and efficiently to a desired position. With the two most popular digital input devices that torque doesn't exist-- you can't "twist" a mouse or tweak a touchscreen to change the position of a knob.

At least in most of the audio GUIs I've seen that use knobs, the author seems to have no idea that their skeuomorphic interface actually degrades interactivity. E.g., if they had simply used a slider the user could swipe their finger efficiently to quickly change several sliders in a row.

Even worse-- with a knob you have a similar "90-degree ambiguity" problem as with touchpad scrolling. Do you use dx, dy, or some other algo to choose the relationship between touch/drag and knob rotation?
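One common answer to that ambiguity is to sidestep rotation entirely: treat the knob as a hidden linear slider, and let the drag delta, not the angle around the knob's center, drive the value. A sketch of that mapping, with an assumed sensitivity constant:

```python
def drag_to_knob(value, dx, dy, sensitivity=0.005):
    """Map a mouse/touch drag to a new knob value in [0, 1].

    Dragging up or right increases the value; screen y grows downward,
    hence the -dy. The knob's on-screen rotation is purely cosmetic;
    the input is a linear drag, so there's no 90-degree ambiguity.
    """
    value += (dx - dy) * sensitivity
    return min(1.0, max(0.0, value))

# Drag 100 px straight up from the halfway position -> knob maxed out:
new_value = drag_to_knob(0.5, dx=0, dy=-100)
```

Combining dx and -dy means diagonal drags work too, which is roughly what many DAWs and plugin frameworks do in practice (often with a modifier key for fine adjustment).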

Worse still, some devs try to "bring the torque back" by coupling the knob movement to the distance of the mouse from the center of the knob. I really wish someone would feed this awful behavior back into the physical world and ship a mixing board with knobs that only update their value if you twist them using an included set of "knob tongs" :)


two words: scroll wheel

more words: scroll wheel with disengagable "clutch"


The only alternative UI for many of these knobs is a linear gauge/slider. A knob is often modulated and automated so it's not just an input widget, it's also a gauge. It needs to convey an animated value to the user.

Typing is usually out of the question since you need to work by ear. Some controls obviously have numeric input in combination with the knobs, for extra flexibility (e.g. if you made some notes with settings and just want to enter them).

Many of the values use mappings that make little sense as numbers. E.g. a knob marked 0..10 could mean a filter cutoff frequency from 20 Hz to the Nyquist frequency, logarithmically. Entering values in a text box with such arbitrary limits is a terrible experience (is it 20..22000? or 20..48000? or 0..1? or 0..100%?). Knobs hide the complexity of the logarithmic mapping, and it's easy to quickly set a value by ear.
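That logarithmic mapping is a one-liner under the hood: normalize the knob position and interpolate exponentially between the endpoints. A sketch, assuming a 20 Hz floor and a 22,050 Hz Nyquist (44.1 kHz sample rate):

```python
def knob_to_cutoff(pos, pos_max=10.0, f_min=20.0, f_max=22050.0):
    """Map a 0..10 knob position to a cutoff frequency, logarithmically:
    each equal turn of the knob multiplies the frequency by the same factor,
    matching how pitch and filter sweeps are perceived."""
    return f_min * (f_max / f_min) ** (pos / pos_max)

mid = knob_to_cutoff(5)  # geometric midpoint between 20 Hz and 22050 Hz
```

The user just hears "halfway" as perceptually halfway; the UI never has to expose the exponent.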

In a few cases the knobs are purely decorative and could just as well have been something else: most fixed-step knobs, for example, could be combo boxes. A vintage synth often had a rotary knob to select between 3 options (waveform, for example), so now you see that in UIs too. Doing that in a computer interface is purely for visual appeal.


It is not only the analogy. I like to use Midi controllers which have hardware knobs mapped to software knobs for live performances.

Sure, you could map a hardware knob to an on-screen slider or anything really, but a matching knob makes the interaction easier.


Sliders are a great fit for a touchscreen. And knobs are basically sliders that take up less room.

I don’t see the problem? If you give me a numeric input that doesn’t allow me to drag or swipe to change values then I’ll swiftly be looking for a different app.


What's a better UI for this? The times when a knob (or slider) are appropriate is when you're adjusting something by eye or ear (e.g., you don't know the exact destination value in advance).


Unlike a number box, they show the range of potential values and contextualize the current value - similar to a slider, though sliders are more often linear where knobs are non-linear. Just my 2¢.


It is just skeuomorphism, but sometimes musicians bind those knobs to real MIDI instruments.


Every serious musician I know has a MIDI controller on their desk, each mapped to specific controls on their DAW. A physical knob that maps to its GUI equivalent is really helpful.

This skeuomorphism isn't without cause.


Knobs are just sliders using less space.

You need knobs and sliders to convey information visually.


FruityLoops! But yes, I agree with all of this.


If you liked the old Fruity, you might enjoy this: https://dunkadunka.com


I do like this, thanks!


Cool, I made it just for fun when the pandemic started and business was slower than usual.


What? Are you saying FL Studio is a dumbed down Ableton? I must be misunderstanding you



Music and animation software are often very nice to their users.


> Imagine what Ableton would look like if they were only optimising for the user that wants to put a couple of pre-made loops together and call it a song.

It would look a lot like FL Studio, which caters more towards that end of the market.


FL Studio is quite powerful, though it does seem to cater more towards electronic music production and live sets. I don't think I've ever seen anyone use pre-made loops in FL Studio that wasn't an absolute novice.

GarageBand on the other hand...


You can run Python scripts in FL Studio. I think it's more accurate to say that FL caters to MIDI/piano-roll workflows. It's more about style than complexity level.
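For the curious: FL Studio's MIDI Controller Scripting runs Python files (device_*.py) with callbacks like OnMidiMsg. The FL-specific modules only exist inside FL Studio, so here is just the portable core, a sketch of normalizing a CC value for a mixer volume; the FL hookup shown in the comments is from memory and should be checked against the scripting docs:

```python
def cc_to_volume(cc_value):
    """Normalize a 7-bit MIDI CC value (0..127) to a 0.0..1.0 volume,
    clamping out-of-range input defensively."""
    return max(0, min(127, cc_value)) / 127.0

# Inside FL Studio this would live in a device_*.py script, roughly:
#
#   import midi, mixer
#   def OnMidiMsg(event):
#       if event.midiId == midi.MIDI_CONTROLCHANGE and event.data1 == 7:
#           mixer.setTrackVolume(mixer.trackNumber(), cc_to_volume(event.data2))
#           event.handled = True
```

Even this tiny hook shows the point: FL's extensibility is real scripting, not just a dumbed-down loop arranger.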


Dude, FL Studio is so much more complex and flexible than Ableton! So much so that I'm switching to Ableton for a while to see if I can get more productive and spend less time troubleshooting stuff.

I'd say the opposite: Ableton is more focused and simpler, streamlining lots of stuff (like VSTs) without being less powerful. Kind of Apple-ish, in a way.


I don’t know FL very well but Ableton’s power is cleverly designed and is easy to miss when you only use the obvious surface features.


Ableton Live is my absolute favorite piece of software right now. It fills me with a joy I haven't felt since I first learned HyperCard or Photoshop. It's just marvelous. In fact, it's so good that it makes it harder for me to finish music since I keep tinkering on things. But I can just sink hours into it without noticing the time pass.

It's so easy to get jaded with software these days because of freemium apps, dark patterns, social media apps hijacking human psychology to drive up engagement, etc. Ableton Live reminds me that software can be beautiful and empowering, and how good and natural it feels to just pay money for a well-crafted product.


Quite off topic, but: where would a person go nowadays to get up to speed about music making? (Learning, news, etc).

I used to tinker with beatmaking back when FL Studio was still called Fruity Loops, and this post has made me very nostalgic. I would like to get back into it as a hobby, but it's been like 15 years and I'm quite lost.


IMO, it's a great time for this stuff... but I've been recommending that folks who ask me start out with a groovebox, as there are a lot of inexpensive ones that are super fun, ranging in price from the Korg Electribes to the Novation Circuit to the (expensive, IMO) OP-1.

YouTube has a ton of stuff, but it's such a vast topic that even in the realm of something like Ableton there's a vast array of things you could be looking at, and it's super easy to lose focus.

I have an FL license from around the turn of the century... I think it still works. You might see if yours does and start there.


Some of my favorite youtube channels, in case they're relevant:

* Julian Earle

* Tom Cosom

* Make Pop Music

* edit: you suck at producing definitely makes the list

YouTube's the spot for info, but the CreateDigitalMusic blog has also been an impressive source since FL Studio was still called Fruity Loops.


If you like Make Pop Music, but house/EDM is more your thing, EDM Tips (https://www.youtube.com/channel/UC2vitamJHo7kSiiuA598u-g) YouTube channel is pretty similar to the Make Pop Music one. They both even have playlists for "Making a song in the style of [Some Famous Artists]".

EDM Tips is far less in-depth, but it's also more Ableton-focused, and he almost exclusively uses default Ableton plugins and sounds.


I went through the same transition about a year ago. I hadn't touched music making for about a decade before that (had kids).

It is kind of overwhelming because the options are huge. Also, we're in a resurgence of electronic music hardware right now, which is super exciting but also ramps up the paradox of choice and analysis paralysis.

However, we have two things going for us: 1. Most music software has free demos or inexpensive lightweight versions. 2. The used hardware market is very strong so you can recoup most of the cost if you sell something (especially if you bought it used).

This means it's relatively feasible to sort of incrementally explore the space and see what works for you. I do think you have to treat it as an exploration. Unlike other music genres, electronic music lends itself to very personalized workflows. The gear and software you have and how you have it all set up is a big part of the creative process. Also, the user experience of hardware and software affects the music you make in profound ways that are hard to predict. You have to just sort of try stuff and see what gels with you.

I'd suggest:

1. Research the artists you like to see how they make their music. You can usually find video interviews with them, often in their studio.

2. Watch YouTube videos for the gear they use and see which things look inspiring.

3. Acquire a piece of it and give it a try. Make sure to force yourself to sink enough time to get past the initial learning curve.

4. If you like it, keep it. Otherwise, sell it and move on. Either way, go back to step 1.

Finding the right balance between just playing around, working to finish songs, and tinkering with your set up is a continuing challenge. Be mindful of it (i.e. don't just fetishize gear acquisition, or grind so hard you take the fun out of it) and you'll be OK.



just want to add my favorite music production youtuber to the list https://www.youtube.com/andrewhuang


A couple of Ableton-made but completely browser-based tutorials (they might not fit the specific topic you're looking to learn, but I still think they're cool to check out):

https://learningmusic.ableton.com/

https://learningsynths.ableton.com/


Let me add Ableton's book "Making Music", which is quite well done and relevant to beginners as well as experienced artists/users: https://makingmusic.ableton.com/


They did it. They finally added comping.

I would be buying the upgrade if that was the only feature, but there are so many!


Seriously. That alone has been one of the big things keeping me firmly in Logic Pro's camp.

I'm also glad they're starting to add more instruments to the default package (notably, finally, a proper 'Upright Piano') - as the value proposition of Ableton Standard at $400 vs Logic Pro at $200 has been extremely one-sided in Logic's favour for many, many years now.

I do understand, of course; I'm strictly speaking as a Mac user. I honestly have no idea about the DAW landscape for PC's. I am extremely reliant on exporting songs from GarageBand on my iPhone as I create beats on the go quite often - and opening those .garageband files in Logic Pro to seriously hone in on the sounds, keeping the original instruments and MIDI intact. It's an irreplaceable ecosystem for me.


Speaking of instruments, after purchasing the Native Instruments Kontakt piano plugins - The Giant, The Maverick, The Grandeur, The Gentleman and Una Corda - I can never go back to the instruments provided with the DAW. The NI instruments mentioned above are just head and shoulders above the rest, imo.

My favorite is The Giant using either the "emotional" or "intimate" preset. Feel free to tweak those presets if you want, but good luck sounding better, it's already amazing.


The Giant emotional preset is fantastic.

Really is miles ahead of any piano sounds included in any DAW I’ve tried.


I agree. The Giant is messed up. So phenomenal. :)


Hard to compare price; Logic is crazy underpriced because its existence brings in so many Mac users. It undercuts other DAWs by a mile because of that.

However, what you get with ableton live suite is well worth the money. It’s good value.

But I’d still always run any DAW on a Mac. No audio drivers on windows beat CoreAudio on macOS.


> No audio drivers on windows beat CoreAudio on macOS.

It's true that generally speaking CoreAudio drivers are better, but then you have RME drivers on Windows which are just as rock solid.


It looks like a good update, but I'm starting to give up hope on seeing ARA2 as a feature... :-(

Something that wasn't a problem for $200 Logic Pro or even $60 Reaper (with their two-person dev team).


I am consistently pleasantly surprised by the way Apple keeps updating Logic with more and more layers of impressive features. ARA2 support was fantastic - so was their buyout of Camel Audio's Alchemy.

Alchemy alone added a ton of value to Logic, and it came as a .x upgrade. Fantastic.

EDIT: One of the only selling points of an iPad I can think of is using it on a stand with Logic Remote as an actual digital mixing window. The experience is unparalleled.


What is ARA2?


Following on from the link of the other poster, the Random Access part allows plugins like Melodyne to "see" all the audio data that's on the track all at once, without you having to transfer it inside the plugin manually. You just start working with it seamlessly in an edit window. It's a neat idea.



For someone like me who uses Logic, is there a reason to use Ableton? Logic is only a one-time $200 fee.


I’ve used both for many years and enjoy both. They’re largely equivalent in function, but they have very different feels. Ableton is more “hands on” and almost modeless. It assumes that you’re always manipulating and reworking things. Logic is more structured and “modeful”. Examples: Ableton defaults to time-stretching and transient slicing for audio; you have to turn it on in Logic. Ableton’s always in what Logic calls “marquee” mode, so you can grab pieces of audio and move them around. Ableton always shows you all the controls of all the (built-in) plugins; Logic makes you open them one by one. Routing in Ableton is dynamic; Logic has fixed “busses” you have to allocate manually.

Other than style, one killer difference is that Ableton has Push hardware, which is really an extension of the UI into physical reality. It’s a fantastic control surface that has no equivalent in Logic-world.

And of course, with all those features you can create amazing live setups with Ableton—thus the name “Live”! Logic’s comfort zone is very much as a studio tool.

Edit: I forgot to mention the higher versions of Live come with Max for Live, which is an amazing construction set for making audio/MIDI software. No Logic equivalent for that.


I converted to Ableton from Logic (pre logic x) a long time ago, but I still use logic sometimes.

There’s lots of small differences, and obviously you can make any type of music successfully on either, but generally my advice would be:

If you prefer recording traditional instruments & band tracks stay with logic,

If you prefer experimenting with sound design and electronic music in general, try Ableton out. Also try it if you're a programmer, because you'll enjoy Max for Live and the Max/MSP ecosystem.


Seems right. Although I chose Ableton because it's cross platform even if I do guitar primarily.


But then you would have to use a Mac, which breaks third-party plugins every year or so. Most of the stuff I invested in when I got started hasn't worked since Lion, and everything else that survived was dead with Catalina. On Windows you can install stuff from 2003 and run it with no issue. Will it work on ARM? lol. Most DAWs don't even support 4K monitors: Reason, MPC, Maschine. Not to mention the countless popular plugins that are the size of a raindrop on the screen (Omnisphere).


You can stop using Apple hardware for no additional software cost if you use Ableton.


I dunno if there is one. I get a lot of work done in logic.

I ended up with an octatrack for doing what a lot of folks are using live for, tho, and that's been super fun for me.


To me, Ableton is as much a performance tool as it is a production tool. I used to limit myself to production, then I bought a Push controller and it completely changed my workflow.


True but Apple makes it so cheap because they get their money back on hardware.


What is "comping"?


It's short for "composite" recording. As in, mixing several takes together to form a single one with the best bits of all of them.

Strictly speaking you can already do this using multiple tracks, but having a dedicated workflow will make it much easier.


Multiple takes intermixed on a single track. https://www.ableton.com/en/live/#comping


Recording different takes, then piecing together parts of those takes to create one final "perfect" take.


I love the UX of Live, and especially how it seamlessly integrates with Push 2 - really a joy to use. It does offer a great amount of depth in terms of capabilities for power users, but always refers back to basic UI concepts that can be applied to many different contexts.

What truly amazes me though is the stability of the platform. Yes, it's great for home recording and casual use, but the same application is used as a core piece in live performances with huge audiences. There's not a lot of "consumer" software out there that I would trust with that. Ableton Live is certainly one of those.


Not to thread-jack, but are there any worthy open-sourced competitors to Ableton? I'm just getting into DAWs and was thinking of buying Ableton Live but haven't had the time to explore/compare other options. My musician buddies all tell me not to waste my time and just buy it.


If you want a DAW that is similar to Ableton and works on Linux, you should go with Bitwig, which is in some ways better and more modern than Ableton, but has a smaller user base.


I have actually found the clip/scene feature in Bitwig to be better than the equivalent feature in Ableton. In Bitwig I can set the DAW to record a set number of bars for each clip and then start looping immediately after recording finishes, which is my perfect workflow for building a track. With Ableton I was not able to do this, and instead I would be required to hit 'stop' and then manually crop the clip by changing its length.


I'm pretty sure that you can do this with a Push using a fixed length clip. https://www.musicradar.com/tuition/tech/how-to-capture-the-p...

Can you not do it without the Push?


This is one of the huge blind areas of Ableton, and one that they chose not to address in 11. You do have to use the Push to launch clips of a specified length in live performance. There have been very complicated workarounds built by third parties (BinkLooper). I have wondered if this is all a ploy to force people to buy the expensive Push. At any rate, it has definitely kept me from buying Ableton. It is regarded as a live performance tool by many but the lack of this feature, which they could easily add, makes that regard incorrect.


I sort of agree with you, but Push is awesome and totally worth the cost in my opinion.

Even with a Push, there are a couple of WTF omissions that I would have thought would be easy to add, but I assume every DAW will have a few of those.

Lots of people use it live, which mostly negates your argument that the lack of this specific feature makes Ableton unusable live. Actually, is there an alternative DAW that is used live more than Ableton? I would guess the answer is "no".


Reaper is also worth checking out - but it doesn't have clips/scenes, which Bitwig does. Bitwig is well thought through, though it has some improvements to be made around the ergonomics of integrating with hardware instruments.


What does “modern” mean?


There is nothing remotely close in the OSS world. You might find you prefer a different DAW, workflow-wise, but Ableton is A+ at what it does.


The free software DAW is ardour. https://www.ardour.org/ Unfa has made some youtube videos about (and with) it. https://www.youtube.com/channel/UCAYKj_peyESIMDp5LtHlH2A It does not offer the same possibilities of workflow though, as far as I can see.


I think it would be more accurate to say "it offers a different workflow".

Ardour is modelled on "traditional" "linear" DAWs, rather than the live "play this thing when I hit this" flow that Live has enabled so wonderfully. Not sure you'd want to use Live for large-scale multi-tracking or movie post-production. Both will work for smaller-scale "traditional" record-edit-mix workflows.


It works well for arranging, but doesn't have a scene/clip/looping workflow (yet).


Heeey, glad you asked!

I'm not full-time on this, but I'm trying to build an OSS DAW. Here is a first tiny demo: https://timdaub.github.io/wasm-synth/

I also wrote about it: https://timdaub.github.io/2020/02/19/wasm-synth/


I'm not sure what you think a DAW is, but typically it tends to involve the possibility of dealing with large numbers of tracks and vast amounts of audio. How do you propose to do this in a browser context?


At this point I don't propose any design. But I'm confident it can be done.

You seem to know the challenges ahead. Mind sharing?



Just had a lot of fun playing this! Thanks for sharing. I've been working on a wavetable synth VST, but also have been looking at web APIs to do something similar to this. Thanks for the write up.


Hey, thanks! I was actually experimenting with WebGL to build an instrument similar to Ableton's Wavetable! But I'm not full time, so it's taking forever...


I know how that goes, best of luck!



Nah, FOSS DAWs don’t even come close. Your friends are exactly right.


Depending on your use-case there are a lot of options to consider before investing in a professional solution like Ableton Live.

- LMMS[1]: The go-to cross-platform FOSS DAW solution, with VST support and a healthy community.

- Ardour[2]: Much like LMMS, it's a classic open-source project that runs cross-platform and is quite fully featured.

- VCVRack[3]: A brilliant FOSS EuroRack Simulator. It has a built-in package manager with an enormous selection of officially supported and community submitted plugins, complete with VST and Linux support.

- Waveform Free[4]: By Tracktion, Waveform is a robust DAW with VST sandboxing and great built-in FX.

- Audio Tool[5]: A great web-based DAW, featuring online collaboration. Last I checked, there was no VST support.

- Bandlab[6]: Another web-based DAW by the folks behind Cakewalk. While it lacks VST support, it offers online collaboration and straight-forward defaults.

If you like scripting, Reaper[7] offers a personal license for $60 and supports Lua, Python, EEL2/JSFX, and its own ReaScript[8]. Reaper offers a highly customizable, robust solution for power users. While the above projects are some of the more complete ones, there are many more scattered around GitHub. If you're interested in the more niche/experimental ones, let me know and I'll reply with my more comprehensive list.

That being said, I've been an Ableton Live user for over a decade and I recommend it for anyone who wants to invest in a more serious music project. These days, I tend to reach for Studio One[9] by Presonus for more involved projects. While it lacks the focus on live performance, it is a more feature-complete DAW than Ableton. Studio One doesn't have the same frictionless workflow that Ableton is famous for, but it's an absolute powerhouse once mastered. It's very well funded and one of the youngest DAWs, so Presonus had the opportunity to adopt the best features of more dated solutions like Cubase, Logic, Pro Tools, and Ableton, while stripping away a lot of the bloat.

[1] https://lmms.io/ [2] https://ardour.org/ [3] https://vcvrack.com/ [4] https://www.tracktion.com/products/waveform-free [5] https://www.audiotool.com [6] https://www.bandlab.com/ [7] https://www.reaper.fm [8] https://www.reaper.fm/sdk/reascript/reascript.php [9] https://www.presonus.com/products/studio-one/


Do you have a recommendation for someone on Linux interested in producing EDM/House that has zero prior experience/equipment?

I have Reaper and Ardour, but a friend suggested Bitwig.


It's not really about the musical style, but about your desired workflow.

For example, Ardour really isn't designed around the sort of workflows that many people use for EDM/House production, yet it still has a reasonable number of prolific users who use it for precisely that style. Needless to say, the way they work is fairly different than someone using Bitwig or Live.


If you buy hardware, they usually throw in software for free. I bought a USB-C audio interface and it came with Cubase, and my MIDI controller came with Ableton. They are light versions, but they get the job done. It's enough that you can at least compare DAWs.

It's not free, but I like to plug Renoise whenever I can. It's more of a tracker than a traditional DAW, and it's scriptable in Lua. If you are a programmer you might like trackers.


Try Ardour. Open source and works great on Linux

https://ardour.org/


Neither Bitwig nor Reaper is open-source.


"worthy open-sourced competitor to Ableton" is a very tall order, but if we're just looking at open source DAWs, a friend pointed out this one to me recently. https://lmms.io/


DAWs, especially ones like Ableton, are one of those types of software that isn't that difficult on paper, but the entropy of one is high enough that you need a team of people just to get going in the first place.


What makes them not difficult? I coded a small synthesizer with MIDI input some time ago, and I thought the real-time audio processing was pretty challenging. There needs to be almost no latency.

An entire DAW will be much more complex.
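For a sense of scale, the latency budget comes straight from the buffer size. A back-of-the-envelope sketch in Python (illustrative numbers only):

```python
# Back-of-the-envelope buffer/latency math for a real-time audio callback.
# Illustrative only; real round-trip latency also includes driver and
# converter overhead on top of the buffer itself.

def buffer_latency_ms(buffer_size, sample_rate=44100):
    """Time one audio buffer represents, in milliseconds."""
    return 1000.0 * buffer_size / sample_rate

# At 44.1 kHz, a 64-sample buffer leaves ~1.5 ms to compute each callback,
# while a 1024-sample buffer leaves ~23 ms.
for n in (64, 256, 1024):
    print(f"{n:5d} samples -> {buffer_latency_ms(n):6.2f} ms per callback")
```

Miss that deadline even once and you get an audible dropout, which is why the processing code can't block, allocate, or wait on anything unpredictable.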


That's not the history of Ardour, really. You do need a team of people, however, if you want to make reasonable progress.

Also, I think any "on paper" description of a DAW that makes it appear that they "aren't that difficult" can just be ignored.


And do any of them have HiDPI support? When I tried, LMMS for example ended up being impossible to use on a 200 dpi screen.


Free but not open source:

Cakewalk, MPC Beats, Tracktion Waveform,...

Live Lite, the free edition (8 tracks), is usually bundled with $50 MIDI keyboards and is sufficient to get started with music production, recording, and adding a few virtual instruments at least.


Have you tried the Lite version? I don't have keys right now but they come with hardware all the time and I already have a Suite licence. On the Ableton subreddit sometimes there's giveaways.

It's kind of like a gateway drug, because you can do a lot of stuff on the Lite version but all the "fun" things like all the bundled instruments and effects that let you use it like a modular synth or the scripting and automation goodness come only on the more expensive version.


> Not to thread-jack

Not thread-jacking at all. I ALWAYS want a sub-thread on open-sourced competitors to any product.


It seems like they say nothing about whether they optimised the audio engine. There are a couple of low-hanging fruits they avoid picking, for some reason, that would vastly improve performance.

For example, they could wrap the VST plugins in their own separate processes. The operating system scheduler would then be able to distribute the load across the available cores more efficiently than Ableton currently does.

Another: when you have two tracks using the same send, all paths that lead to the send (the send included) run on the same core, which means you can easily run into issues if you use more demanding plugins. Here the solution would be to process the tracks in parallel up to the point where they reach the send - this too could be solved by running plugins in separate processes.

One more performance-related feature would be the ability to freeze tracks just partially and to freeze entire groups.

I think the other features added are great, but they are kind of useless if you can't play anything because the audio engine can't keep up.


> wrapping the VST plugins in their own separate processes.

This isn't an optimization. https://ardour.org/plugins-in-process.html

And also, the kernel cannot schedule more efficiently than Ableton does, because it doesn't know the data dependencies between plugins.


That's not what my experience tells me. If you read on Ableton's website:

> Live uses one thread to process a signal path. A signal path is a single chain of audio flow. In tracks where instrument or effect racks are used, with multiple chains in parallel, Live may use one thread per chain depending on how CPU-intensive each chain may be. If two tracks are "chained" by routings, for instance by a side-chain routing, a track being fed to a return track or any tracks being fed into each-other, they are considered dependent tracks and count as one signal path. Any dependant set of tracks will use one thread each.

https://help.ableton.com/hc/en-us/articles/209067649-Multi-c...

The way they allocate plugins to threads is very inefficient and wrapping each plugin in a separate process sidesteps it as the OS takes over the allocation to different cores.


You should read the link I posted (I wrote it).

Ableton's scheduling could be quibbled with, but it's better than the kernel can do.

Ps. I'm the original author of Ardour, a cross-platform DAW. We use a different model than Ableton for scheduling, but not that different. We run N threads (based on a user preference for how many cores to use) and they pull signal paths from a pool, process them, and go back for more. The result is "1 thread per signal path".
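A minimal sketch of that model in Python (purely illustrative - Ardour is C++ and the real scheduler is far more involved):

```python
# Purely illustrative sketch of "N worker threads pull signal paths from
# a pool": each worker grabs an independent signal path, runs its whole
# plugin chain on that one thread, then goes back for more work.
import queue
import threading

def process_chain(plugins):
    # Stand-in for real DSP: run the chain in order on this thread.
    signal = 0.0
    for gain in plugins:
        signal += gain
    return signal

def run_graph(signal_paths, n_threads=4):
    pool = queue.Queue()
    for item in signal_paths.items():
        pool.put(item)
    results = {}
    def worker():
        while True:
            try:
                name, plugins = pool.get_nowait()
            except queue.Empty:
                return  # no more signal paths to process
            results[name] = process_chain(plugins)
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run_graph({"track1": [0.1, 0.2], "track2": [0.3]}))
```

Each signal path still runs serially on one thread (its plugins depend on each other), but independent paths naturally spread across cores.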


I read the link. I think your assumptions are wrong, and they definitely don't match my use case. I know there is a cost to context switching, but even accounting for all that I was getting much better performance. I work with a 1024-sample buffer and use CPU-heavy plugins.

Let's say my CPU (i9 9900K @ 5 GHz) can run 8 plugins maximum on one core. Now I have 4 tracks, each with 4 plugins. That sounds okay, but as soon as I use a send effect on three of the tracks, they are all processed in one thread, which expects one core to serve 12+1 plugins. Suddenly I get overloads and I can't play anything.

If I use jBridge, the plugins on each track run in parallel, utilising all cores, and playback is smooth. The CPU usage per plugin is higher because of the wrapper's overhead, but I am able to run many more plugins without breakups.

I think Ableton's model does not take into account that you are bound by how many plugins one core can service; if you cram as many plugins as possible into one thread, you'll have performance issues. The strategy should be the opposite: spread the load onto as many cores as possible.


If plugins A, B and C are present in a given signal path, and ordered [ A, B, C ], then you cannot parallelize their execution. A must run first, then its output is fed to B, which must run to completion, and its output is fed to C. Using a single thread for this is the only sane implementation.

If you have plugins A, B and C spread across 3 signal paths, then you can certainly run them on different cores, which Ableton (and Ardour and almost any other DAW at this point) will do, by virtue on running each signal path in its own thread. The signal path/thread containing plugin A will execute on a thread/core, independently of the one containing B or the one containing C.


Let me try to explain again. In Ableton, one signal path executes in one thread. Any track that goes to a send or to a group gets added to that one thread. So if you have [A, B, C] on track one and [D, E, F] on track two, they will run in parallel - that's fine. However, if you route track one and track two to a send track with [G], they all run on a single thread, so it becomes [A, B, C, D, E, F, G]; if a single core can only process four plugins, Ableton will be stalling.

If you run each plugin in a separate process, then [A, B, C] and [D, E, F] will run in parallel and the results will go to [G]. As long as neither [A, B, C] nor [D, E, F] exhausts a single core, the playback will be smooth. The same happens if you group tracks or use sidechain: plugins in the chain all run on a single thread, but they could run in parallel up until the point where they "meet". The simplest way to achieve that is the sandboxing; a more efficient way would be to make the thread allocation more granular.


The specific example you cite seems to be:

  [ A, B, C ] --+
                |
                +--> [ G ]   
                |
  [ D, E, F ] --+

If Live really insists on running [ A, B, C, D, E, F, G ] all in a single thread, then I agree with you, Live is being sub-optimal at best, dumb at worst. Despite their description supporting this notion that their scheduling works this way, I'm skeptical, because they're just not that dumb (I hope! :)

However, sandboxing is absolutely not the simplest way to fix this. Ardour does not and will never sandbox plugins in processes. It just uses a topological sort and a dataflow algorithm to decide what can be run and when, and would execute [ A, B, C ] and [ D, E, F ] in parallel (or could, if on appropriate hardware), before finally running [ G ].

Moving plugins into a separate process has both overhead and complications, and is basically a crazy way to fix this issue, if indeed it is an issue as described.
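A toy Python sketch of that dataflow idea, using the hypothetical plugin names from the diagram (process() is fake DSP, not Ardour code):

```python
# Toy version of dependency-aware scheduling for the graph above:
# [A,B,C] and [D,E,F] are independent and can run concurrently, while
# the shared send [G] only runs once both upstream paths have finished.
from concurrent.futures import ThreadPoolExecutor

def process(path):
    return f"processed {path}"

def run_session():
    with ThreadPoolExecutor(max_workers=2) as pool:
        # Independent upstream signal paths execute in parallel...
        upstream = [pool.submit(process, p) for p in ("A->B->C", "D->E->F")]
        mixed = [f.result() for f in upstream]  # barrier: wait for both
    # ...and only then does the shared send run on their summed output.
    return process("G(" + " + ".join(mixed) + ")")

print(run_session())
```

The key point is that the barrier sits only at G, not at the start of the chains, so sharing a send costs you one synchronization point rather than serializing everything upstream of it.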


Yes, this is what's happening in Ableton, and from your description I'd love it if Ableton did the same thing as Ardour. I understand sandboxing is not the simplest approach in general, but it is exactly what jBridge does, and it gives me better performance in Ableton. By "simplest" I meant that Ableton wouldn't have to change how it does processing - jBridge is just a wrapper and doesn't modify Ableton in any way.


Again, sandboxing is not done for performance reasons. On the contrary, it degrades performance because of the context switch and thread synchronization overhead. I am saying this as someone who has developed a VST plugin host with sandboxing options.

Finally, please make sure you understand what Paul Davis has tried to explain to you. He's a well-respected industry expert.


To elaborate why sandboxing itself can't help performance, let's assume you have an FX chain A -> B -> C and "B" is sandboxed. After you've computed "A", you take the output and pass it to "B". You then have to wait for "B" to finish before you can go on and compute "C". So while the subprocess for "B" might run in another thread, the main audio thread has to go to sleep. See how there's actually no parallelism at play? In fact, you lose performance because context switches and thread synchronization (especially wake up from sleep) takes time.

I think jBridge might actually collect the result of the subprocess in the next DSP tick. This means that the thread can go on to compute "C" because it can take the result of "B" from the previous DSP tick. Now you indeed have things running in parallel. However, this adds an additional latency of 1 audio block. If you have several such sandboxed plugins in a row, this can easily add up.

This technique is sometimes called "pipelining" (see section 2.3 in https://www.complang.tuwien.ac.at/Diplomarbeiten/blechmann11...). Note that you don't need sandboxing for this. You can just as well dispatch items to a thread pool, so everything stays within the same process.

So I think the speed up you observe with jBridge has nothing to do with sandboxing per-se, but is just a side effect of its implementation. I can't prove this because I can't look at the source code, but for me this is the most likely explanation.
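A toy Python sketch of that one-block delay (plugin names hypothetical, no real DSP):

```python
# Toy illustration of the one-block pipelining delay described above:
# B's result is consumed one DSP tick late, so the thread never blocks
# waiting for the "sandboxed" B, at the cost of one buffer of latency.

def plugin(name):
    return lambda block: block + [name]  # tag each block it processes

A, B, C = plugin("A"), plugin("B"), plugin("C")

def run_pipelined(n_ticks):
    out = []
    b_prev = None  # B's output from the previous tick
    for tick in range(n_ticks):
        a_out = A([f"block{tick}"])
        b_out = B(a_out)  # conceptually runs concurrently in the subprocess
        # C consumes B's result from the *previous* tick (nothing at tick 0):
        out.append(C(b_prev) if b_prev is not None else None)
        b_prev = b_out
    return out

# First tick yields nothing yet; afterwards every block arrives one tick late.
print(run_pipelined(3))
```

Chain several pipelined plugins and each adds another block of delay, which is the latency accumulation described above.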


Your assumptions are wrong, because when you run jBridge, A, B and C are all sandboxed - you are essentially running every plugin in its own process. As I stated in other comments, performance is improved even with the losses from context switching. The effect is that I am able to run many more plugins than without jBridge, without Ableton having a breakdown, and Task Manager shows better core saturation.

I also stated I don't care about latency. I run my projects at a 1024 buffer, plus plugins introduce their own delays, so sometimes it takes a couple of seconds to hear something after hitting play. That's fine as long as the playback is uninterrupted. I am running very CPU-heavy plugins that bring Ableton to its knees; other DAWs handle them miles better - especially Reaper. Unfortunately I cannot get used to the workflow in other DAWs, so I have to use workarounds.

I am not sure why people keep defending this flawed architecture. It's what makes the Ableton community so toxic - though still better than Bitwig's: if you dare to comment that you dislike something about Bitwig, you'll likely get crucified.


Who was defending Ableton's architecture? All I can see are seasoned audio developers (including my humble self) trying to tell you how sandboxing works, how it can't improve performance, and how your observed performance improvements are likely caused by something else. So please stop claiming that sandboxing helps performance - it simply doesn't.

TBH, you don't seem to even try to understand our arguments. If a domain expert like Paul Davis tells you something, try to learn from him and not easily dismiss his arguments.


> Your assumptions are wrong, because when you run jBridge A B and C are sandboxed.

Even if that is the case, it doesn't change anything about the point I was trying to make... I acknowledge that jBridge gives you a performance boost, but you're wrongly attributing it to sandboxing. That's all I'm trying to say.


So why do you think there is huge performance boost when plugins are sandboxed and running in their own processes?


PS: here's my own VST host with optional sandboxing and pipelining. Both features are orthogonal. https://github.com/Spacechild1/vstplugin

SuperCollider (scsynth) and Pure Data - both of which I contribute to - have single-threaded DSP. Users told me that they can use many more VST plugins when they enable the pipelining option in my extension, which matches your experience.

Again, I can't look at the source code for jBridge. The only thing I know for sure is that sandboxing itself only causes performance loss.


I tried to explain this in https://news.ycombinator.com/item?id=25068976. Read the linked paper if you want to learn more about multithreaded DSP.

To put it short: I think jBridge does “pipelining“ to avoid the context switch overhead. But pipelining can also be done without sandboxing, leading to even better performance. Without pipelining, sandboxing only gives you a performance loss. Sandboxing does not imply parallelism.


Oh please, all replies to you here had facts, or at worst fair, informed assumptions.


"Yes this is what's happening in Ableton"

Are you sure? Since your theory is based on grouping tracks, it is easy to test: simply group all tracks in a Set with 20 or so tracks that use a decent chunk of CPU. All their CPU load should then shift from several cores onto one single core/thread (according to your theory). Being easy, I already tested this with a Set that used Live's native devices (no CPU usage change), but I can't test with VSTs right now (none installed). Maybe VST hosting has that problem, but the processing of audio summing, Live's own devices and M4L doesn't have any major problem with multi-core.


Yes I did such tests and then compared the same scenario with jBridge.


I think you may be misreading that FAQ page, it mentions sidechain as the reason for using one thread for "dependant" tracks.

Sidechain is not mere grouping/routing like the chain PaulDavis described with ASCII art (different from Sends too), with sidechain the plugin needs the audio from the other track for actual DSP processing, that's why sidechained tracks become a single thread "dependant" signal flow.

That page also mentions Live can use one thread per Chain if needed (for non Live-users, Chains are internal routings inside one single track), so it clearly states Live can use more than one core for one single Track.

Also, plugins like u-HE's use multicore just fine for a single instance, if you disable their own multicore handling (which conflicts with Live's multicore handling).

For Live's own devices, I'm pretty sure multicore works just fine, just tested again with Groups and Sends.

Don't know what problem you have, but it is not simply Live's multicore handling, it is some specific scenario you hit upon.


It doesn't do that. Whatever seems dependent gets lumped into one thread and limited to a single core. Fortunately Ableton cannot limit what plugins do in their own runtime, so, as you say, U-HE's multicore functionality works just fine and jBridge can run plugins off separate processes. Not sure why people get defensive about it, or attack me for pointing this flaw out. It's great that Ableton works as intended for you, but once you go beyond Live devices and use more professional tools, you'll hit the wall. People I know who hit this moved to Reaper, but I cannot get on with its workflow.


As a side note... thank you for Ardour!!


The problem is reducible to maximum directed cuts in arbitrary DAGs, which is hardly "low hanging fruit." The OS scheduler is a particularly poor choice of solution and introduces a ton of overhead while solving it suboptimally.

Granted, you can use a lot of heuristics to find simpler solutions, since DAW graphs aren't arbitrary DAGs and you can get clever about how you allow them to be constructed, but it's non-obvious. That's why parallel performance varies a lot between DAW engines.


ha, yeah. No real-time audio person trusts the OS scheduler any more than is totally and unavoidably necessary.


But how Ableton is doing it is worse than the OS scheduler - that's the point.


I don't have access to Ableton's source code so I can't benchmark it to tell you if it's doing better or worse than the OS scheduler, but I would be shocked if it was worse. The penalty for even thread synchronization is quite high, let alone interprocess sync'ing. Particularly when it comes to latency.

Sandboxing isn't done for performance reasons, which is why you can disable it in Bitwig. The sole purpose of sandboxing a plugin in its own process is that it is the only way to catch a segfault and prevent a shared library from crashing the host.
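The crash-isolation point can be shown with a toy sketch (assuming a fork-capable platform such as Linux; `bad_plugin` and `host_runs` are made-up names, not any real host's API). The "plugin" dies in its own process, and the host survives and merely observes an exit code, instead of being taken down by a segfault in an in-process shared library.

```python
# Toy illustration of why hosts sandbox plugins in child processes.
import multiprocessing as mp
import os

def bad_plugin():
    os._exit(139)  # simulate a segfault-style death (128 + SIGSEGV)

def host_runs(plugin):
    p = mp.Process(target=plugin)
    p.start()
    p.join()
    return p.exitcode  # host is still alive to report the crash

crash_code = host_runs(bad_plugin)
print("plugin died with exit code", crash_code, "- host still alive")
```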


It's from my experience. Ableton is optimised to cram as many plugins onto a single thread as possible, which means you quickly run out of processing power when using more CPU-demanding plugins: as soon as you exhaust one CPU core, the project will no longer play in real time. When plugins run in their own processes, the OS takes care of distributing the load across multiple cores, and that lets you run more plugins before it overloads. So I agree that sandboxing is not done to gain performance, but better performance is an unintended side effect. That shows how bad Ableton's scheduling is.


It shows how slow your CPU's single-threaded performance is, and the trade-off between latency and throughput in modern computing, more than anything.

My experience from actually writing low latency schedulers in user space as well as the publicly available material - like in Ardour - suggests different conclusions from yours.

Keep in mind that a naive benchmark like "CPU usage" is entirely meaningless. What you look at is the round-trip latency required to stay under a threshold of underruns/missed deadlines. Threading requires additional latency, and process synchronization even more. While I'm sure you see fewer underruns when splitting off into sandboxed plugins, I'm suspicious whether, in latency terms, it actually beats simply doubling or tripling the buffer size in the first place.
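For readers wondering what buffer sizes mean here, the buffer/latency relationship is simple arithmetic: one buffer of N frames at sample rate sr takes N/sr seconds to fill, so round-trip latency is at least one input buffer plus one output buffer (real interfaces add more on top). A quick sketch:

```python
# Back-of-the-envelope numbers for the latency/buffer trade-off.
def buffer_latency_ms(frames, sample_rate):
    return frames / sample_rate * 1000.0

sr = 44100
for frames in (128, 256, 512, 1024):
    one_way = buffer_latency_ms(frames, sr)
    print(f"{frames:>5} frames: {one_way:6.2f} ms one-way, "
          f"{2 * one_way:6.2f} ms minimum round trip")
```

So a 1024-frame buffer at 44.1 kHz already commits you to roughly 23 ms one-way, which is why "just double the buffer" and "spread work across processes" are trading the same currency.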


I am using an i9 9900K @ 5 GHz, so I think it is quite fast. The plugins I use are CPU-heavy, and you can only run a limited number of them on a single core. I am not interested in low latency; I run a 1024-sample buffer, but I would like my projects to play smoothly even if there is latency. Ableton unfortunately does not work well with such a use case, as it won't parallelise where it could. Sandboxing does just that, and I can run more plugins, even if it seems less efficient.


For other readers: It is a misconception that sandboxing per-se enables parallelism. On the contrary, it only hurts performance. The speedup observed with jBridge might have other reasons. More here: https://news.ycombinator.com/item?id=25068976


I didn't observe that. I have experienced no difference apart from improved performance.


> this could actually be solved by running plugins in separate processes.

Speaking as an audio developer, you would certainly not run things in a separate process to achieve parallelism. Instead you would use a thread pool.


Bitwig already does that with VSTs IIRC


Yes, and that works very well. You can also achieve this in Ableton with jBridge (intended as a way to run 32-bit plugins in a 64-bit DAW, or the other way around, or to overcome the memory limits of a 32-bit process), which runs plugins in separate processes; an unintended consequence is vastly improved performance. I remember having projects that wouldn't play at all but played through jBridge without breaking a sweat.


I started out with Sam Aaron's Overtone, which completely blew my mind and reignited my musical interest. I got to the point where I was scratching against its rough edges, and I didn't want to follow him to Sonic Pi, so I moved on to Ableton. It's awesome and has leveled up my production game, but the one thing I miss is the workflow of composing in a CIDER REPL: totally unbeatable. I guess I could just export the MIDI... but I decided to buy a keyboard and learn to play keys instead.

This is the music I make[1]. The older, more piano-based songs were written in Overtone, and you'll hear where they change up to be more textured and electronic-sounding when I got into Ableton.

1: https://soundcloud.app.goo.gl/FCnh


Minor workflow improvements, but frankly not enough to get me to pay for the update. The audio comping is nice. What I'd like is to be able to run Live as a VST inside Cubase, like FL Studio can, for instance. Cubase handles most of what I need for production and does it far better than Live. Live is more a creative tool than anything else; complex studio setups with multiple monitoring paths are where it completely falls down.


I'm surprised at how many HNers seem to be into ableton. Very cool!

Just curious: do you guys buy the full version? It's so expensive that I've never been able to justify it. I actually have been using the free version that came with a midi controller I bought a few years ago, and even that has been great.

(I'd rather spend the money on hardware synths)


I wonder if Ableton 11 is already compatible with the new M1 Macbooks.


Be aware that in reality, for live performance, you have to buy a costly Push device. That is, if you want to create live loops of a specified length and have them start playing automatically. They crippled this feature in the DAW so that you have to buy a Push, even though they could have allowed it without one in many different ways. There is a third-party tool called BinkLooper which gets you there, sort of, but it's not like what they worked out with the Push. I don't want a Push on stage, and I don't want to spend what is basically the price of Live all over again (plus have another expensive thing to upgrade again and again).


I don’t find this to be true. The Push is simply a remote control for the software, and it controls features that already exist in the DAW. I don’t believe Ableton is hindering other manufacturers’ MIDI or SysEx integration. Its strength is in its specificity: it is custom-tailored to Ableton Live alone and does not need to be some middle-of-the-road hardware device that also has to make sense with every other DAW.

I owned a Push 2 for 2 years before I sold it, and in my experience there is very little (if anything) it does that cannot be done with a 1:1 analogue in the software.


I haven't yet seen a third-party device that lets you create a clip of a given length, record into it, and have it loop. You can do this in software if you want to sit at your laptop in front of your audience. You can do the BinkLooper workaround if you want a simple external hardware device to interact with, but it's really complicated. I would truly love to find a way to do what the Push 2 does for launching clips without having to spend $800 for it on top of the Live license. Granted, a Push 2 can be bought for slightly less on sale, but it's still a lot.


This is not at all a hard thing to do in Max (and thus Max4Live), and it's also very easy to hook up midi controllers to let you steer it without ever looking at your laptop.


That's what I've been wondering and wanting to explore. Thanks for nudging me in that direction. Maybe I can do it all with Max and Live; then I'd be up for it all. Have you, or do you know of someone who has, done something like this?


Hi Joe, you will probably find more resources on the Max side. Lots of people build things like this and talk about them on the Max forums, and they can be run either in Live (as M4L devices) or alongside Live (syncing and piping audio over Soundflower or similar), or both. The Cycling '74 forum and the Max/MSP Facebook group have a lot of traffic and helpful posts; for whatever reason those seem more active for what you're describing than the actual Max for Live pages/forums, which are generally more about using devices than building them.

There are some differences when you run something in Max for Live, but they are pretty small and not hard to figure out (e.g. Live owns the tempo, you have access to the Live DOM, etc.). I would highly recommend the Cipriani and Giri books on Max, which will give you everything you need to get your loop controller working just as you want it. It's definitely doable. When I was doing shows with Live, I basically never used the Ableton UI after setting up my device chains and MIDI maps; all realtime interaction was through my own scripts and objects. Good luck! Maybe see you on the forums. :-)


C'mon, anyone can easily find dozens of videos proving otherwise. The Push API is open; anyone can write Python remote scripts against it, which is how apps like Touchable work. It's completely false that you have to buy a Push; you don't need the hardware, or even M4L. Even if you don't know Python, you can use ClyphX. And I'm talking about Live 10.


I'm sorry but this is not correct at all. I know people who were doing just this with Max 4 Live devices sans Push.

If you are at all code-savvy it is very easy to make your own custom rig that does everything you need for live performing in Ableton. You can write your own control surface scripts in Python (unofficially, but Google is your friend, and Ableton has said "we know they're out there and we aren't going to try to shut it down"), and you can make anything you want into your own customized equivalent of Push with Max for Live. You can do anything you want with your clips and loops from either control surface scripts or M4L devices. Ableton itself was originally just a Max patch!

Sure they've made it easier, but it's definitely not true that you can't do this without Push.


The push certainly has a deeper integration. I own a gen 1 (they're on 2 now), but rarely use it. In fact I wish your comment were true: I'd like to get some value out of the push, but... meh. In fairness, I've never been a hardware guy, so it's me, not the push, that's a bad fit.

What did they do to cripple the features of non-push interfaces? As far as I know the monomes, apc40s, etc of the world continue to work as they always did?

Pad controllers are a dime a dozen. I've never gotten the impression that Ableton was willfully sabotaging the market there.


They aren't sabotaging the market... just trying to get me to buy a Push, I think. Basically, it's easy to do something important with the Push and you can't do it without the Push. I feel you on the value of the Push... it doesn't have any for me other than to be able to create and record a live loop of a specific length on the fly and have it start playing.


They haven't done anything to cripple other controllers. In fact, they keep adding support for new classes of controllers all the time; Live 11 now supports MPE.


Push is absolutely optional. A great device, but still very much optional.


Now if ONLY they added the things all other DAWs have, say, a mixer where you can see all your attached plugins at once. Try mixing a 100-channel project with a few plugins on most tracks, where you have to click on each track first to see which plugins it has.

I'm completely baffled at why they can't just accept some standard designs that have become common practice over time and integrate them into the UI.

(I'm aware of the Options.txt hack which is unreliable and I'm not even sure it works with the latest Live 10)


I’m actually quite impressed with how Ableton came up with a UI that didn’t exist about 20 years ago and then stuck to it without trying to be all things to all people. The hardest thing to do is to say no.

What’s even more impressive is that they did not start out doing any user research. They made what they wanted and (pardon the pun) it struck a chord. If they had not had the courage to trust their instinct, but had done what so many designers do and focus on satisfying only the needs users can imagine and articulate, I think they would have made yet another Cubase or Protools lookalike. That’s a crowded market and they wouldn’t have made it.

Sure, I understand and agree with your criticism, but there are probably great tools that do what you ask for cleaner and better because those tools are designed around different ideas.

If they can squeeze this in and make it work: great. If they can’t, that’s okay too.


That's all understandable and I agree that they had the courage to go the 'other direction' and succeeded in doing so. And that's part of what makes Live special and what makes it fun to use, which I do on daily basis.

But for professional use, you do end up with a hundred or a few hundred track lanes in your projects, and mixing and tweaking the whole thing becomes an unavoidable disastrous chore.

If I understand this correctly, they planned on introducing it in Live 8 (hence the hidden flag in Options.txt, which sort-of enables a preview of this feature but is extremely buggy), but it never fully worked out because they also introduced Racks with a nested, tree-like flow of plugins, which doesn't play well with a conventional matrix-like mixer view. That said, I believe most pro users would be happy with a simplified solution like "if you fork a chain in a rack, its branches won't be shown in the mixer", which would cover 95% of the simple use cases, like seeing where your compressors are at a glance without having to click on each track a hundred times.

There are many such things, like proper multi-screen support (e.g. with 3+ screens), which, hypothetically, could also couple well with a separate-window mixer view where they could introduce chain branching and all that (and become the first DAW to do it!). For pro use, there are lots of features like this that may sound boring but whose absence ends up hampering your workflow a lot. It may not be as fun-sounding as 'yet another cool phaser bitcrusher spectral distortion plugin', but the reality is that the professional users they supposedly aim at do require core functionality that has been missing since the start.


I love Ableton, but for mixdowns, you're right, the UI paradigm is awful. I use Ableton, Max, and Reaper to have the right tool for the job. For mixing, Reaper is the bomb. I know of a lot of pros who prefer it over everything, regardless of cost. It's just the most flexible mixing tool out there.


I believe it may have to do with the fact that Ableton is a performance tool first and foremost. It would follow that all design decisions stem from that requirement. Yes, it is a full-featured DAW like Logic or Pro Tools, but Ableton differentiates itself by supporting that paradigm, which leads to configurations that are unconventional for non-performance-oriented DAWs:

1. Having all instruments and plugins in a horizontally-oriented trough allows quick drag-and-drop rearrangement. So if you want to rearrange their order very quickly you can do it. In Pro Tools it would be a two click operation at best and depending on how your plugins are arranged it could be significantly more than that.

2. It allows you to reach the knobs on their native instruments and plugins without the need to open/close plugin windows first.

3. MIDI mapping becomes much quicker and intuitive when you can simply toggle the MIDI/Keyboard mapping overlay for a given plugin.

4. Their Instrument Rack/Drum Rack/MIDI Rack devices are very flexible in their capabilities. Being able to nest plugins within them like macros and then quickly hide or reveal a dozen plugins within a single rack is great for decluttering an otherwise busy interface.

As someone who mixes most of their projects in Ableton, the Rack paradigm lets me condense a lot of parallel processing into a single instrument/audio channel. This makes for a very clean representation of very complex processing. Achieving something similar in Pro Tools, such as three channels of parallel processing on a single channel, involves either sending them to as many return channels or creating duplicates of that channel, each with its own parallel processing chain. That creates way more clutter on the screen.

That said, you CAN still have Ableton's plugins displayed in the manner you prefer. It's a hidden feature that's not documented: pretty easy to set up, but not discoverable. It's an option they developed but decided not to include in the final version. Here's a tutorial: sonicbloom.net/en/ableton-live-insider-tips-options-txt-part-1/


> That said, you CAN still have Ableton's plugins displayed in the manner you prefer. It's a hidden feature that's not documented: pretty easy to set up, but not discoverable. It's an option they developed but decided not to include in the final version. Here's a tutorial: sonicbloom.net/en/ableton-live-insider-tips-options-txt-part-1/

I've specifically noted above, expecting this reply: "I'm aware of the Options.txt hack which is unreliable and I'm not even sure it works with the latest Live 10". It does NOT work properly and it's unsupported/unmaintained. You get misaligned circles, up to 4 per channel, instead of plugin names, because redraw logic is broken.

> Having all instruments and plugins in a horizontally-oriented trough allows quick drag-and-drop rearrangement.

That's true in most DAWs. E.g. in Logic, you just drag plugins around, either in the channel strip which would be always open for a given channel, but ALSO in the mixer view. Try dragging plugins across channels in Live and see how that goes for you.

> It allows you to reach the knobs on their native instruments and plugins without the need to open/close plugin windows first

Same in Bitwig. In Logic, you can't but instead you can e.g. see all (native) EQ curves for all channels simultaneously in the mixer view and access it without having to select the track first.

> MIDI mapping becomes much quicker and intuitive when you can simply toggle the MIDI/Keyboard mapping overlay for a given plugin.

Same in Bitwig. In Logic it's a few clicks instead of one but the idea is the same.

> Being able to nest plugins within them like macros and then quickly hide or reveal a dozen plugins within a single rack is great for decluttering an otherwise busy interface.

True. But that comes at the price of those chains and effects being even LESS discoverable when it comes to mixing, since now a particular plugin may be buried two layers deep in a rack of racks.


I have a love/hate relationship with Live. It's 95% exactly what I want, and there is so much great hardware and software integration. But beyond making a few cool loops of stacked elements I never seem to make a song with it. I CAN make music with other apps, so I just don't jibe with Live. However, I feel like I'm perpetually one 'aha' moment away from it being my go-to DAW, so I keep upgrading it. So yes, I just bought the upgrade to Live 10 Suite, which gives me 11 for free...


Something I find interesting is that they're adding built-in entropy in this update. You will now be able to set the probability that a MIDI note fires in a melody, or set a range of possible velocity values for a MIDI note. I'm sure you will be able to map the probability ranges to the ADSR of synths, etc. I'm curious whether that will be more of a novelty or an actually valuable tool for exploring ideas.
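The mechanics are simple to sketch. This is a hypothetical toy, not Ableton's implementation or API, showing per-note probability and a per-note velocity range; a fixed RNG seed is what would make the output reproducible.

```python
# Hypothetical sketch of per-note probability and velocity ranges,
# in the spirit of Live 11's chance features (NOT Ableton's actual API).
import random

def render_pattern(steps, seed=None):
    """Each step is (pitch, probability, (vel_min, vel_max))."""
    rng = random.Random(seed)  # fixed seed -> reproducible "randomness"
    out = []
    for pitch, prob, (lo, hi) in steps:
        if rng.random() < prob:                   # does this note fire?
            out.append((pitch, rng.randint(lo, hi)))
        else:
            out.append(None)                      # note skipped this pass
    return out

# A hi-hat line: downbeats always fire, off-beats only sometimes.
hat_pattern = [(42, 1.0, (90, 110)), (42, 0.5, (40, 80)),
               (42, 1.0, (90, 110)), (42, 0.25, (30, 60))]
print(render_pattern(hat_pattern, seed=7))
```

Each pass through the pattern produces a different variation, but replaying with the same seed reproduces it exactly, which is the distinction between performing live and committing a take.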


Do you get to define seed values? I'm just thinking in terms of reproducibility if you are composing a track instead of playing live. A super cool feature I didn't even think would be helpful before


People seem to love this probability trigger feature in Elektron products.


YES. The Elektron sequencer is the first time I ever felt stepping out of a computer might make for a more powerful, fun sequencing experience. It's a joy to use.


Probability-based triggers are one of my favorite things, and something that I thought was pretty lacking here (I only came to Ableton just recently, but nearly every arpeggiator I've used up until now has had a note probability feature and it does wonders for variation in patterns).

So I'm extremely glad to see it being integrated.


> Or set a range of probabilities for the velocity value of a MIDI note.

The Velocity MIDI effect already lets you randomize all note velocities by a user-controlled range. This just lets you set random ranges per note.


I haven’t recorded for years, but when I first used Live I fell in love with it instantly. Two of these features (tempo following and velocity chance) are compelling enough I may get back behind a guitar. They were ideas I had years ago and considered trying to build myself but I had no idea where to begin. Good showing, Ableton!


I absolutely love playing around on Ableton, and this update has me excited for those randomization/naturally themed effects they're adding.

One thing I love about sound design is playing with randomization and effects to make really quirky sounds, and those new devices and features are gonna be a blast for me.


Huh, cool. How long before those spectral tools are as cliche as Beat Repeat, though...


I have version 9 and have difficulty figuring out how to do simple things with it. Like how do I take an existing MP3 recording and make it one of the tracks but synchronize the beats with those of the existing tracks?


Drag the mp3 onto an audio track, and turn on / adjust the warp settings to set it to the correct tempo.


Thank you for the advice. But even that sounds difficult for a novice like me. How do I "adjust warp settings" so that two tempos are the same? And I don't want to just have the same tempo, I want to align specific beats.

I know there is a forum and documentation but if all my time goes to trying to figure out technical solutions then all my creativity goes into finding technical solutions, rather than solving artistic problems.


> if all my time goes to trying to figure out technical solutions

Any musician has to learn their instrument. Commit to that.


> Any musician has to learn their instrument

I don't have the time or resources to "commit to that". Ableton Live is NOT my instrument; I would like to just use it as a tool.

It takes years of practice to learn any musical instrument and become a musician that other people would want to listen to. Maybe the same applies to Ableton? Does it take years of hard practice to master? If so, it would be proper for their sales literature to be up front about it: you will need years of study to master this instrument in order to produce something other people would want to listen to. You will need to commit to it :-)

The difference between Ableton and "ordinary musical instruments" is that Ableton runs on a computer; therefore its use should be easy enough not to require years to learn.


I mean, it takes years to become a "good programmer", just as it takes years to get "good at Photoshop" or to become effective with any professional tool.

You are expecting Ableton, a professional tool, to act as a toy.

Spend $100, take a course, or watch some tutorials in the styles you want to make, and you'll build up to proficiency.

Tools take time to learn. Toys less so.

I found that the Ableton Push was actually extremely helpful in taking my understanding of Ableton to the next level. I then sold my Push, and am more proficient with the software now.

If you don't have the "muscle memory" of other DAWs, then Ableton will be a new paradigm and absolutely hard to use and understand, especially the difference between MIDI and audio.

Edit: I had a long day, so I might sound harsher than you expect, but the core of my message is: learning new powerful instruments is hard. Best of luck learning ableton, it's great. Go make some drones: https://www.youtube.com/watch?v=QdRcpRxYJK0


Ableton is easy enough to use that you can have fun with it immediately. I think I could have worked out your question about syncing an mp3 within the first hour of using it. Warp is on by default for any audio you bring in.

If you're not prepared to spend some time learning how to use it, then maybe the whole thing is not for you. Most people enjoy the whole process.

You are being unrealistic if you think that just because Ableton runs on a computer means everything about it should be automatic. Ableton is a professional level DAW and as such has a lot of complexity under the hood.

Ordinary musical instruments have only a few knobs and take a lifetime to master. I don't know why you would suppose it should be any easier to master Ableton, which has literally thousands of knobs.


Agreed with siblings that Ableton is a powerful tool and powerful tools aren't trivial to learn. Nobody expects to come in proficient on day one for ProTools, or Avid, or SolidWorks, etc.

That said it's totally fair to not have the time to commit to learning a complex tool. There are other, more accessible DAWs out there you can use if you don't need the power this one provides. Garageband, for example, is easy to use and straightforward. Curious if folks know other ones similarly easy to use.


The advice about treating it like an instrument is good, IMO

I play a bunch of instruments and have been on quite a few recordings, but I had to spend the last couple of months learning a Novation Circuit and an Elektron Octatrack for a project I am working on.

TBH, it took me about a week with Ableton and a DVD course to learn enough to be fairly productive with it... that was back when there wasn't much on YouTube.

These things are instruments and require just as much work to learn.


I hear you. Ableton actually has pretty great tutorials in the program itself that can help you learn the software as well.


I understand that, and I'm not saying it isn't great, just that it seems more difficult to master than I expected. It can be fun to tinker with and learn if you have the time. One problem, I think, is that the documentation is scattered; there's no single Bible to read to learn it all, is there?

In the world of programming languages there's a lot of books called "cookbooks" - recipes for how to do things that many programmers will need to do. Say "C# CookBook". Is there such a book or resource for Ableton? If not maybe there should be. It might be a best seller :-)


> there's no single Bible to read to learn it all, is there?

There is. The reference manual:

https://www.ableton.com/en/manual/welcome-to-live/

It is incredibly well written. Possibly my favorite piece of technical writing ever, and I am a technical author myself.


https://www.adsrsounds.com/product/courses/ableton-live-10-t...

I used one of these a few years ago for an earlier version. Was helpful. I suspect this one would be just the same.


"I understand that, and I'm not saying it isn't great, just that it seems more difficult to master than I expected. It can be fun to tinker with and learn if you have the time. One problem, I think, is that the documentation is scattered; there's no single Bible to read to learn it all, is there?"

Just to commiserate, I feel the same way about the pedal steel guitar :D


Just like a violinist must spend time with such boring things as tuning the instrument, replacing strings, caring for the bow, learning correct posture, playing scales literally tens of thousands of times... etc.

Software is just another tool, like a painter's brushes or a musician's instrument.


What you want to accomplish here is one of the most basic tasks in Live. If you can’t commit to learning at least this small thing you might as well not even attempt to use Live.


> If you can’t commit to learning at least this small thing

The point is I don't know it is a "small thing". I don't even know if it is possible. I don't want to commit to learning something that may not be possible (as far as I know at the moment). And I don't know where I should even start.

Can you post a link that explains how to accomplish this "most basic task"?


The official Ableton YouTube channel has dozens of very short tutorials. It’s helped me a lot.


As a drummer the follow feature looks super interesting and useful!


What are the upgrade prices like? I might like to buy it, but I don't want a huge bill for upgrading every major version.


$229 for the Suite upgrade. I love Live, but it’s getting harder to justify putting additional money into it as a hobbyist. I might give Logic another go (purchased years ago and still getting free updates), even though it has always felt less intuitive to me than Live, for whatever reason.

Edit: I’ve seen other folks online posting a price of $183 for Suite upgrade, so I would just login and see what price they give you. I think I came from Suite 9, so maybe that accounts for the price difference.


This is why I ultimately switched over to FL Studio. I don't make money off of music, and with work and family I rarely have the time to spend on it, so Live's high upgrade costs seemed wasted on me. FL Studio is a single lifetime licence. Mind you, the UX is not nearly as nice as Live's, and it's much more geared toward in-studio producing than live use, so it obviously won't work for everyone.


Wow, that's a lot for Suite, especially if you shelled out full price. I hope there are discounts once the hype dies down.


I loved Live when I had a chance to use it, but to quote the professor I had at the time - it's "so bloody expensive".

As an amateur, it's extremely hard to justify the doubling of the price of FL. (The $100 edition of Live is limited to 16 tracks so don't even bother.)


Standard Live 10 -> 11 is $159. It's nested behind a bunch of links.


The Spitfire Audio additions are a really compelling competitor to the Vienna Symphonic Library.


Not even close. VSL has many more instruments, orchestrations, articulations, and mics than the instruments bundled in Live.

Don't get me wrong, these are nice additions for Live users, but VSL is simply one of the top orchestral libraries.


True, but it feels like it provides something that scratches at VSL's surface.


wow, I'm so hyped for the improved tracking capabilities


MPE?


MPE stands for MIDI Polyphonic Expression and describes a standard for encoding multi-dimensional values into MIDI notes/events. Typically with a normal MIDI controller, you have values like key and velocity at the event level, and then channel-wide values like pitch bend.

With MPE, you can think of it as each note having its own channel, with added dimensions like pitch bend, vibrato, timbre and more all encoded into a note.

This allows you to interface with new kinds of controllers like the Roli Seaboard for example, which are much more expressive than your standard midi keyboard.

In terms of Live, MPE wasn't supported until now and connecting an MPE controller to the DAW was a massive headache and super hacky.
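The per-note-channel idea can be sketched in a few lines. This is a toy allocator with made-up names: real MPE, per the MIDI spec, also defines zones and a master channel, which this ignores. Giving each held note its own channel is what lets a pitch bend affect one note without touching the others.

```python
# Toy MPE-style channel allocator (illustration only, not a real API).
class MpeAllocator:
    def __init__(self):
        self.free = list(range(1, 16))  # member channels; 0 is master
        self.held = {}                  # note -> its channel

    def note_on(self, note, velocity):
        ch = self.free.pop(0)
        self.held[note] = ch
        return ("note_on", ch, note, velocity)

    def bend(self, note, amount):
        # Only this note's channel bends; other held notes are untouched.
        return ("pitch_bend", self.held[note], amount)

    def note_off(self, note):
        ch = self.held.pop(note)
        self.free.append(ch)
        return ("note_off", ch, note)

mpe = MpeAllocator()
print(mpe.note_on(60, 100))  # first note lands on channel 1
print(mpe.note_on(64, 100))  # second note on channel 2
print(mpe.bend(60, 8192))    # bends only the first note
```

A non-MPE host collapses everything onto one channel, so that last bend would drag every sounding note with it; that is the headache the parent comment alludes to.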

