
> I find that aggregation ends up around 30ms, which is on the higher end of acceptable when monitoring in-ear

I haven't done that kind of work, so for video/speech, fair enough. 30ms (on top of whatever latency is already there) isn't really acceptable for a musical performance, though - the performer will spend half their brainpower trying to mentally align what they hear through bone conduction with the in-ear monitoring, adjusting their playing based on outdated/scrambled information. It's a really confusing experience - if you've ever seen that Japanese "speech jammer" device[0] you'll get some idea of what I mean (it uses a much longer delay than ~30ms, but even ~30ms is enough to mess with you).
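
For a sense of scale, here's a quick back-of-the-envelope sketch (my own numbers - the 48kHz sample rate and 256-frame buffer are assumptions, not anything from upthread):

    # Illustrative latency arithmetic only
    SPEED_OF_SOUND_M_S = 343   # at roughly room temperature
    SAMPLE_RATE_HZ = 48_000    # assumed, not stated upthread

    def delay_to_distance_m(delay_ms: float) -> float:
        # Acoustic distance that produces the same arrival delay
        return SPEED_OF_SOUND_M_S * delay_ms / 1000

    def buffer_latency_ms(frames: int, rate: int = SAMPLE_RATE_HZ) -> float:
        # One-way latency added by a single audio buffer of `frames` samples
        return 1000 * frames / rate

    print(f"{delay_to_distance_m(30):.1f} m")  # ~10.3 m equivalent distance
    print(f"{buffer_latency_ms(256):.1f} ms")  # ~5.3 ms per 256-frame buffer

In other words, 30ms of monitoring delay is like hearing yourself from ~10 metres away - right around the threshold where the brain stops fusing the delayed signal with the bone-conducted one and starts perceiving a distinct echo.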

In a pinch it would make a useful and more stable tool than ASIO, though I run either analog or dedicated devices for any live work - I just don't trust computers that much - but I get that working with video means you might not have those kinds of options. To be honest I can't say I've had many of the troubles I've heard about with ASIO, but I undeniably do enjoy working on OSX more anyway. This conversation reminds me how badly I need to revive my ML Hackintosh...

[0] https://www.youtube.com/watch?v=USDI3wnTZZg