Android Audio Latency (androidpolice.com)
102 points by kawera on Dec 4, 2015 | 83 comments



It is a really interesting aspect of the whole Android versus iOS "war" that Google did not plan up front to make the OS actually useful for realtime audio synthesis and processing; instead, Android has had more of a 'data-center'-like design, wherein the kernel and its onboard assets are more of a 'managed host' than a 'realtime embedded' sort of system.

That the Android designers are having to play catch-up with this says a lot, I think, about how significant design decisions were influenced by the territory from which they originated - the web, managed systems - and the realm they hoped to capture - the embedded system in your pocket. Android was not designed up front to be an amazing media system; iOS was. Thus, iOS has more synth plugins and a vibrant, thriving audio-application ecosystem (seriously, it's a million-dollar market), whereas Android has a lot of new players but, so far, not such a great playground for "Musicians".

In fact, while iOS has made real inroads into the digital musical-instrument industry, Android has yet to make a dent.

So, as a developer, it is an interesting time to be getting new solutions to what have been relatively frustrating and troublesome aspects of Android realtime audio.


Yeah, right out of the gate iOS was the clear leader, but I think that's the legacy of Core Audio from their main OS?

To me it's not surprising that iOS was the go-to for audio, based on the historical precedent that both MS and, from what I understand, Linux platforms didn't really have strong native low-latency audio processing. I mean, yes, it was totally doable (MS via ASIO4ALL, and I know there were a few Linux-based music systems that even ran on Atom netbooks), but plug-and-play wasn't quite the same. Even audio cards had some driver issues in the MS realm, speaking from personal experience.

It's rather sad, because the MS environment allowed for a lot of homebrew plugins - the VSTs. Those won't run in Apple's ecosystem. Other plugins do (e.g. UAD), but for the longest time those required (still do?) their own hardware processing cards. $$$$$$$.

iOS bridged the gap in a lot of ways between the 'average' and the 'pro' musician being able to access nice digital sounds on reasonably priced hardware. I'm a genuine PC guy (Ableton Live, Reaper) but get a lot of mileage out of a second-gen iPad 2 - specifically Propellerhead's Figure software, which is flat-out fantastic. I've made an entire EP in GarageBand while traveling.

Sure, some devices won't work with iOS devices (my LPK25 will run on the iPad 2's port via the camera connector USB dongle, but the MPD8 won't), and yet some are outstanding (look at the Line 6 / IK Multimedia products for guitarists and amp simulation). There's so much of a head start with iOS that honestly I think there's little to no point in Android or MS trying to play catch-up in the mobile sphere. MS has a winner with the Surface line because it runs real software, but in the 'pure' mobile space, iOS has all the goods, in my opinion.


Given the eye-watering disparity in gross revenue generated from iOS vs OS X, you really have to wonder which one is their "main OS" at this point...


Oh, no doubt; I probably should have termed it the 'legacy' OS considering the priorities these past years. From the 'prosumer' pivot of Final Cut Pro that angered a vocal group who had spent tens of thousands on workstation setups, to the 'single port' design of the new laptop, I'm pretty sure a lot of the die-hard creative types aren't going to feel really comfortable that they're being considered for future releases. I honestly don't think Apple cares, because those users are a minority anyway.


Your MPD8 should work with an iPad/iPhone with the camera connector if you attach a powered USB hub to the camera connector first, and then attach the keyboard to the USB hub... I've done that to get my Novation Impulse to work with my iPhone.


Whoa, didn't think of that technique! Very cool for you to share and thank you very much.


I would sooner say this was just an aspect that Google inherited by going with Linux (see also https://xkcd.com/619/).

iOS inherited a strong foundation for this, as OS X has a history of strong support for this sort of content creation. Linux does not.


I don't personally think this can be blamed on Linux - audio works on Linux as well as it does on OS X, when the hardware is selected and configured as it is on OS X. Linux has alternative and competing - and thriving - audio subsystems, and this means that systems integration has to pay attention to what it's doing. This didn't happen in Android's case; where other Linux distros had rock-solid, professional-quality audio subsystems and software integration, Android had an NIH implementation that ended up having to re-invent the wheel, to boot.


> where other Linux distros had rock-solid and professional-quality audio subsystems and software integration

Such as? Remember we're talking Linux in the 2005-era (~2.6.10-ish), not Linux today. JACK was still fairly new at that time. ALSA was still mostly broken at the time. Most stuff still wanted OSS. Basic "does sound come out of the speakers?" was often broken, and god help you if you wanted two things to play audio at the same time.

And you still need to solve the driver problem, which JACK, etc... don't help with at all.


> And you still need to solve the driver problem...

Um. If we're talking about 2005, we're talking about Android.

If we're talking about Android, we're talking about brand-spanking new hardware on a brand-spanking new phone.

This means that the audio driver didn't exist, which means that it was being written from scratch. This makes the "driver problem" a non-issue.

> Remember we're talking Linux in the 2005-era (~2.6.10-ish), not Linux today. ... ALSA was still mostly broken at the time. Most stuff still wanted OSS.

I never tried to do any "pro audio", [0] but that's not how I remember it. ALSA worked just fine. Anything that wanted OSS worked well enough with ALSA's OSS-compat API. I can't remember if the software mixer was around or not at the time, but I remember that when it did arrive, it eliminated that problem with single-stream sound cards.

[0] But -as I remember it- most folks doing pro audio used specialized, standalone hardware for it back then.


4Front OSS was the professional low-latency audio choice.


Noted.

However, none of the folks I knew at the time who were doing professional recording were using PCs or Macs to do the recording and mixing; they were using standalone gear.

It's entirely possible that the folks I knew were not a representative sampling of all sound engineers.


This is basically correct. Android replacing the Linux graphics and UI stack turned out to be the right thing. Multimedia, not so much.


I think you've got it backwards. Audio latency on desktop Linux machines is actually lower than on OSX these days, and prior to Panther the audio latency on OSX was much higher.


You are reading a lot into it.

Yes, Android's designers had a very different mentality from the traditional RTOS-for-a-handset designers. But that does not mean they had a "data center" mindset. Indeed, one of the most common errors of people who do have a "data center" mindset is that they load up on libraries that do various useful things for their Android development and then blow out the method and heap limits.

Understanding Android requires a unique design and implementation mentality that is neither RTOS oriented, nor "small *nix in a phone" oriented, nor server Java oriented. It's a clever multi-processing managed language runtime with aggressive memory recovery that's capable of running many managed runtime processes, none of which can hog memory globally.

Without having been designed for real-time audio from the beginning, retrofitting that capability was hard to do. Apple made a lot of hay out of the lack of real-time audio processing, but only in one domain of apps. These are the kinds of trade-offs that get products out the door.


Did iOS evolve from the iPod OS? A lot? A little?


No - iOS is a stripped-down OS X; the iPod classic OS was not related to that. The Core Audio framework is what makes iOS low-latency. It was developed after the iPod.


I have no idea about the internals of the iPod. As I understand it, iOS benefited immensely from the work already done on OS X. I don't even know whether they planned for it or just benefited from a mature module already in place.


IIRC the very first iPods actually licensed much of the under-the-hood software.

I could be confusing them with Palm OS, which did the same for the kernel of their first Pilots...


iOS is just a rebranding of what used to be iPhone OS.


You also see that in the effort Apple makes to provide game development tools and libraries, whereas Google doesn't even provide proper editor support for shaders.


OK, what about the absurd latency for responding to hardware buttons on headphones and the like? My experience across three different Android phones has been that when I press the button on my headphones to pause whatever I'm listening to, it will respond about half the time, and when it does respond, the latency is randomly somewhere between one second and ten minutes. This is consistent across several sets of headphones, both wired and Bluetooth, and it's even true of the pause button that appears on the lock screen, which I'm assuming uses the same mechanism to signal the app.


I feel like nobody cares about responsiveness anymore. (If they ever did.) My TV takes 20 seconds from pushing the power button before it's ready to show a picture. My Bluetooth headset takes a couple of seconds after flipping the power switch before it actually powers on. My iPhone (and I'd really expect better from Apple!) often takes 5-10 seconds to respond to a press of the Home button.

All over the place, apps and devices get faster and faster, and take longer and longer to actually do what you tell them to do.

</rant>


I largely agree with you. It's funny that "low feature" devices have such an instant response to input while modern devices running on processors that are thousands of times faster are slow and laggy as hell. The only major speedup I've noticed has come from advances in hardware tech - SSDs, faster RAM, GPUs, etc.

I feel like responsiveness, low latency, and high performance, especially when it comes to UI stuff, come down to taste. Frankly, most people are unlikely to be bothered by a laggy UI. This applies to programmers as well. IMO, the only people I would rely upon to write performant (consumer-oriented) code exist primarily in the games industry. I feel like being forced to "do more" on a hardware platform that is going to be static over the course of the next five years self-selects people who are good at saving cycles and extracting performance.

You can't expect a hardware company to write good software - all smart TVs and such are going to suck big time. The IoT is going to be a goldmine of outdated libraries, unpatched vulnerabilities, and other goodies. But you should expect more from professional software developers. Unfortunately, I can't think of a single Google product I've used that was "highly responsive". Certainly the ones that I use regularly - Chrome, Gmail, Maps, Search - have consistently gone downhill in terms of memory bloat, CPU consumption, and responsiveness. I haven't used Android recently so I can't comment, but the last time I had an Android phone it got replaced with an iPhone 4S within a week.


For real. It's more like a minute on my TV. And the response to the remote buttons is more sluggish than any computer system I've ever used.

Software developers are to blame. An accumulation of people calling code they don't understand, ignoring constant factors, depending on Moore's law, sacrificing performance to save a little bit of effort.


There is more to it though, in that simplistic programming leads to doing everything at once instead of only what needs to be done to get a response and relay information back.


I'd have to disagree on that. I have a Note 4, rooted but stock other than that, and I just tested from unlock to opening Google Music to hitting play, and it took mere seconds the whole way through. I doubt this has anything to do with me rooting my phone, but I would also add that I have a large number of applications installed on my phone, as well as several background processes that I run continuously. Not only is my phone fast and responsive, but I run irregular services like Complete Linux Installer (running Ubuntu for ARM), Pushbullet, Tasker, Unified Remote, and Servers Ultimate Pro. And when I hit play, it plays immediately. I've had lag before, but that was due to latency on the network and not the application itself. Just an alternate point of view :)


> I feel like nobody cares about responsiveness anymore. (If they ever did.)

This makes me wish BeOS had survived.


Actually, a modern Linux install (if lean) is incredibly responsive. That was one of my major motivations to switch.

My 2012 MBA boots in 2 seconds from power on to login. And it's a pretty humble device that is almost 3.5 years old.


TVs are digital now and come with all sorts of other features that require processing. This used to be handled by "secretly" leaving them on all the time. That wasted a decent amount of power, and as brands seek better power ratings they actually turn off now. It's the price you pay for HD. I also suspect that Bluetooth headsets in general have always had a similar startup latency.


I don't buy it. Nothing about "processing" or HD means it has to take twenty seconds to boot. It's the price you pay for TV programmers thinking a full-sized Linux counts as an embedded OS.


Given what I understand the power of the machine inside these TVs to be, there's no reason a large-enough Linux shouldn't boot in a few seconds.

The manufacturer could get even smarter and use something like CRIU [0], along with opportunistic hybrid suspend [1] to dramatically speed up start times.

[0] http://criu.org/Main_Page

[1] Which would work like this: a little while after you power your TV off, it suspends to RAM and disk. If you power your TV on, the TV restores its state from RAM. If AC power goes out, then when AC power is restored and stable, the TV performs a background operation that uses the on-disk suspend image to get back to its hybrid-suspend state.


The problem is that these features don't work out of the box. Any feature is going to be judged by whether the increased time-to-market and R&D cost is justified by additional sales. It's worth noting here that nobody buys TVs based on how quickly they turn on.


> The problem is that these features don't work out of the box.

What?

No feature works out of the box. Engineering effort has to be expended to make something work when designing and building any new product. Did you maybe misphrase what you were trying to say?

> It's worth noting here that nobody buys TVs based on how quickly they turn on.

[citation needed]

A big part of my criteria for purchasing a display device is device and control responsiveness.

My non-technical family members who care about TV made "time between the time you turn the thing on and time it becomes usable" their #3 priority when purchasing their second "smart TV". [0]

It's pretty clear that many (most?) techies are really disappointed in how shitty, slow, and poorly designed most "smart TVs" are.

[0] #1 Picture quality. #2 Overall value per dollar.


Bullshit. My computer monitor takes a fraction of a second to power on. Why should a TV be any different?


If we trusted our devices not to spy on us, it would be a useful feature for a smart TV to start up but leave the screen off when it detects someone walking into the room, so that by the time they find the remote, sit down, and actually press the power button, it just has to turn the screen on and appears to start instantly.


On the other hand, personal computers are tremendously more responsive than they used to be. Windows 98 was not fun to use once your computer had been on for a while.


What could possibly be wrong with your iPhone ?!


Yep, this is beyond frustrating. Especially as on numerous occasions I've hit the earphone pause button, had it do nothing, pulled out my phone, hit pause, unplugged my earphones, and THEN the signal finally arrives, UNpausing my music for all to hear.


Same here. Whenever it acts like it's not responding to audio controls, I make sure to unplug headphones and turn the speaker volume down to nothing before I walk into work in the morning.

Similarly stupid response when unplugging headphones:

1) Oh, he unplugged the headphones!

2) Let's play that audio out of the speakers instead! At whatever volume the speakers happened to be at last time he used them!

3) And keep playing it for a half second or so!

4) I guess he might have wanted it to stop when the headphones came out? Fiiiiine.


I've never experienced any delay in pausing when unplugging the headphones. I actually use it sometimes as a quick way to pause music on my phone.


I run stock Android, and disconnecting headphones or Bluetooth promptly pauses Pandora, Google Music, and Pocket Casts.

Google Music does have the habit of starting music playback when connecting to Bluetooth, regardless of whether the app is running, it's been a week since last connecting, or the phone has been rebooted since. It just spams my car stereo with music AFAICT.


Mine's not quite stock Android, so whenever something doesn't work right I'm never sure whether it's Google's problem or Sony's.

Another audio bug: when Google Maps gives audio directions, it pauses my podcasts. Half the time it resumes playing afterward, half the time it doesn't. I can find no pattern to this.

PocketCasts and Google Maps are both pretty well polished programs, and I have no idea whether I should blame one of them or blame the OS. Ah well. At least I got an SD card slot instead of paying $200 for 48GB more storage.


I've been using Pocket Casts and Google Navigation together for a while. There is a setting in pocket casts to play over notifications and navigation prompts. I had to switch to this a few months ago because navigation prompts started getting muted for about 1 second, then they became audible half-way through and then the transition back to the podcast worked fine.


Just re-tested my headphones unplugging problem, it happens in both DoubleTwist and Pandora. Not every time, but most times.

Phone notices headphones are unplugged, phone switches to blasting over the speakers for a moment before pausing it. Really stupid problem.

I should probably wipe the thing and start fresh.


Many car stereos actually send the play signal when they connect to a device. My Alpine unit did this, as does my Pioneer AVH-4100NEX. Also guilty: 2014 Ford Focus ST and Ford Fiesta ST I tried, as well as a late model Mercedes ML500.


Same for me, except that sometimes that last step happens five or ten minutes later, in the middle of a meeting or something. The worst part is that a button labelled "pause" can trigger an unpause action. It should really remember what the button said at the time it was pressed. I would mind a lot less if I happened to be playing audio ten minutes later and it randomly paused due to an earlier press of the pause button.


Fuck, what is up with this?

I don't care about the audio latency this article is talking about, and I never used to have this problem either. But recently, maybe just since I flashed the latest CyanogenMod, my phone sometimes responds instantly but sometimes doesn't respond to the headphone button press until I press the power button or wait 30s+. It's incredibly annoying. I'm using CM on a OnePlus One with Xiaomi Piston 3 headphones.

Considering the power button works, I suspect this may have something to do with sleep states?


This, however, is not the audio latency the article is talking about. I've had the exact same issues, though.

I think the issue you're describing is something happening in AudioFlinger and the processing of input events.


Did your phones use a double press of the headphones button to skip tracks?

I know that on some devices I've used there is a delay in interpreting a single press because the device allows a longer window for a double press, although this is never much more than a second or two.
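
For what it's worth, here's a minimal sketch of how such a double-press window might work (illustrative C++, not actual Android code; the 500 ms window is an assumed value). It shows why a lone press can't be delivered until the window expires:

    #include <chrono>
    #include <cstdio>

    using Clock = std::chrono::steady_clock;
    // Assumed window; real devices vary.
    constexpr auto kDoublePressWindow = std::chrono::milliseconds(500);

    struct ButtonDecoder {
        Clock::time_point lastPress{};
        bool pending = false;

        void onPress() {
            auto now = Clock::now();
            if (pending && now - lastPress <= kDoublePressWindow) {
                pending = false;
                std::puts("double press -> next track");
            } else {
                // Can't act yet: this might be the first half of a double press.
                pending = true;
                lastPress = now;
            }
        }

        // Polled from a timer; a lone press is only delivered after the
        // window expires, so every single press eats the full window as latency.
        void onTick() {
            if (pending && Clock::now() - lastPress > kDoublePressWindow) {
                pending = false;
                std::puts("single press -> play/pause");
            }
        }
    };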


Maybe, but I doubt it's supposed to wait ten minutes for the second button press.


You might be experiencing a problem with incompatible headphones. There are several incompatible headphone types (e.g. OMTP vs. CTIA wiring) that use different control signaling techniques. Here's the first Google result that I turned up on this topic: http://forums.windowscentral.com/windows-phone-8-how-guides/...


I very much doubt that every headset I tried, including several marked on the package as Android compatible, was incompatible with every phone I've had. And it wouldn't explain the delays for Bluetooth headsets or the lockscreen button.


Wow, I have been working on a music streaming app for Android for the past couple of years, and I have never heard of this kind of issue, nor have I encountered it myself :/. Judging by the responses, you don't even seem to be an isolated case.

I will try to reproduce the issue... I am pessimistic, though, since I use Bluetooth headphones daily without any issue.


I have never experienced latency here worth noting, much less being annoyed by. I'm not doubting it exists, but in my experience my headphone button responds quickly (<1 second).

Curious what phone and headphone type you guys use?

I use a Nexus 6 with single-button headphones.


Every press of every button on every headphone that I've ever used, wired or Bluetooth, on every Android phone I've ever used. I have literally not once experienced a satisfactory result from pressing a headphone button to (vainly attempt to) control an Android phone in my entire history with Android. (Let's generously define "satisfactory" as "any response at all within one second".) As I said, it also happens for the pause button on the lock screen of the phone itself. I've owned 4 Android phones: Droid Incredible 1 & 2, Galaxy S3 Mini, and Galaxy S3. On the Droids, it happened with both the stock HTC ROM and on CM7 (S3 doesn't allow non-Samsung ROMs). I have a Galaxy Tab S tablet as well running CM11, but I don't use that for listening to music so I haven't tested it.


Given the title, I would have liked to see some in-depth analysis of why the latency is as bad as it is. Which parts of the software stack are causing it?

Even though the Linux kernel is less than ideal for audio, it can still provide a latency low enough for music, i.e. 10 ms or less.

This is an unbearable situation. There are lots of interesting music apps on the iOS that I would like to use, but only a handful are available on Android. And even if they are available, they are not necessarily usable.


> Which parts of the software stack are causing it?

There is a diagram lower down in the article with the components (ALSA, AudioFlinger, etc.).

http://www.androidpolice.com/wp-content/uploads/2015/11/nexu...


The infographic looked so much like an ad that I skipped over it.

So this is one of the "good" Android audio stacks, getting 36ms latency. But half of that is "wasted" in user space, in AudioFlinger as well as the app ring buffer (which seems pretty big to me). I'd like to see the comparison to a "bad" Android device.

Doing some internet searches, I found someone who had plugged in PulseAudio to an Android phone and got 20 ms latency instead of the 170+ ms latency from AudioFlinger on the same device [0]. That's pretty huge.

Ideally, there should be no user-space middleware component processing audio in between; this could halve the latency (according to the diagram), bringing it down to acceptable levels. But ALSA isn't really intended for conveniently routing audio on the fly (e.g. when plugging/unplugging headphones), so there is some room for improvement there too.

Overall, the situation is pretty sad. Linux can do better than that but a modern multimedia device (desktop or mobile) has several audio outputs and hot-plugging, which makes the situation rather difficult using ALSA only. And the userspace audio components just aren't great.

[0] http://arunraghavan.net/2012/01/pulseaudio-vs-audioflinger-f...


Actually, standard Android doesn't mandate ALSA. There is a thin sound HAL above ALSA, OSS, and other Linux sound technologies.

Of course, many phone vendors use ALSA to implement the Android HAL layer. Samsung seems to use its own sound drivers, and that may be why they have small audio latency compared to other Android vendors.
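
To illustrate the boundary (a paraphrase, not the exact hardware/audio.h header): the HAL is essentially a table of function pointers that AudioFlinger calls without knowing what sits underneath, so a vendor is free to back it with ALSA, OSS, or a custom driver:

    // Illustrative sketch of the HAL boundary, loosely modeled on
    // hardware/libhardware's audio.h; names and signatures simplified.
    struct stream_out;  // vendor-defined output stream state

    struct audio_hw_device {
        // The vendor decides whether this maps to an ALSA PCM device,
        // OSS, or a proprietary kernel interface.
        int (*open_output_stream)(struct audio_hw_device* dev,
                                  struct stream_out** out);
        void (*close_output_stream)(struct audio_hw_device* dev,
                                    struct stream_out* out);
        int (*set_master_volume)(struct audio_hw_device* dev, float volume);
    };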


Cheesy video from Google on it: https://www.youtube.com/watch?v=PnDK17zP9BI

tl;dr: it's easier to increase buffer sizes to fix underruns than to fix driver foo randomly disabling interrupts for too long.

That, along with not having fast-paths to bypass large parts of the stack (mixer, resampler, audio effects, etc...), and bam, high latency.
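
The arithmetic behind the buffer-size "fix" is simple: each buffer of N frames at sample rate R adds N/R seconds, and more than one buffer is usually in flight through the stack. A quick illustrative calculation (numbers assumed, e.g. double buffering at 48 kHz):

    #include <cstdio>

    int main() {
        const double rate = 48000.0;    // Hz, assumed output rate
        const int buffersInFlight = 2;  // e.g. double buffering
        for (int frames : {64, 128, 256, 512, 1024}) {
            double ms = buffersInFlight * frames / rate * 1000.0;
            std::printf("%4d frames -> %5.2f ms\n", frames, ms);
        }
        // 64-frame buffers cost ~2.7 ms; papering over a misbehaving
        // driver with 1024-frame buffers costs ~42.7 ms on output alone.
        return 0;
    }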


The overall latency (not audio, but the responsiveness, in general) of my Nexus 7 2013 got much worse in Marshmallow. It now takes a second or two after clicking anything to get a response. I'm not sure what happened, but battery life also dropped immediately after the OTA update to 6.0. I don't have a lot of confidence that audio latency has improved, given the overall increased sluggishness of the device. I have a few music apps (Caustic, several DJ things, a couple of classic style trackers), but haven't tried them lately...partly because latency has always been such an issue that it wasn't worth trying to do stuff on it.

Nonetheless, I avoid Apple products on principle, and also enjoy playing with audio stuff (so much so that I started an audiovisual company a few years ago, which I only recently extricated myself from, leaving it in the hands of one of the co-founders), so I'm pleased to see the issue getting attention. I'd love to be able to make real music on my tablet. I have a Gameboy that I use for "on the train" music tinkering, because it has an immediacy that is lost in most modern devices (not merely latency... just the general composition process on something so simple it's impossible to get bogged down in bullshit like selecting the perfect snare sample or something). It'd be cool to be able to use the Nexus 7 for that purpose.


Uh, while I think it's perfectly reasonable to like using a Gameboy for music tinkering, that whole "immediacy" thing you cite about composition process is essentially what got solved with Propellerhead's Figure. I've had it since the first version and it's grown into my personal, portable drum/bass/synth sketchpad and recommend it to anybody serious about composing on the go. Just because you don't know what's out there doesn't mean it's not out there, for what it's worth.


Figure is an iPhone/iPad app. There is no Android version.


I know, which means that until Android gets its garbage latency issues sorted out, having principles a la "No iDevices" simply cuts off the nose to spite the face.


That's your position, which I have not made any assertion about. My position is that I don't buy Apple products. That is not to say I believe Google is vastly better, from an ethical perspective, but, Android is more open. Until I can buy a reasonable Firefox phone in the US, Android will have to do.


If I understand correctly Apple products can be "jailbroken" if you don't want to buy a new product and primarily want to use it for actual artistic endeavors. I was simply contradicting your assertion that music software isn't up to par with quick composition, because it is, and to assert otherwise is simply ignorant. In your case, willful, so that's just your own problem, as I noted.


Which part of my comment led you to the conclusion that I was ignorant? I specifically mentioned that I'd ruled out Apple devices on other grounds, which, I would think, would be sufficient evidence that I'm aware of the capabilities of those devices. I also mentioned that I founded an audiovisual company; one might conclude from that statement that I've recently worked in the field professionally. One might draw some conclusions from that, as well. I'm just not sure how one would draw the particular conclusions you've drawn, based on the comment that I made.


Direct link for the superpowered.com latency testing app: http://superpowered.com/latency


IMO the biggest reason to be optimistic that this will get fixed is VR. GearVR had to rely on its own audio stack, but if VR takes off in a big way then I'm sure Google will want to support it in stock Android, and that'll only work if they re-architect the Android audio stack to support low-latency audio. This could end up fixing the issue for desktop Linux too.


If you want an in-depth explanation (AP mostly skims the surface), https://www.youtube.com/watch?v=d3kfEeMZ65c is a must-watch. Obviously it does not cover the work accomplished these past two years, but it gives a good idea of the challenges of audio latency on Android.


You can watch this I/O session from a few years back:

https://youtu.be/d3kfEeMZ65c

It probably provides the best insight into the problems Google engineers were, and probably still are, facing with regard to high-performance audio on Android. Even when the latency is brought down, there are still other audio-related issues and glitches (like buffer underruns).

Even now that Android is measurably better than it was previously, things like buffer underruns can still affect overall usability. Increasing the buffer (to fix the underruns) renders the device less suitable for things such as using it with an external MIDI controller. You can see from Superpowered's audio latency measurements:

http://superpowered.com/latency#cta

that the buffer size for iOS is half that of the best Android device (the dual-core Nexus 9): 64 frames instead of 128.



> (from the article) [...] It did unquestionably require more time to code in lots of low-level functionalities that were simply missing from Android's SDK — and even after all that work, the end result was still far from ideal.

That being said, it's not clear whether the improved latency on newer platforms can be achieved without using the Android NDK. I haven't used it, but I know Android 4.0+ supports OpenSL through the NDK, and I'm guessing that if you want low latency you need OpenSL. Skimming through the docs, it looks like that's the case, isn't it? http://source.android.com/devices/audio/latency_app.html
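
For reference, bringing up OpenSL ES from the NDK looks roughly like this (a minimal sketch based on my reading of the docs; error handling omitted, link with -lOpenSLES):

    #include <SLES/OpenSLES.h>

    int main() {
        SLObjectItf engineObj;
        SLEngineItf engine;

        // Create the engine object, realize it synchronously, then
        // fetch the engine interface.
        slCreateEngine(&engineObj, 0, nullptr, 0, nullptr, nullptr);
        (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
        (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);

        // From here you'd create an output mix and a buffer-queue player;
        // to get the low-latency fast track, the player's sample rate and
        // buffer size should match the device's native values (queryable
        // from Java via AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE and
        // PROPERTY_OUTPUT_FRAMES_PER_BUFFER on API 17+).

        (*engineObj)->Destroy(engineObj);
        return 0;
    }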

So, maybe it does solve the latency issue, and I would gladly use NDK because I like C++ stuff, but I think most developers wouldn't want to do it.


I have a virtual hearing-aid app for iPhone (Enhanced Ears) that simply is not possible right now with Android. I've had potential customers clamoring for an Android port, but I don't want to develop an app that has crippled performance. The audio coming into the user's ears needs to match moving lips.


A few years ago I got into a project that involved pretty tight control of audio latency, in a system with a fair amount of legacy.

"How hard can it be?"

At the end of a year, imagine someone who has spent a year in the Alaska wilderness, strangled three grizzly bears with his bear hands, lived on berries and nuts and fish caught in frigid glacial streams, and generally been up and down the audio stack in the OS and a couple pieces of firmware, involving audio APIs, a USB software stack that we re-wrote, a close examination of USB hardware and how chipsets are fucked up, and how exactly I2C microphones really work. My wife described me as "broken". It took a while to recover.

How hard can it be? Isochronous, low-latency audio can be pretty hard.


Hi all -- Patrick from Superpowered.com. We're working on an update to our Android 10 ms Problem article with new data. Quick question: what's not clear about Android latency? What questions would you like answered that you haven't seen answered online?


Given that Android TV devices are (IMO) more important in this respect than a phone or tablet, I would have liked to see what the latency is on things like the Nexus Player, Shield TV, Sony TVs (baked in), Apple TV, and maybe Roku. I assume they would be similar, but perhaps they made some special modifications that will later be incorporated into AOSP?


I don't understand why you think round-trip latency matters at all on TVs and media-consumption devices.


I misunderstood that the issue was input-to-output latency, and was instead thinking of audio/visual sync issues.


Wrong. Latency matters when you're playing with or interacting with the device. Watching TV is a passive activity with no user input; latency does not matter in that case.


Ahh, I see. They are speaking of a specific type of latency.


I have been writing a game in my spare time that has an audio sequencer for interactive music.

It currently runs on iOS and OSX.

Initially I thought about doing an Android version as the code is all in C++ and it would be easy to port, but if the latency is as bad as described here I might not bother.


> The fact of the matter is that sound (along with everything else we know of in the Universe) travels at a finite speed. In regular conditions, a sound wave propagates at around 340 meters (or 1100 feet) through the air in a single second. This means that even for plain old physical instruments, there exists an audio delay between the moment the instrument is played and the instant the sound reaches the musician's ear.
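
The quoted physics does check out, for what it's worth; a quick illustrative sketch of the arithmetic:

    #include <cstdio>

    int main() {
        const double speedOfSound = 340.0;  // m/s in air
        for (double metres : {0.5, 1.0, 2.0, 5.0}) {
            std::printf("%.1f m -> %.1f ms\n",
                        metres, metres / speedOfSound * 1000.0);
        }
        // ~2.9 ms per metre: a player a couple of metres from an amp
        // already lives with more acoustic delay than a good software
        // synth's output buffer adds.
        return 0;
    }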

You heard it here first folks, in a few years Apple is going to come out with the revolutionary "Cochlea sound", which has such a small delay your ears are unable to perceive that there is any delay at all (while listening from 10-12 inches from your ears of course)!

It will be magical and delight us, and something we never realised we needed until we were told we did. It will become the new standard and a household name, and we'll all have to catch ourselves when we say it and try to use some obnoxious generic term instead (LoALPS perhaps: Low Audio Latency Per Sound).



