Hacker News

Because Google is a direct competitor to them.



Apple traditionally ignored non-MPEG codecs [1]; VPx is ignored for the same reason, not because it was developed by Google.

[1] e.g. Vorbis, Theora, FLAC, Matroska containers, etc. Opus is supported in Safari for WebRTC because they have to, but it is completely unavailable in other contexts.


Actually, VPx technically came to Google through an acquisition [0]. Not sure it matters, as the codecs have evolved since, but it is somewhat notable nonetheless.

[0] https://en.m.wikipedia.org/wiki/On2_Technologies


They didn't ignore FLAC, they made an Apple-only knock-off of it.


I speculate they needed a DRM-capable container in case the rightsholders demanded DRM when iTunes started selling lossless audio files.


ALAC did show up at the same time as the first AirPort Express in 2004, and it is the format used to transport audio over AirPlay (first called AirTunes). That may have been the original impetus.


I was under the impression it was an AAC project that didn't get adoption outside the Apple ecosystem, not that it was Apple-created. Do you know whether Apple was the submitter to ISO?


It sounds like when you say "an AAC project", you're thinking of MPEG-4 ALS [1], which was the codec chosen by MPEG for lossless audio in the mid-2000s and last updated in 2009. Despite ALS being blessed by MPEG, it never achieved much acceptance in the market. Its most direct ancestor was LPAC [2], developed at the Technical University of Berlin; but ALS included improvements from NTT Communication Science Laboratories and RealNetworks [3].

Apple developed Apple Lossless ("ALAC"), a separate, unrelated [4] format that is fairly similar to FLAC but differs in some minor ways (paraphrasing the FLAC developer's own words [4]).

[1] https://en.wikipedia.org/wiki/Audio_Lossless_Coding [2] https://en.wikipedia.org/wiki/Lossless_predictive_audio_comp... [3] http://elvera.nue.tu-berlin.de/files/0737Liebchen2005.pdf [4] https://hydrogenaud.io/index.php/topic,32111.msg279843.html#...


Actually, FLAC and Opus support were added to the Core Audio / AudioToolbox / AVFoundation APIs in iOS 11. Though the Opus decoder only supports raw packet decoding, i.e. you still need something to parse the container (usually Ogg).

Also, FLAC file decoding is horribly slow compared to the FOSS reference decoder, because for some reason the whole file is scanned through on the first read when using the AudioFile APIs (AudioFileReadPacketData). (I'm going to make a demonstration project for this and send them a bug report tomorrow.)


FYI, here's the demonstration project I talked about: https://github.com/tzahola/AudioFileFLACBug


So bizarre that they wrote their own decoder, especially given that libFLAC is BSD-licensed.


It actually makes sense, because libFLAC has no public API for getting the compressed audio packets as-is. Rice/Huffman/whatever decompression always happens inside libFLAC.
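For context, Rice coding is the entropy-coding step being referred to: each residual is split into a quotient, stored in unary, and a fixed-width k-bit remainder. Here's a minimal round-trip sketch (hypothetical helper names; FLAC's actual bitstream uses partitioned Rice codes with escape values and zigzag-mapped signed residuals, so this is only illustrative):

```python
def rice_encode(n, k):
    """Encode a non-negative integer: unary quotient, then k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = [1] * q + [0]                                 # unary quotient, 0-terminated
    bits += [(r >> (k - 1 - i)) & 1 for i in range(k)]   # remainder, MSB first
    return bits

def rice_decode(bits, k):
    """Decode one value from a bit list; returns (value, bits_consumed)."""
    i = 0
    while bits[i] == 1:       # count the unary ones
        i += 1
    q = i
    i += 1                    # skip the 0 terminator
    r = 0
    for _ in range(k):        # read the k-bit remainder
        r = (r << 1) | bits[i]
        i += 1
    return (q << k) | r, i

# e.g. 9 with k=2: quotient 2, remainder 1 -> bits [1, 1, 0, 0, 1]
assert rice_decode(rice_encode(9, 2), 2)[0] == 9
```

Small residuals cost only a few bits while large ones stay representable, which is why the parameter k is tuned per partition in FLAC. The point above is that libFLAC performs this step internally and never exposes the still-compressed packets through its public API.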

With Apple's AudioFile API you can decide whether you want the decompressed PCM samples or the underlying compressed packets. You can get better energy efficiency if you offload the decompression step to the dedicated coprocessor. The difference is smaller than with hardware vs. software H.264 decoding, but there is a difference.

(For lossy codecs this also makes sense when you want to forward the compressed packets to a wireless speaker in order to avoid lossy recompression on Bluetooth transmission.)
