PipeWire is how audio should be on Linux, and it's ready to use. No more complications between PulseAudio, ALSA, and JACK. PipeWire implements all 3 of these interfaces, which means you can use applications that depend on any or all of these interfaces simultaneously with no conflicts. Playing a video in Firefox and a track in a digital audio workstation at the same time works with no special configuration. PipeWire makes audio on Linux as easy as it is on other OSes.
It is a shame it took so long, but it is finally here. The feeling may be similar to when pulseaudio arrived, but it is not the same thing; Pipewire implements everything that is needed, so users will no longer need to care about the handoff between alsa, pulseaudio and jack. It manages video streams too, and the latest version of OBS already makes use of it to allow screen sharing on wayland even when running from flatpak. Finally!
I've been watching linux since the OSS days. I used oss, alsa, arts, esd, jack, pulseaudio... it has never looked as well aimed as it does this time. Finally!
I haven't read the article yet, but how big a deal is this? Is this something that distros are likely to adopt on top of or in place of PulseAudio? Does this make any existing part of the Linux audio stack/ecosystem redundant, or is it strictly solving a problem (or problems) that had yet to be addressed?
Yes, it replaces PulseAudio and Jack. Its goal is to provide a single audio daemon that can serve both consumer applications (where power consumption is typically most important) and pro applications (where latency is most important). Currently those are served by PulseAudio and Jack, so it's very exciting to have a single service that can cover both use cases.
It also replaces part of ALSA, which is two things: the kernel interface to talk to audio devices, and a set of user libraries on top of that interface. PipeWire still uses the former, but has its own, better API for the latter (in addition to supporting ALSA, PulseAudio, and Jack APIs).
Does it fix the Bluetooth audio mess? Ubuntu 21.04 still can't do videoconferencing properly via Bluetooth as only ancient 'string on a tin can' codecs are supported.
And getting it to just stick to unidirectional audio (relying on the internal mic for input, which seems like the only practical option) took some serious trial and error. It still occasionally sends a stream to my headphones when nothing is playing, which prevents other devices from using them (my headphones let you connect two devices but only play audio from one of them).
If it does address these issues, I hope Ubuntu will soon officially support it (I've learned better than to use non-defaults, it just invites pain).
> Does it fix the Bluetooth audio mess? Ubuntu 21.04 still can't do videoconferencing properly via Bluetooth as only ancient 'string on a tin can' codecs are supported.
> I've learned better than to use non-defaults, it just invites pain
Funny, after using Ubuntu for many years I switched to Arch, just because I got hit by so many upgrade issues when "things should just work". With Arch I can only blame myself, which is refreshing.
Agreed. The latest Debian upgrade installed pipewire for me, and I didn't even notice, everything just worked fine! And that after years and years of fiddling with alsa and pulseaudio configs, uninstalling pulseaudio, hacks upon hacks to get stuff working with either, etc.
Should have mentioned, I'm running testing/unstable. After the stable release, all the new experimental stuff for the next release floods into unstable.
It actually does more than fixing the mess now. With pipewire >=0.3.34, there is now support for a2dp duplex with both faststream and aptx-ll codecs (if the headset also supports them, of course). No more hsp/hfp crap.
I think linux + bluez + pipewire is the first ever software-only stack that can achieve this, as a USB dongle would be required on other platforms. We shall declare 2021 as the year of linux on the desktop!
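For anyone wanting to verify what their setup negotiated: with pipewire-pulse the codec in use is visible through pactl, and the media-session config exposes a codec list. A rough sketch, assuming the 0.3.3x media-session layout (the key names have moved around between versions, so check your local bluez-monitor.conf rather than taking these literally):
pactl list cards | grep -iE 'bluez|codec'   # filters the card listing down to the Bluetooth/codec-related lines
# in ~/.config/pipewire/media-session.d/bluez-monitor.conf, under the properties block:
#   bluez5.codecs = [ sbc sbc_xq aac ]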
Seems that Pipewire and PulseAudio have both just recently fixed this. So yes, but it may not technically matter anymore. It will land in a future Ubuntu, though: it's not in the current shipping releases, and I'm not sure it will make it into 21.10.
I used to criticize PulseAudio a lot because for a time it was growing too much, acquiring way too many responsibilities, which would eventually make it one of those programs that is "way too hard to replace" in the Linux Desktop world, so we would get stuck with it for a time.
I am glad to have been proven wrong. I was extremely surprised at the progress PipeWire made in a couple short years. I have been barely able to find any issues even after a couple of months of using it. In comparison, I filed a dozen PulseAudio bugs (some of them with patches, never applied) before I just gave up...
It's not quite as easy as on macOS, but then again, macOS doesn't offer interapplication audio routing (the "JACK part of Pipewire"), so the comparison is not entirely fair. Pretty close though.
I was describing what macOS does not allow, and that's routing audio between applications. When I say that, I don't mean that it is impossible - there are numerous tools (including JACK) that will allow it - but it doesn't come with just CoreAudio itself.
Thanks for writing Jack! I love the fact that I can play something in vmpk, render using qsynth, process in rackarrack and record directly in audacity. The routing feature alone sets Linux apart from the competition and I'm happy it influenced pipewire to implement it.
Linux is already deployed in some commercial DAWs and it will soon be an RTOS. Hope music production will have more and better options because of this.
There have been "RTOS" versions of Linux for more than 20 years. The current mainstream kernel already includes almost all of the RT_PREEMPT patch set that makes it more or less an RTOS if you want it to be.
Echoing the parallel comment, if you're willing to get your hands into the guts, you can get amazing "RTOS-like" audio performance out of linux. I love pushing the limit of how low latency can get. Start with the PREEMPT_RT patches, be sure your irqs for audio are prioritized right, then experiment with pinning individual processes to specific cores, as well as locking their address space in memory.
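As an illustration of the kind of tuning described above (every name, IRQ number and priority here is a made-up example, not a recipe):
sudo chrt -f -p 90 "$(pgrep -f 'irq/29-snd')"   # give the (threaded) soundcard IRQ handler a high SCHED_FIFO priority; 'irq/29-snd' is a hypothetical thread name
taskset -c 3 chrt -f 70 my_daw                  # pin a placeholder audio application to core 3 and run it with SCHED_FIFO priority 70
# locking the address space in RAM is done from inside the program, e.g. via mlockall(MCL_CURRENT | MCL_FUTURE)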
Thanks for writing JACK! I'm curious what you think of PipeWire, feel free to be opinionated.
I'm also a low level audio/midi dev (tho far less prolific, and less public) and having seen how you architect things, and your code, I'd value your opinion.
Rewire is fairly different. It still exists, and also existed for macOS too. But to use it, applications have to be explicitly coded to use it (and linked against the rewire SDK).
That's quite different from JACK, SoundFlower, Hijack and the innumerable other inter-application audio routing systems for macOS, which can be used without the applications knowing anything about them.
The functionally named but utterly undescriptive GraphEdit, which was at some point part of the DirectX SDK, allowed you to build arbitrary pipelines (that normally you'd have to configure via code in your app) for input/output of multimedia data. I haven't done DirectX programming in over a decade though, so I have no idea if it's comparable to ReWire, but GraphEdit was an unsung hero back when I was doing that kind of work.
The DirectX SDK in this respect is like GStreamer on Linux: it allows an application to build arbitrarily complex pipelines for data processing. Emphasis on within the application.
However, it did not and does not provide for inter-application audio/MIDI routing, which is what tools like JACK, Rewire, SoundFlower, Audio Hijack and others are all about.
Different people may have different definitions of "ready to use". For me, pipewire works mostly great; it fixes more than a few issues I never could solve with pulseaudio. But pipewire-pulse has crashed more than a few times for me (though not since I last upgraded pipewire, so keeping fingers crossed that it's been fixed). I wouldn't recommend it (yet) to someone who isn't prepared to tinker.
PulseAudio already covers the ALSA interface, but I'm glad that PipeWire now unifies those two with JACK and audio workstation needs, as well as handling video.
I am not sure that ALSA playback should go through an audio daemon. Some audio applications (for example the audio workstations you mentioned) want exclusive access to an audio card with minimum latency, without any sample conversion or volume adjustment, and ALSA (as I understand it) was supposed to provide such access. An audio daemon prevents them from getting it, and passing data through a daemon adds latency.
Wouldn't it be better if daemons like PulseAudio or PipeWire gave control of an audio card to an application that wants exclusive access?
For example, Windows API can provide exclusive access to an audio card.
Now if only pipewire-pulse actually worked or PipeWire just let me use Pulse alongside it for the time being so I could actually get all those improvements...
I'll definitely retry every 6 months or so, but for now, I have to spend my time getting actual work done, not experimenting with yet another shiny and new, but unfinished and unstable toy (Wayland flashbacks intensify).
Imagine you're a streamer with OBS and you need to patch your mic, game sound, music stream, etc. and adjust all the levels. It's a PITA to do this kind of thing with pulseaudio - which seems to have been designed with the model of "one user, one sound device" in mind. So people use JACK for this kind of thing, which is similar to what you can get in Windows (with virtual audio cables, virtual patch bays, etc.) that let you route your sound however you need to. But it is also fiddly and painful to use, especially when trying to get it to interoperate with pulse. So Pipewire is meant to replace all this with a single solution that is easy to use and "just works".
I've been doing this to record video conferences, separating my mic, conference output, and rest of desktop. So I record my mic and the rest of the conference with OBS (as two separate tracks) and can still hear conference + other apps (but other apps aren't recorded).
I'm doing this with PulseAudio. It adds a little latency but it works. I initially used pagraphcontrol [1] and later made my own little Python script that creates the virtual devices ("null outputs") and virtual wires ("loopbacks") and wires conference applications automatically when they show up.
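For reference, the core of such a script boils down to a few pactl calls; a minimal sketch (the sink names and the hardware sink are made up for illustration):
pactl load-module module-null-sink sink_name=conference_out   # virtual output the conferencing app is pointed at
pactl load-module module-loopback source=conference_out.monitor sink=alsa_output.pci-0000_00_1f.3.analog-stereo   # wire its monitor to the real output so the call stays audible
# OBS then records the mic and conference_out.monitor as two separate tracks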
Can I do this with PipeWire? Are the tools there? This post claims that PipeWire is now ready, but it seems to have only reached version 0.3.35, which seems like a very arbitrary point to call a milestone.
Original motivations IIUC were that PA does not do synchronized video, has no security model, and does not meet Pro latency requirements.
Requirements were driven by complications found managing streams in automotive uses. E.g. routing back-up cam video, soft instrument cluster video, bluetooth phone audio, dash mic, car stereo player, sirius receiver, voice navigation, dash cam.
For the last ~12 years, I gave PulseAudio a try every time I upgraded the Fedora distro on my main computers. I almost never had the catastrophic bugs many people complained about. But pulseaudio has always taken 10-15% of one CPU when in use (be it on audio/video calls, or music/movie playback). This is across 3 different laptops and 2 desktops. It is not a huge deal, but it is a bit frustrating on laptops where battery life is precious. I understand that PulseAudio is very powerful, but it represented no practical gain for me personally, with my very simple use cases, so I simply disabled it.
Weirdly, I found very few people complaining about PulseAudio's CPU usage. Maybe nobody cares about 10-15% of one core.
However, I am happy to report that PipeWire seems to not have this problem at all. Since it became default on Fedora, it has barely shown on top's first page. Given that it is even more ambitious than PulseAudio in terms of latency, this is incredibly impressive!
Note that PipeWire was initiated by Wim Taymans, one of the driving forces behind GStreamer. I think it is one case of the right people in charge of the right projects :-).
Similar thing for me. Pulseaudio sometimes takes a bunch of CPU with audio playing.
Even stranger, sometimes it keeps taking 5-10% CPU with no audio playing. (I think it is related to running out of memory for a moment and using swap, but no matter how much memory I free up, the computer never returns to normal and must be restarted).
I'm getting to the point of installing debug symbols and pointing perf at it, but dropping pulseaudio would be an even better solution.
Parent comment likely refers to kernel CPU time accounted for in the generating process. If an application passes an audio sample to PulseAudio, it is charged for (at most) a memory copy and one or two syscalls, while PulseAudio is charged for the time spent in the kernel driver.
Pulseaudio burning CPU during silence is usually due to an application holding open a channel that is either really playing silence or is keeping the Pulseaudio machinery awake. You could blame the apps, but IMHO that is definitely a Pulseaudio problem.
Yes! I experienced the same. Just never publicly complained about it. It's the only constant running thing having high CPU. PipeWire, though still somewhat power hungry, is a lot better.
Pulseaudio eating too much CPU happens on an Ubuntu machine of mine. I simply type "pulseaudio -k" to restart it. But this kind of action should not be needed, I'm the user, I'm not supposed to fight the system, the OS should work for me not the other way around.
Are there any particular circumstance where it takes 10-15% of one CPU for you or is it whenever there is sound playing? I just checked mine while playing music on YouTube and it bounces between 4.0 and 4.8%
Whenever there is sound playing or recording. I think it was lower than 10-15% on my most recent desktop, so maybe 4-5% makes sense if your computer is powerful. (While I fully appreciate that it is a very minor annoyance, 5% still frustrates me because PulseAudio added no value to me. I also understand that it added features very useful to other people.)
Pulseaudio is opinionated in favour of quality. For example, when attaching an input to an output, pulseaudio will resample by a minuscule amount to sync the clocks (e.g. 47999 Hz to 48000 Hz).
This sort of thing does not come for free, and any other solution that is using less CPU may be cutting corners in areas like this.
For me Pulseaudio is still my weapon of choice for running a dedicated realtime DSP where the input and output are on separate cards/clocks, but I can see how it has grown beyond sane scope for ordinary desktop usage
One time, many moons ago now, I noticed Pulse taking a reasonable fraction of a core and thought I'd zap PA to try to use less CPU.
Unfortunately, using my media player without PA actually used more CPU overall -- the audio player's CPU use went up by more than PA had been using. So I went back to using PA.
There is a sibling comment describing the same experience. But I think that this behavior can't be reproduced nowadays.
I think it must depend on the relative performance of the software resamplers involved (ALSA's in-kernel plughw vs. PulseAudio's vs. your media player's).
Do you remember which media player triggered this?
I just tried, but I could not reproduce this on my laptop here. The overall system load is consistently higher with PulseAudio when playing an audio file with mplayer -- even when using "plughw" (which allows in-kernel software resampling) instead of "hw" (which does not). It must be hardware- or system-dependent.
Also, I'm curious why this happens on your system. PulseAudio itself uses ALSA (the standard kernel API for audio) behind the scenes, like any application would, directly, in its absence. In addition, it must carry audio samples across process boundaries and perform software mixing/routing, all with low-ish latency (hence small buffer sizes). All this with negative (up 25%!) CPU usage?
One possible explanation: I did read many times that PulseAudio "fixes ALSA bugs" or "fixes buggy applications". However, I can't remember encountering any such bug in the last few years (or any PulseAudio bugs either, to be honest). Nowadays, most applications that can use ALSA directly do so through alsalib, which could just as well iron out such issues if any remain.
i7-6500U (I don't have access to this one right now, but it was an Asus ZenBook Flip -- on this one I've definitely seen 15%+ usage by PulseAudio during video calls.)
i5-7200U with conexant CX8200
The latest desktop is:
Ryzen 5 2600 with realtek ALC887-VD (on this one the CPU load was lower than on the others, but still there)
Wanted to try debian 11 last weekend. It was the first time I installed it from debootstrap. I plugged a 1TB SSD into a USB 2.0 to SATA case, formatted it, prepared the root filesystem with debootstrap, chrooted into it, installed a bunch of packages, set up locale and timezone, installed linux-image and grub, and booted it. Worked flawlessly.
Tried a "ps|aux" and discovered it is already running wayland and pipewire. Works so well it is boring. Nice!
This has been my experience with Wayland/Pipewire - Things just work.
I've been blown away by how good things like touchpad/gesture/multi-monitor support are in Wayland as well.
I tried really, really hard to run linux on a laptop in 2010, and eventually just settled on virtualizing it for acceptable device compatibility.
In 2020, my linux laptop genuinely feels much better than the macbook my office issued me. Like - actually better. Not "Better because I control it and it's free/opensource", just "Better".
>In 2020, my linux laptop genuinely feels much better than the macbook my office issued me. Like - actually better. Not "Better because I control it and it's free/opensource", just "Better".
Yes! This is the point of Linux and other open source. We don't use open source because it's "morally better", we use it because it consistently produces superior solutions.
Long time daily Fedora user here. PipeWire is what we've needed for so long in terms of audio, not to mention more recently in terms of screen capture with Wayland.
The only issue I've run into is that for some apps (Zoom, Firefox, Chrome) which are set to use my "default audio device" for input or output, it selects the correct device initially, but if I change the default/system device (via gnome sound settings) later while it's in use, most (all?) of the time, the application continues using whatever device was selected initially.
I haven't bothered to debug or see if this is a known issue as it hasn't bothered me much since most apps allow switching to explicit input/output devices which works fine. I believe (but haven't definitively confirmed) that changes via pavucontrol also work, so perhaps it's something with the gnome sound settings?
Anyway, huge thanks to the devs who put their time, effort, experience, and talent into making PipeWire happen. It's a huge step forward on multiple fronts.
Pipewire is great, but it is sadly lacking in end-user documentation. With pulseaudio, if something broke or needed tweaking - I had thousands of forum posts to go through. Even the Arch Wiki has a PulseAudio/Troubleshooting section with various configuration suggestions.
For Pipewire, there is no end-user documentation that I've found so far.
On one of my setups, all media stops playing, including videos, if I forget to plug in my headphones. Restarting pipewire or plugging in devices after that doesn't seem to help, and I don't know enough about the pipewire configuration files to even attempt any changes. I would really appreciate some end-user debugging and configuration documentation.
If you were to experience a novel or rare problem with PulseAudio, you would be in the same boat. The more apt comparison is how likely a user is to encounter a problem in the first place, rather than taking it as a given that you're dealing with one.
Currently you want bleeding-edge PipeWire. Many underrun issues are addressed in newer versions. Arch Linux seems to stay updated; if you're not on a rolling distro (Debian, for example) you might need to pin some packages from a future version.
I also have this from earlier when I had errors; now I use it only right after startup because I'm doing output to multiple audio cards simultaneously.
(The -multi one is a local service based on the guide for multi-output audio.)
If you see log entries similar to: pipewire-pulse[1173]: client 0x55f2d47effa0 [Firefox]: stream 0x55f2d484f7d0 UNDERFLOW channel:0 offset:221184 underrun:16384
I fixed this for multi-output audio by 'increasing the headroom'
cp /usr/share/pipewire/media-session.d/alsa-monitor.conf ~/.config/pipewire/media-session.d/
vi ~/.config/pipewire/media-session.d/alsa-monitor.conf
There's a section near the end for alsa_input and alsa_output; within the actions / update-props block, I un-commented and set:
However I believe that's specific to my multi-audio output setup and a mixture of different soundcards; I could probably hunt for a lower headroom value.
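For anyone hunting for the same knob (the exact value used above isn't shown), the property in question is api.alsa.headroom; purely as an illustration, the stanza in alsa-monitor.conf looks something like:
actions = {
    update-props = {
        api.alsa.headroom = 1024   # illustrative value only; larger headroom trades latency for underrun resistance
    }
}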
I find it beautiful that the LDAC codec used with Sony wireless Bluetooth headphones only works on Linux through PipeWire. macOS is stuck with AAC, and Windows gets aptX. Linux gets all of them.
> macOS is stuck with AAC, and Windows gets aptX. Linux gets all of them.
I believe 21H2 is slated to bring AAC over to Windows, along with some other Bluetooth related enhancements. (They now merge the Headphone/Headset devices so you don't get the "I have no audio whenever I start a call" confusion with playback switching)
The $350 WH-1000XM4 has invariably higher-bandwidth, higher-resolution audio than the $500+ AirPods Max. Whoever set that price at Apple must be squirming in their seat; I can't seem to find it being sold new for more than $550.
This is part of the reason why I returned my AirPods Max two days after trying them out and stuck with my Sony WH-1000XM4s. Even though the AirPods were better for quick switching between devices, have a better voice chat experience, and have better noise cancellation, they don't universally sound better. I wasn't impressed at all by the sound quality. They also clamp extremely hard on my big head compared to the Sonys. The lack of comfort on such expensive cans is a big deal breaker.
I'm keeping a close eye on what Sony does next, because most of what they're doing now is really good.
I've had the previous version of those earphones for almost two years now, they've been working flawlessly. The only issue is the foam in-ear insert is getting old and less comfortable, but that's a cheap fix on Amazon (and not a problem with the plastic inserts).
I switched over from pulseaudio to pipewire last week, and I can say it just works (tm), which is the highest praise I can give something like this as a non-power-user.
Switched to pipewire last year at some point and it has been remarkably pain free.
Bluetooth support seems more robust (no more weird 'have to reboot my computer to connect to these headphones for some reason') but the real headline feature is I no longer have to screw around to get pulseaudio and jack to coexist peacefully (nigh on impossible in my experience)
I don't have that experience (using AAC, that is), but I do enjoy being able to select the Bluetooth audio codec with Pipewire (SBC, AAC, SBC-XQ, mSBC, etc.).
The array of supported and configurable Bluetooth codecs is the reason I switched. Now I can use AAC for music, SBC-XQ for video and interactive content, and switch to mSBC when I need to make a call, all from the user interface.
> I just don't want to try to hack a bunch of configuration files from pulse.
When I migrated from PulseAudio to PipeWire, I started by deleting all PulseAudio packages and installing the PipeWire packages. Then I rebooted and as it turns out, I was done. No fiddling with config files required here. Everything worked immediately.
I moved from PulseAudio to SnapCast for streaming audio over my local network as it was easier to configure multiple sinks for multi-room audio.
It looks as though there is support for similar with PipeWire, with the ROC sink and ROC source modules[0][1]. I'm going to have to set aside some time to have a poke around with this and see how it performs.
I'd appreciate any notes on your experiences doing this, even in a trashy pastebin dump. I briefly looked at this but didn't have time to implement it correctly, and suspect there'll be a few gotchas given the relatively young age of the project.
Do you have documentation or any blog post on your setup? Last time I tried snapcast, it worked, but eventually there was too much lag between nodes. And there is no UI bundled by default, so I was wondering what the most mature frontend would be.
Not sure if this is what you’re looking for but I use Snapcast to stream from a record player to anywhere on the network [1]. Snapweb [2] works really well but has some issues on iOS currently. Some alternative clients are listed on the main repo [3].
I've found the AV sync considerably off watching Youtube videos in Firefox with Bluetooth earphones (not tried other combinations) since switching to Pipewire. It's not something I do enough that I was bothered to switch back to Pulseaudio, and haven't really dug into it yet.
But while it's mentioned here.. anyone else have that, or know what it might be?
I use Bluetooth ear buds (OnePlus Buds) daily on Fedora with Pipewire and haven't noticed any perceivable sync issues myself.
Edit: Other comments are mentioning codec selection for Bluetooth in relation to lag/sync issues. I'm using whatever is default on Fedora 34. I've never looked and am not at my computer to check right now unfortunately.
I've replaced jack with pipewire for all my audio work. It works pretty well, still a few xruns occasionally, but it has been really transparent with pw-jack and has greatly reduced the crashes I used to see with jack, which would kick all the clients off from time to time.
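For anyone who hasn't tried it: pw-jack just runs a JACK client against PipeWire's JACK library, and the buffering a client asks for can be set through an environment variable. A sketch (the application name is a placeholder and the quantum/rate values are only examples):
pw-jack ardour                              # run a JACK application through PipeWire
PIPEWIRE_LATENCY=256/48000 pw-jack ardour   # request a 256-frame quantum at 48 kHz for this client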
PipeWire is fantastic already, and I can't wait for it to take Linux audio to where it should and can be!
I've got my broken mess of a configuration for alsa, mpd, mpv, Firefox and pulseaudio all working again after a long, broken spell.
The only thing I haven't gotten working is the upmixing of channels for 5.1/7.1 audio channel movies on mpv. Currently I rely on my alsa upmixing setup in ~/.asoundrc but am looking forward to having that work with Pipewire alone, hopefully soon.
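For context, the kind of ~/.asoundrc upmixing being referred to is roughly what the alsa-plugins upmix plugin does; a sketch under that assumption (field values are illustrative):
pcm.upmix51 {
    type upmix              # provided by alsa-plugins (libasound_module_pcm_upmix)
    slave.pcm "surround51"
    channels 6
}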
There was a tool called PulseEffects that provided a lot of useful audio effects for PA, but when PW got more stable the author dropped support for PA in favor of PW, and the name was also changed to EasyEffects.
I started using it recently (also for EQ) and it works fairly well despite still being in very active development.
The cool thing is, you get all the benefits of pipewire even if the audio comes in over pulse's API. All pulse audio applications magically got faster/lower latency/less buggy for me when I started using pipewire.
I’d love to try it out, but for the time being I’m running Ubuntu (for work needs) and there doesn’t seem to be any trivial/non-invasive ways to get PipeWire to run on a regular Ubuntu desktop install.
Maybe it makes it to next LTS as an apt-gettable option? I’ll keep my hopes up :)
I don't know about any LTS versions, but 21.04 has pipewire in the standard repo. I expect PW to be in the 22.04 LTS based on that, but I don't expect it to be the default any time soon.
This PPA [0] works fine for me in 21.04 to get the latest version. Support for the PPA seems to go back to Bionic, so you should be able to use it on recent LTS versions of Ubuntu as well.
The only annoying part of the whole process is that the migration requires running a bunch of commands to stop and mask PulseAudio and to enable pipewire. This guide [1] worked great for me.
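For the curious, the "bunch of commands" amounts to roughly the following on 21.04 (package and unit names as used by the common guides; double-check against the guide for your release):
sudo apt install pipewire pipewire-audio-client-libraries
systemctl --user --now disable pulseaudio.service pulseaudio.socket
systemctl --user mask pulseaudio
systemctl --user --now enable pipewire pipewire-pulse
pactl info | grep '^Server Name'   # should mention "on PipeWire" once the switch has taken effect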
Ubuntu has been burned by adopting pulseaudio too early. I've had many problems in 2008 and still have some today. They will probably be more conservative when switching this time.
Has anyone had a chance to test how Pipewire works with low latency? I'm excited at the idea of replacing my Pulse+ALSA setup, but I'm also looking for the lowest possible latency I can get as I use my DAW as an instrument :)
It works as well as your hardware and sources allow. Pulseaudio added notable latency on my system to begin with, whereas pipewire defaults to minimal latency instead, which has corner cases that are covered by reverting to more "headroom" (device buffer?), similar to the existing behaviour of the alternatives.
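If you would rather pin things down than rely on the dynamic defaults, the global clock settings live in pipewire.conf; a sketch with example values (not recommendations):
# ~/.config/pipewire/pipewire.conf (copied from /usr/share/pipewire/pipewire.conf)
context.properties = {
    default.clock.rate        = 48000
    default.clock.quantum     = 256    # roughly 5.3 ms at 48 kHz
    default.clock.min-quantum = 32
}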
I switched about three months ago from pulse to pipewire. Since then I've been experiencing no audio output almost every day when I resume my laptop from suspend (s2ram). I did a few tests: reinstalled all the pipewire packages, removed configuration directories, upgraded to the latest available version (Manjaro = 0.3.34). So far nothing has helped. The only thing I can do to fix this is restart the pipewire services. I'll wait for 0.3.35, but if the issue still persists then I'll switch back to pulse...
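In case it helps anyone hitting the same thing, the recovery step is just restarting the user units (unit names as shipped on systemd-based setups):
systemctl --user restart pipewire pipewire-pulse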
When I first tried it a year ago, it worked for some time until some update broke my headphones. Half a year ago, when I installed another distro, I gave it another shot and it worked OOTB. But unfortunately a month ago my headphones stopped working again, although they still/always worked in pulseaudio. There seem to be a few open issues matching my symptoms, but the maintainers haven't fixed the regression yet…
Until then I’ll be having to use pulse
Depends on what desktop environment (compositor, specifically) you're using. Gnome (Mutter) has RDP support built in, but the documentation is pretty scarce and it only has a CLI interface. I'm not aware of any other compositors with RDP support over Wayland; they mostly only use VNC.
I tried to hook up a midi keyboard to a raspberry pi, for my kid to practice piano but it didn’t work well. Was Ubuntu Mate with timidity soft synth. Latency was about half a second from key press to sound which made it unplayable. Not sure what else could be done but hopefully this project is a solution.
I use an RPi4 for a high end self-built e-piano (StudioLogic weighted key controller, Pianoteq for synthesis, ART monitors, HifiBerry pro-audio DAC). Latency is fine, but you won't get the same results with stock Ubuntu Mate. The "solutions" don't involve Pipewire per se, although in the future that might be the technology in use.
Thanks. In debug mode I saw midi messages immediately so don't think it had anything to do with the keyboard itself or the input connection. The delay seemed to be on sound output.
The only thing that seems pertinent to the discussion is perhaps the "Pianoteq" which I'm not familiar with. Is that improving on stock Ubuntu somehow?
Pianoteq is a proprietary physical modelling synth (it computes the sound of a piano for every note; no samples). That is not relevant.
Lowering latency consists of balancing two opposing "forces".
To lower it, you "simply" need to tell the software that you're using to use a small buffer size. This reduces the time between the MIDI event arriving in the software and the software producing some sound corresponding to that event.
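To put rough numbers on it: per-period latency is approximately the buffer size in frames divided by the sample rate, so a 64-frame buffer at 48 kHz is about 1.3 ms and a 256-frame buffer is about 5.3 ms, whereas the half-second delay described above corresponds to something on the order of 24,000 frames of buffering somewhere in the chain.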
However, there are limits on how low you can go, some caused by hardware and some caused by software. The kernel configuration itself can play a role, along with a variety of different devices and device drivers. This document tries to help you understand the sorts of system (hardware-ish) issues that can prevent you from reducing the buffer size:
Ensuring that your software runs with realtime scheduling (SCHED_FIFO) is where it can be necessary to adjust stock Ubuntu systems to give you permission to do this (you are not allowed to do so by default). In addition, the software you're using (Timidity) will not necessarily do this by itself.
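On Ubuntu-flavoured systems that permission is conventionally granted through PAM limits for the audio group; the usual stanza (the group name and numbers follow common convention, adjust to taste) is:
# /etc/security/limits.d/audio.conf
@audio - rtprio 95
@audio - memlock unlimited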
If you want to know more, I'd suggest visiting linuxmusicians.com and using their forums.
Whatever is standard with ubuntu, pulseaudio with defaults I believe. At some point I installed jack but it didn't seem to help. I gave up as several hours were all I had to spare. :-/
PipeWire is a fantastic bit of software and it's great on the desktop. I would play around with it in the studio but it lacks a netjack2 equivalent. zita-n2j is not a replacement since it requires the client to have audio hardware.
I'll stick with Jack2 + Netjack2 in the studio until that gets sorted.
PulseAudio was and is the disaster that the LAD community predicted. Whenever a package manager forced it onto me, I had to figure out how to get rid of it again. Hopefully PipeWire will improve things.
In my experience, the Manjaro and Ubuntu package managers make the PipeWire packages "provide" the necessary PulseAudio metapackages to avoid dependency issues. I guess that means even if you don't plan to switch, installing PipeWire right now may help you prevent your package manager from installing Pulse as a dependency.
Patiently waiting for Linux to have one audio server that will offer compatibility with all previous ones (that is, all software requiring other audio systems such as oss, alsa, PA, Jack, etc. will be handled gracefully without installing them) while encouraging direct support in new apps, then slowly phasing the old ones out until we have a consistent, possibly layered, low-latency audio stack that scales from small single-board computers to full-sized audio workstations. Choice is a good thing, but too much of it can have negative effects by slowing development and porting down (see window/desktop managers).
PipeWire is aiming, and positioned to be this service. I don't have a particular case for low-latency audio so I can't comment on that but it is amazing that you can use pavucontrol to tweak your device configuration and volumes then use qjackctl to wire the devices however you want. I'm impressed at how well it works and it even supports better bluetooth codecs than pulseaudio does.
It has already shipped by default in Fedora and is available in Arch and NixOS. It does look like it will quickly wipe out PulseAudio by default. I'm sure Jack and ALSA will hang in there longer but they are already used much more rarely (by users, the ALSA API will stick around for decades more I'm sure).
Interestingly, FreeBSD and Linux both started with the same audio stack (OSS) and both rewrote it twice[1], but FreeBSD has kept the original OSS API while adding essentially the same features, it seems.
The big selling-point of ALSA, from what I remember, was that OSS couldn't provide mixing to support multiple applications - but FreeBSD's OSS does that. Some of the main selling-points of PulseAudio are per-application volume, low latency and high-quality resampling, all of which FreeBSD's OSS claims to do as well.
To be a bit more on the nose than the other replies? Did you read past the headline at all?
This is perhaps one of the best examples of not just making a new standard, but making one implementation to rule them all and implementing all the competing standards.
> Patiently waiting for Linux to have one audio server that will offer compatibility to all previous ones
That's PipeWire!
> then slowly phasing them out until we have a consistent, possibly layered, low latency audio stack that scales from small single board computers to full sized audio workstations.
Does PipeWire have its own interface for applications yet? I haven't heard of it if so. That's good; let everything marinate for a while before trying to design the "one, true interface".
Pipewire already implements the ALSA, PulseAudio and JACK interfaces.
I don't think alsa is going away, but I can see PulseAudio and JACK being replaced by it quite quickly.
I don't think ALSA and Pipewire fill the same niche anyway? I was under the impression ALSA was an API for directly talking to the audio devices, which would be used by Pipewire or Pulseaudio or that in-house solution ChromeOS uses, or whichever flavor of mixer.
End-user applications talking directly to ALSA will probably become more uncommon, though.
I wonder, problems with PA emulating ALSA were AFAIK always blamed on apps abusing ALSA APIs. Does Pipewire do a better job of emulating ALSA than PA did, or does Pipewire just rely on the fixes installed to make those ALSA apps start to work OK on PA?
I'm not familiar with the details of this topic, sorry. I believe PA did (or still does) publish guidelines for a 'safe' subset of the ALSA API.
Don't know if Pipewire has published something similar, but I would guess there are corner cases in the ALSA API that wouldn't work well on top of Pipewire. E.g. the buffer rewinding, considering that Pipewire is explicitly designed around never doing that.
That would be great for new software, but what about older software that doesn't know about Pipewire? Can Pipewire mimic the presence of, say, Pulseaudio or Jack, so that one could safely uninstall them without breaking anything?
As an example, I installed Pipewire and other related packages, including pipewire-pulse, then proceeded to uninstall Pulseaudio, but the package manager complained that to continue I would also have to uninstall gqrx-sdr, which is a software-defined radio program that requires Pulseaudio as a dependency. In other words, to have everything working, I would still have to keep all the old audio systems in place anyway, a scenario that reminds me of that old xkcd cartoon about standards.
The Arch Wiki page describes some of the use cases for PipeWire: https://wiki.archlinux.org/title/PipeWire