AudioGridder – DSP servers using general purpose networks and computers (github.com/apohl79)
60 points by apohl79 on April 22, 2020 | hide | past | favorite | 31 comments



This is fantastic. Would it be possible to make it cross-platform? I like using Logic on my MacBook, but I also have an AMD 3900X sitting around which I'm currently not using for audio. I was looking into running VSTis on it as a slave and just routing audio back and forth, but would it be possible to use it for distributed DSP directly?


Well, as AudioGridder is using JUCE, many parts of the code are cross-platform already. But there are some parts of the network code, as well as the screen capturing and keyboard/mouse code in the server, that are macOS-specific. I would expect the effort to add support for other platforms to be reasonable, so I hope there will be interested developers who could have a look at this.


This is really cool! This is kind of a cloud CPU version of Universal Audio Apollo, which offloads processing onto local DSP chips.

What's the lowest latency achievable in practice? I'd be surprised if you can run anything under 50ms, so it's probably limited to mixing use-cases, which is already really cool because that tends to be the "plugin-heavy" phase.

How does it work in terms of licensing for the plugins that run remotely? I'm not familiar with common licenses used by plugins.


You can get below 50ms. But yes, the intention in building this was mixing.

You can start with no additional buffering in AudioGridder. The additional latency on top of your configured I/O latency (based on the I/O buffer size in your DAW) will be the network RTT plus the DSP processing time of one sample block.

The lower the latency the more fragile the setup will be. That’s why you can add additional buffering in AudioGridder.
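The arithmetic described above can be sketched as follows. This is a back-of-the-envelope estimate only; the function names and the example RTT and DSP-time figures are illustrative assumptions, not values from AudioGridder itself:

```python
# Rough latency estimate for a remote DSP setup like AudioGridder.
# All figures below are illustrative assumptions.

def block_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time span that one audio block represents, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

def total_added_latency_ms(buffer_samples: int, sample_rate_hz: int,
                           network_rtt_ms: float, dsp_time_ms: float,
                           extra_buffers: int = 0) -> float:
    """Latency added on top of the DAW's own I/O latency:
    network round trip + remote processing of one block,
    plus any optional safety buffering."""
    return (network_rtt_ms + dsp_time_ms
            + extra_buffers * block_latency_ms(buffer_samples, sample_rate_hz))

# Example: 512-sample blocks at 44.1 kHz, 3 ms RTT, 2 ms remote DSP time
print(round(block_latency_ms(512, 44100), 1))                   # ms per block
print(round(total_added_latency_ms(512, 44100, 3.0, 2.0), 1))   # added latency
```

Each unit of extra buffering trades one more block's worth of latency for robustness against network jitter, which is the trade-off the comment above describes.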


Surprising that this makes sense rather than just doing the processing locally or on the GPU. What kinds of filters are people using that are this intensive?


GPUs may be used more and more for usage outside of graphics, but they don't lend themselves very well to audio, in particular recursive filters which are very common.
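The recursion is the crux: in a one-pole IIR filter each output sample depends on the previous output, so the inner loop cannot be split across thousands of GPU threads the way per-pixel graphics work can. A minimal sketch (the coefficients are arbitrary illustrative values):

```python
# One-pole IIR low-pass: y[n] = b*x[n] + a*y[n-1]
# The dependence of y[n] on y[n-1] forces sequential evaluation,
# which is why such filters map poorly onto massively parallel GPUs.

def one_pole_lowpass(x, a=0.9, b=0.1):
    y = []
    prev = 0.0
    for sample in x:
        prev = b * sample + a * prev  # serial dependency on previous output
        y.append(prev)
    return y

# A step input converges gradually toward 1.0:
out = one_pole_lowpass([1.0] * 8)
print(round(out[0], 3), round(out[-1], 3))
```

An FIR filter, by contrast, has no such feedback term and parallelizes trivially, which is one reason the GPU audio plugins that do exist tend to be convolution-based.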


Nvidia has been working on this a lot lately. Check out CUDA Graphs. It's not there yet, but they're working on reducing kernel launch times. You still have the overhead of moving data to/from the GPU, but if you can come up with a complicated workflow that can run on the GPU it may be worth it.


Yeah, I think the unreliable scheduling is also an issue with real-time DSP for audio on GPUs.

The only audio plugins I've seen that use the GPU were a couple of convolution reverb and "dynamic" convolution plugins (Nebula - I think they have since dropped GPU processing), and Nvidia has some real-time noise reduction thing running on the GPU aimed at gaming (RTX Voice - https://www.nvidia.com/en-us/geforce/guides/nvidia-rtx-voice...)


I have a virtual synth with such intricate discrete component modelling that it can bring a modern i7 to its knees all on its own.


Very cool that this is possible. I wish it gave some numbers about best-case and typical latencies that are added by using this.


This requires experimentation. On my network I get around 1-5 ms RTT. It's a 1 Gbit/s Ethernet connection giving around 100 MB/s throughput. So I'm _usually_ able to work with an I/O buffer size of 512 samples, which is ~23 ms at 44.1 kHz, and no additional buffering in AudioGridder. Larger projects with many channels or complex plugin chains probably require larger buffer sizes.
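Checking the numbers above: a single 512-sample block at 44.1 kHz is about 11.6 ms, so the ~23 ms figure presumably counts both the input and the output buffer of the DAW's I/O path (an assumption on my part, as the comment doesn't say):

```python
# Sanity check of the quoted figures, assuming the ~23 ms covers
# both the input and the output buffer (round trip through the DAW).

buffer_samples = 512
sample_rate = 44100

one_way_ms = buffer_samples / sample_rate * 1000  # one buffer's worth
round_trip_ms = 2 * one_way_ms                    # input + output buffer

print(round(one_way_ms, 1))     # one block
print(round(round_trip_ms, 1))  # both directions
```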


Will someone please make an external general purpose DSP box (for Max/MSP, etc) similar to an external Thunderbolt GPU?


These have existed for a long time. They used to be called sound cards and maybe still are; I've been calling them audio interfaces for the last decade. Getting what you need in this space is an exercise in defining your actual requirements, because there is no standard for what's important and what's not. A good one can do wonders for latency when doing a lot of DSP.


That's not what parent is asking for (especially given the context). Sound cards don't perform "general purpose DSP" for "Max/MSP, etc", they focus on input, output, and closely related tasks.

I think the answer would essentially have to be a full-fledged computer running something like the AudioGridder server. Except ironically that still wouldn't help you run generic "Max/MSP, etc", it would only run the plugins supported by AudioGridder (VST3/AU). The scope would necessarily be limited by software, because that software usually expects to be running on your CPU.


There are a couple of audio interfaces that have the potential. UA interfaces have DSP chips that can run (sort of) general purpose stuff that could be leveraged by Max/MSP if the SDK were open (it may be, I haven't looked). Also, RME interfaces often have an FPGA in them which I've often thought would be a useful co-processor for audio, but I'm pretty sure that they aren't user programmable either.

Basically, the hardware has been around for ages, but the software is non-existent/limited because the vendors haven't fully realised the potential of the hardware, and reverse engineering this stuff is really hard!


> that software usually expects to be running on your CPU

It's implied that software would have to be rewritten to support this new device, like how all graphics software was rewritten to run on GPUs when they first appeared.


Can you describe a little better what this box's purpose would be (scenarios most relevant to the development becoming a commercial success), because a cursory glance at the real-time audio part of Max/MSP (and the Pd (Pure Data) fork/clone) suggests that specialized audio DSP hardware would not be beneficial, compared to using a CPU/distributing it over multiple cores, or potentially even a GPU.


Seconded! Wishlist:

• Absolutely minimal latency.

• Ability to run open source software.

• Stackability (to achieve high horsepower via multiple units)

• Multichannel digital i/o (analog is unnecessary, there are lots of great hardware AD and DA converter units)

• Wordclock

There's plenty of cycles in modern chips, but latency is a killer with native CPU processing.

Alternately, I'd love a RTOS which could run open source software on high-horsepower multi-core CPUs.


You should be able to use seL4 as a hypervisor and stuff a GNU/Linux system inside. The actual low-latency work would be done via native seL4 processes. It's proven to have hard latency bounds, thus being suitable for hard-realtime applications (except for modern x86_64 CPUs having special interrupts that can't be disabled, and thus possess the capability to introduce latency spikes of potentially unbounded duration). The HFT community found ways around those issues, however. It wouldn't be good enough to control a manned aircraft, but for entertainment-related audio, it should easily be good enough (those spikes are around a millisecond or so, iirc).


Ever looked into the Capybara? I wonder if they're still making the product. It was a big ol' bank of DSPs controlled by a Mac.

Ah, here's the current model, the Pacarana: http://www.symbolicsound.com/cgi-bin/bin/view/Products/Pacar...


Yes, this turned up in my search, and a general purpose version of it is exactly what I want.


These exist for AAX plugins and they're naturally pretty expensive and hard to program for as a general purpose unit.


Super cool!

I wonder how/if it handles latency compensation when used with other plugins.


It does take this into consideration and reports it to the DAW. So things stay in sync.


The Reaper DAW comes bundled with ReaMote, which has a similar scope.


Does that work with AU/VST plugins or just the native Reaper effects?


All three: AU, VST and JSFX (the Reaper effects). I've never used it, but I've heard very complex plugins like Kontakt don't work in it? My info might be way out of date.


Awesome - can't wait to try it over the weekend.


Really cool, excited to follow along. Have you tested it in Pro Tools at all?


I was talking to Avid, but their "business" model does not seem to support open source. I have AAX support, but they do not provide me with the signing tools required to sign AAX plugins so they can be loaded by Pro Tools production builds.


Awesome, definitely keeping eyes on this one.



