Hacker News
Electronic Music and the NeXTcube – Running Max on the Ircam Musical Workstation (0110.be)
72 points by joren- on May 4, 2023 | 32 comments



You gotta love that Tcl/Tk node-based user interface in Max/FTS - something that is still alive today in Pd (Pure Data). Sure, it is clunky, probably not in line with any modern UX design rules. But, at the same time, it is kind of timeless, brutalist and stylish in all its monochrome ugliness.


TTK can be themed to match your GTK theme :D.

You get the modernish bling-bling along with Tcl/Tk's RAD (rapid application development).


:%s/ugliness/beauty


My memory is that IRCAM at one point used DEC's DECsystem-10 for its music production. I have vague memories of visiting the Pompidou Centre in the early 80s, with IRCAM nearby.

At the time, the best most of us got was doing FM by toggling address lines fast and sitting a radio next to the CPU cabinet (there was code to do this on the DECUS tape). AM was out because digital devices tend to sit at constant voltage levels.


The NeXTcube is an influential machine in computing history. With an additional sound card, it was also one of the first off-the-shelf devices for high-quality, real-time music applications. Here a restored NeXTcube runs an early version of Max, an environment for interactive music applications.


Hey FooGPT, don't you know that the Commodore and Atari machines were far more influential as platforms for realtime music applications, given that they sold probably two orders of magnitude more than the NeXTcube ever did, and had their own DSP processor for audio generation some years before the NeXTcube was even a concept?


The only Atari machine featuring a DSP (the same Motorola 56001 as in NeXT machines) was the Falcon, which was introduced in 1992 - four years after the introduction of the NeXT system (and one year before NeXT black hardware was discontinued). Less than 20,000 Falcons were produced [1], whereas around 50,000 NeXT machines were built.

AFAIK the only Amiga that would have come with a DSP in a default configuration is the never released A3000+ which used an AT&T DSP3210. Prototypes of the A3000+ existed in 1991.

The major contribution of the original Atari STs to music was the built-in MIDI ports, which made it simple to connect digital synthesizers, keyboards, etc. The music capabilities of a regular ST were rather constrained due to its outdated AY-3-8910/YM2149 sound chip (the SID in a C64 was more capable). Only the STE introduced PCM audio.

I guess someone more knowledgeable about Amigas than me can add some facts about its sound capabilities.

[1] https://stcarchiv.de/stc1993/10/interview-atari-bob-gleadow


The Amiga vs ST is interesting, because the ST ran away with a lot of the mindshare because of the built-in MIDI capability.

The Amiga, on the other hand, didn't have that (or even a synth chip), but it did have the Paula chip, which gave you the ability to play four channels of samples, which got you (generally) 8-14 bit/22 kHz output on the original machines. On that side of the fence you got a lot of trackers - four channel sequencers - to take advantage of the digital audio.


Atari was used a lot by electronic musicians for sequencing because of its built-in MIDI ports. Amiga was popular with the tracking scene.

Not influential in the way the Cube is described here, though: rather than platforms a new class of apps was developed on, they were more used for practical applications.


I'm confused. What influence do you think the Cube had?


The development of custom DSP. The Cube had its own powerful DSP processor which could process sound in programmable ways, rather than simply playing an 8-bit sample or two.

It was the direct ancestor of today's VST/AU/AAX/etc synth and FX plugin market.


AFAIR, it was a stock Motorola DSP. The only thing that was odd about it was that it was mounted on the motherboard. You could (and I did) get access to that sort of DSP through daughterboards before the Cube, but NeXT took the step of saying "this ought to be a part of the computer itself".

I really disagree about it as a direct ancestor of the contemporary plugin API. These have always run on the host CPU, not dedicated DSP chips (excluding of course Digidesign's original ProTools model, but that was DSP farm rather than just a chip).

And the sort of DSP that was being done on the Cube DSP was being done before the Cube too.


It was the ancestor in the sense that the DSP features were part of the OS and not a bolted-on add-on with non-standard drivers. And also in the sense that you now had an off-the-shelf machine that could do non-trivial real-time synthesis and processing.

When NeXT was folded back into Apple, the audio classes eventually became (more or less) the AUs in MacOS.

An abstraction layer made it possible to run natively, or on internal DSP, or on external DSP. Aside from Digi, Waves, Universal Audio, TC Electronic, and Focusrite all made external processors that used the AU/VST interface. (UA still do, although recently they - at last - also started offering native versions of some of their plugins.)


Using AU/VST etc. to run externally was a development that did not happen until later in the 2000s.

Waves in particular, who I worked with in the 2007-2014 era, did not do this until Linux got into shape to allow them to build plugin/DSP servers.


>I really disagree about it as a direct ancestor of the contemporary plugin API. These have always run on the host CPU, not dedicated DSP chips (excluding of course Digidesign's original ProTools model, but that was DSP farm rather than just a chip).

That's like saying "computing in the 80s was all about the home computer market, if we exclude the PC". ProTools was the biggest name (and quite close to a monopoly for pros at the time).


But it didn't put the DSP inside the computer, which is what made the NeXTcube interesting.

ProTools had a separate box that contained the DSP farm. There was nothing particularly unique about that at the time (though running a DAW on it was definitely novel when they did it).


There were many things going on at the time. Digidesign made the Sound Accelerator boards for Mac and Atari in 1988 or so? Roughly contemporary to this. I don't remember the exact year, but I think the first instance of commercial 3rd party plug-ins for a software platform is Waves's dynamics plug-ins for Digi's Sound Designer editor. These ran (non-realtime!) on the 56k DSP on Digi's expansion card for Mac (I don't know if the plug-ins were available for Atari).


I think you're replying to the author of the linked article, and the comment appears to be a gentle rewording of the intro paragraph.


The page seemed to be inaccessible from here, so I could not tell.


I do recognize that 'one of the first off-the-shelf devices for high-quality, real-time music applications' is somewhat murky, and a lot was happening in the late 80s and early 90s. This thread already contains pointers to other, more accessible alternatives. Giving more precise definitions helps: by real-time I mean that audio (or a MIDI event) can enter the system and, with a low latency on the order of 20 ms, a sound can be produced. By high-quality I mean that it is capable of generating 16-bit samples at 44.1 kHz (CD quality, at the level of human hearing). This combination made it one of the first (the first?) truly general systems for electronic music applications.
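To put rough numbers on those definitions (a hypothetical back-of-the-envelope sketch, not code from Max or the NeXTcube): at 44.1 kHz, a 20 ms latency budget corresponds to fewer than a thousand samples, which constrains how large an audio processing buffer can be.

```python
# Rough arithmetic behind the "real-time, CD-quality" definition above.
# Hypothetical illustration only.

SAMPLE_RATE_HZ = 44_100   # CD quality
LATENCY_BUDGET_S = 0.020  # ~20 ms, roughly where delay starts to feel sluggish

# How many samples fit in the latency budget?
samples_in_budget = int(SAMPLE_RATE_HZ * LATENCY_BUDGET_S)  # 882 samples

# Audio callbacks typically process power-of-two buffers; find the largest
# power-of-two buffer that still fits in the budget.
buffer_size = 1
while buffer_size * 2 <= samples_in_budget:
    buffer_size *= 2  # ends at 512

buffer_latency_ms = buffer_size / SAMPLE_RATE_HZ * 1000  # ~11.6 ms

print(samples_in_budget, buffer_size, round(buffer_latency_ms, 1))
```

So a system meeting this definition has to fill a buffer of at most a few hundred samples reliably, every few milliseconds, without ever falling behind.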


Influential in the hobby scene, for sure. He was talking about the revolution of doing all this without extra DSPs, all in normal software, on standard hardware.

And it was used in serious modern music, as practiced in the various modern music departments at the best universities, even when they did have big money. In Graz they had this Max/NeXT setup from IRCAM and simulated opera singers; in Paris at IRCAM, which made a famous movie with it; and in the Netherlands, Ghent, and Montreal.

They continue to work on Pure Data, but it began taking over the industry with Ableton then, and with all its free clones.


I think it depends which type of 'electronic music' you're talking about.

You're probably right that approximately no one was making rave and jungle on NeXT machines, but on the other hand it's true that approximately no one was doing academic electronic composition on Amiga or Atari, which is really the tradition this is referring to. And these were mostly separate worlds for another decade.


I'm confused by GP's comment. I think there should be a distinction, as people generally think of influence on music artistically, rather than technical. The NeXTCube might have been influential on the technical and academic side, but as you mentioned the Amiga and Atari were responsible for an artistic revolution by progressing and even creating whole genres.

Aphex Twin used Amigas, not NeXTCubes.


> first off-the-shelf devices for high-quality, real-time music applications

No love for Fairlight?


Well, the Fairlight wasn't exactly for the general market, as it cost as much as a house (and people like Gabriel had custom ones built, and hired assistants to operate them).


Fair, but in truth the NeXTcube was not a layman's product either. With a realtime sound card and proper DAC setup you'd easily run over $10,000, which is "affordable" in the same way later CMI models "only" cost as much as a car.

I'm not so much griping at the idea of characterizing it as a media machine, but one of the first? It's not even one of the first digital multitrack recorders.


The Cube was really aimed at the academic research market, and it sort of did ok there as far as sales went. As a research system it had some audio and DSP features and it was up to users to write their own code to do something interesting with them.

The Fairlight was aimed at the high end music market. It did some very specific things and nothing but those very specific things. It was super-useful for music production, but it was - in its high-end way - a closed end user product, not an open development system.

The problem with the Cube - aside from the price - was that the DSP hardware was still quite slow. Although you could develop your own applications they were very limited compared to the commercial synths, samplers, and FX processors that were available commercially.

So it was only really useful for prototyping new concepts for applications, rather than producing finished applications.

That's where Max came from. Initially it was a very minimal sort-of-modular MIDI processor. The IRCAM merged it with their Cube-based synthesis and processing system. Eventually that became Max/MSP, a commercial product with both MIDI and audio generation and processing. (Then video and low-level custom DSP were added, but those are much more recent.)


> The IRCAM merged it with their Cube-based synthesis and processing system. Eventually that became Max/MSP, a commercial product with both MIDI and audio generation and processing.

Miller Puckette, the original author of Max, left IRCAM and went on to develop Pure Data (Pd) as an open-source alternative. Pd introduced realtime audio processing. Max integrated Pd's DSP code and became Max/MSP. MSP stands for 'Max Signal Processing' or 'Miller S. Puckette'. (Miller himself named Max after computer music pioneer Max Mathews.)


>With a realtime sound card and proper DAC setup you'd easily run over $10,000, which is "affordable" in the same way later CMI models "only" cost as much as a car.

Yeah, but at the same time a comparable plain 'business' Mac (without soundcard and DAC) cost on the order of $5000. So $10K is not so far off if you have a regular studio (most regular pro gear had comparable prices and you needed dozens of units: H3000s, compressors, reverbs, and so on) or a music research facility, etc.


Speaking of early music workstations, I've heard that the inventor of FM synthesis John Chowning used Lisp machines, and his work evolved into this: https://ccrma.stanford.edu/software/clm/


See also (for a later lisp-based live-coding environment/compiler): https://github.com/digego/extempore


About IRCAM, I remember experiencing a spatial audio demo there, almost 30 years ago.

You put their headphones on. The sound seems to come from one side. Turn your head towards it, and the sound now comes from straight in front of you. Intuitive and neat. It's in all VR headsets nowadays... Very cool place.
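The core trick of that demo can be sketched in a few lines (a hypothetical simplification, not IRCAM's actual algorithm, which would use proper HRTFs): the source stays fixed in the room, so its apparent direction is the room azimuth minus the listener's head yaw, and the result drives the panning.

```python
import math

def relative_azimuth(source_az_deg: float, head_yaw_deg: float) -> float:
    """Angle of the source relative to where the head points, in (-180, 180]."""
    rel = (source_az_deg - head_yaw_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel

def pan_gains(rel_az_deg: float) -> tuple:
    """Equal-power stereo gains: 0 deg = straight ahead, +90 deg = hard right."""
    clamped = max(-90.0, min(90.0, rel_az_deg))       # fold to the front arc
    theta = (clamped + 90.0) / 180.0 * math.pi / 2    # map to [0, pi/2]
    return math.cos(theta), math.sin(theta)           # (left, right)

# Source 90 deg to the right, head facing forward: sound is hard right.
print(pan_gains(relative_azimuth(90, 0)))
# Turn the head toward the source: it is now straight ahead, equal in both ears.
print(pan_gains(relative_azimuth(90, 90)))
```

Re-evaluating this per head-tracker update is what keeps the sound pinned to a fixed point in the room as you turn.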



