Miller Puckette, the original creator of Max and Pure Data, has been working on keeping connected remotely, too. In this video, he reveals how he plays with a percussionist using Pd and Ableton Live, then joins Cycling ’74’s David Zicarelli to talk about the future of collaboration in modular environments.
He works with percussionist Irwin, and this is about as overqualified a talk as you’ll ever get on this subject.
My musical collaboration with percussionist Irwin took an unplanned turn when we started working remotely. Over the past year we’ve developed a workflow that allows us to perform together in real time using instruments that I write in Pure Data but Irwin plays in Ableton Live, with audio, control, and video streams bouncing back and forth between our offices. The solutions we’ve found are interesting both in how we deal with latency limitations and in how the distinction between environments and pluggable modules has shifted, so that an entire software environment can pretend to be a module inside a different one.
Beware: something really glitchy happens to the sound three minutes in, though I would absolutely watch Miller as Max Headroom. It comes back when they start to play music. Or just skip ahead to around 11:40.
Hey, if IRCAM and Miller are struggling with audio routing, it gives the rest of us permission to have screwed some things up in the past twelve months. Or so I might have heard happening to someone, definitely not me.
Anyway, the tools being used here are all worth a mention:
Quacktrip and Netty McNetface are network audio tools that run as Pd patches, so they’re ideal for quick-and-dirty Pd patching. (This also helps if you have Ableton Live but not the latest Live Suite with Max for Live.)
Irwin is playing these simple but elegant, sculptural pieces of wood fitted with piezo pickups.
In Pd, you get two big patches. There’s a nonlinear finite delay network (basically, think tons of delays turning the impulse from the percussion into acoustic-sounding timbres; you might want to read about delays and feedback in the Pd docs). And there’s “BELLO,” a 3F algorithm used as a filterbank, which also goes back to modeling theory. I’ll ask Miller if he wants to share more on the sound design aspect at some point; it’s clear in the video he expects an IRCAM-y crowd (and even some audio processing nerds might miss some of the history or context).
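To make the “tons of delays” idea concrete, here’s a toy feedback delay network in Python. This is purely an illustration of the technique, not Miller’s actual patch: the delay lengths, feedback gain, and the crude cross-feedback “matrix” are all invented for the example. An impulse goes in; a dense, decaying tail comes out.

```python
# Toy feedback delay network: an impulse in, a dense tail out.
# All delay lengths and gains are made-up illustration values.

def fdn(signal, delays=(149, 211, 263), feedback=0.6, length=2048):
    """Run `signal` through parallel delay lines with cross-feedback."""
    out = [0.0] * length
    bufs = [[0.0] * d for d in delays]   # one circular buffer per delay line
    idx = [0] * len(delays)
    for n in range(length):
        x = signal[n] if n < len(signal) else 0.0
        # read the oldest sample from each line
        reads = [bufs[i][idx[i]] for i in range(len(delays))]
        out[n] = x + sum(reads)
        # feed each line with the input plus a rotated mix of the others
        for i in range(len(delays)):
            bufs[i][idx[i]] = feedback * (x + reads[(i + 1) % len(delays)])
            idx[i] = (idx[i] + 1) % delays[i]
    return out

impulse = [1.0]
tail = fdn(impulse)   # echoes appear at 149, 211, 263 samples, then densify
```

With mutually prime delay lengths the echoes quickly stop lining up, which is what pushes a discrete click toward something resonant and room-like.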
That research was done by famed composer and IRCAM legend Philippe Manoury. And after some digging, I found a reference to the 3F synthesis approach Miller is talking about. It’s in French, but here you are:
Whether or not you want to go down that particular rabbit hole, though, the fundamental concept here is working with sound, streaming that sound, and applying audio-domain modifications to it. Keep the original playing local to the musician so they don’t hear it with latency, then add the response with latency, where it’s less of an issue. It’s also a great reminder of how nice it is to work with audio signal and not only control signal.
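The latency trick is easiest to see as arithmetic. Here’s a minimal sketch (my own, with invented numbers): the player hears their dry hit instantly, and the remotely processed response is simply mixed in some samples later. Because the response is a reverb-like tail rather than the attack itself, the delay reads as room sound, not lag.

```python
# Toy illustration of the "keep the dry signal local" trick.
# `latency` stands in for the network round trip, in samples.

def monitor_mix(dry, wet, latency):
    """Mix dry (local, zero latency) with wet delayed by the round trip."""
    out = [0.0] * max(len(dry), latency + len(wet))
    for i, s in enumerate(dry):
        out[i] += s            # heard instantly
    for i, s in enumerate(wet):
        out[i + latency] += s  # arrives late, which is fine for a tail
    return out

hit = [1.0, 0.5]                  # what the percussionist plays
response = [0.3, 0.2, 0.1]        # what comes back from the remote patch
mixed = monitor_mix(hit, response, latency=4)
# mixed = [1.0, 0.5, 0.0, 0.0, 0.3, 0.2, 0.1]
```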
Feel like I’ve been looking at a lot of these sorts of diagrams lately, but here you go:
Some other Pd secrets here:
FUDI is one of my favorite things ever. The basic idea is, you send messages as strings delimited by semicolons, and … that’s it, actually. Someone nicely fleshed out the Wikipedia article on it:
The joke is, it’s Fast Universal Digital Interface – which, I guess, in French sounds like a cute slang term for butt? Such is my recollection.
Anyway, this is not some fancy protocol like OSC. It’s just sending the simplest possible message over the network, which is often desirable – and deliciously easy to patch even for beginners.
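That simplicity is the whole point: a FUDI message is just atoms separated by whitespace, terminated by a semicolon. Here’s a minimal Python sender as a sketch; the port number and message contents are made-up examples, and on the Pd side you’d be listening with something like [netreceive].

```python
import socket

# Minimal FUDI sender. A FUDI message is atoms separated by whitespace,
# terminated by a semicolon; the trailing newline is a common convention.

def fudi(*atoms):
    """Format atoms as a single FUDI message string."""
    return " ".join(str(a) for a in atoms) + ";\n"

def send_fudi(host, port, *atoms):
    """Open a TCP connection and send one FUDI message."""
    with socket.create_connection((host, port)) as s:
        s.sendall(fudi(*atoms).encode("ascii"))

# fudi("volume", 0.8)  ->  "volume 0.8;\n"
# send_fudi("localhost", 3000, "note", 60, 127)
```

Compare that to OSC’s type tags and binary packing: with FUDI you can inspect the whole “protocol” with your eyes, which is exactly why it’s so patchable.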
The other secret sauce is Camomile, which now makes it pretty easy to wrap your Pd patch as a VST, VST3, AU, or LV2 plug-in for use in another host. That includes Live (Pd for Live), but other hosts, too, obviously, including on Linux.
There is a wealth of talks and concerts on IRCAM’s channel. The two parents of Max come together: Miller plus Cycling ’74 founder David Zicarelli, for a talk on “the future of music software” and collaboration.