This month, I and a number of other sound designers were asked to participate in a beta/trial run for a new challenge called “Source Jam”, which is being run by Hugh Wielenga and Connor Staton.
All sound designers would be given the same original audio files, and the challenge was to come up with a bunch of new sounds from that original source.
I figured I’d document some of what went into my contribution. I’ve been trying to do more “coffee”/warm-up jams this year as a way to ensure I’m creating something non-work-related each day, and just as a means to stay fresh and try new things.
Below is a short example of one of the source files we were provided. These were just EMF recordings of various consumer electronics.
The main thing that stood out to me right away was that these were long, extended performances with a lot of similar frequency content. You can get a lot of really cool source from EMF recordings, but personally I find that, without recognizable sounds and without repetitive “performances”, the sounds lose context after a while.
So my first thought was to try to create something new from these sounds rather than just dump processing on them right away, and to see if I could “perform” the sounds in some way to create more interesting envelopes and get some expression and human life into them.
For this, I created a simple Max patch. It uses a simple XY pad to blend between 4 regions; these regions are mapped to volume controls for 4 different banks of sounds, with randomization happening within each bank. So as I move my controller around, I’m mixing and blending the sounds together in different combinations.
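If you’re curious what that mapping looks like outside of Max, here’s a minimal Python sketch of the idea. The bank contents, file names, and the square-root (roughly equal-power) gain curve are my own assumptions for illustration, not the actual patch:

```python
import random

def xy_to_gains(x, y):
    """Map an XY pad position (both 0..1) to gains for four corner banks.

    Corner layout assumed here (not from the original patch):
      bank 0 = bottom-left, bank 1 = bottom-right,
      bank 2 = top-left,    bank 3 = top-right.
    Bilinear weights with a square-root curve, so sweeping across the
    pad doesn't dip as much in overall level.
    """
    weights = [
        (1 - x) * (1 - y),  # bottom-left
        x * (1 - y),        # bottom-right
        (1 - x) * y,        # top-left
        x * y,              # top-right
    ]
    return [w ** 0.5 for w in weights]

def pick_from_bank(bank):
    """Stand-in for the per-bank randomization: pick one file at random."""
    return random.choice(bank)

# Hypothetical banks of EMF source recordings.
banks = [
    ["emf_router_01.wav", "emf_router_02.wav"],
    ["emf_monitor_01.wav", "emf_monitor_02.wav"],
    ["emf_phone_01.wav", "emf_phone_02.wav"],
    ["emf_hdd_01.wav", "emf_hdd_02.wav"],
]

# Example controller position: leaning toward the top-right corner.
gains = xy_to_gains(0.8, 0.7)
for bank, gain in zip(banks, gains):
    print(f"{pick_from_bank(bank)} at gain {gain:.2f}")
```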
I added some processing at the end (four layers of EMF recordings were still building up a bit too much).
After that, I was getting sounds that were already very different from the original files, and I think they benefitted from the added layer of human expression and performance.
Below is an example of what these gestures and performances were giving me out of Max.
After that I had big, long sections of unique audio with more manageable envelopes. Bringing this over into Reaper, I did an aggressive dynamic split, spaced these randomly enveloped sections apart, and then started processing further just to experiment and have some fun.
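Dynamic split is a stock Reaper action, but the “space the resulting items apart” step is easy to script. Here’s a rough ReaScript (Python) sketch of that step, assuming the split items are already selected and in project order; the 1.5 s gap is arbitrary, not what I actually used:

```python
# ReaScript (Python) sketch: space the currently selected items apart by a
# fixed gap. Intended to run inside REAPER's ReaScript environment, where
# the RPR_* API functions are provided automatically.

GAP_SECONDS = 1.5

def space_selected_items(gap):
    count = RPR_CountSelectedMediaItems(0)
    if count == 0:
        return
    # Start from the position of the first selected item.
    cursor = RPR_GetMediaItemInfo_Value(RPR_GetSelectedMediaItem(0, 0), "D_POSITION")
    for i in range(count):
        item = RPR_GetSelectedMediaItem(0, i)
        length = RPR_GetMediaItemInfo_Value(item, "D_LENGTH")
        RPR_SetMediaItemInfo_Value(item, "D_POSITION", cursor)
        cursor += length + gap

RPR_Undo_BeginBlock()
space_selected_items(GAP_SECONDS)
RPR_UpdateArrange()
RPR_Undo_EndBlock("Space selected items apart", -1)
```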
Notable things here:
Eventide Instant Flanger has fantastic feedback
RX after OTT to try and reduce the madness
NVK Doppler is doing great things (and making all that automation super easy)
So the result after that was… fine. Some turned out kinda cool, others were pretty forgettable.
It was definitely a case of quantity over quality. Thanks to the heavy-handed splitting of audio files and workflow tools (NVK), I made about 500-600 variations that were roughly similar but, because of the random source, all a bit different.
Each time I felt I had stumbled on an interesting effect chain or settings, I’d print big sections of this source through those settings, so I ended up with about 4-5 different versions of processing. But I wanted more modulation, more interesting envelopes, and more layering. I figured Tonsturm’s Whoosh would be a good starting point for this.
I exported the large printed sections and loaded them into Soundminer. Thanks to the Radium update, you can auto-detect regions within files and then drag and drop them into other tools. So rather than needing to dynamic split again, export, and then drag the files into Tonsturm, I could just click and drag from Soundminer straight into Tonsturm, with no need to export a pile of separate variation files first.
I’ve been really enjoying Output’s Portal plugin, so while Whoosh was creating further variations and randomizing the source samples, I was playing Portal and adding more variation on top. I was also using Reaper’s audio modulation to drive the XY macros.
Honestly, I could’ve let this roll for days; there were a lot of great options coming out at this point. The hardest part was perspective and sample selection.
I created about 17 distinct sound types from this processing, and then ended up choosing the 7 best ones to submit.
After this I wanted to try loading these into Serum to see if they made interesting wavetables or noise samples. The short answer is: yes, they did.
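Serum builds wavetables out of fixed-length 2048-sample frames, so the basic prep is just slicing a processed render into frames of that size before importing. Here’s a hedged Python sketch of that idea using numpy and soundfile (my own choice of tools, and the file names are placeholders, not part of my actual session):

```python
# Minimal sketch (my own approach, not from the post): slice a processed
# render into 2048-sample frames so Serum can import it as a wavetable
# with a fixed frame size. Requires `numpy` and `soundfile`.
import numpy as np
import soundfile as sf

FRAME_SIZE = 2048   # Serum's wavetable frame length
MAX_FRAMES = 256    # Serum wavetables hold up to 256 frames

def audio_to_wavetable(in_path, out_path):
    audio, sr = sf.read(in_path)
    if audio.ndim > 1:              # fold stereo down to mono
        audio = audio.mean(axis=1)

    n_frames = min(len(audio) // FRAME_SIZE, MAX_FRAMES)
    frames = audio[: n_frames * FRAME_SIZE].reshape(n_frames, FRAME_SIZE)

    # Normalize each frame so quieter sections still yield usable cycles.
    peaks = np.abs(frames).max(axis=1, keepdims=True)
    frames = frames / np.maximum(peaks, 1e-9)

    sf.write(out_path, frames.reshape(-1), sr, subtype="PCM_16")

audio_to_wavetable("processed_emf_stinger.wav", "emf_wavetable.wav")
```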
This was obviously another rabbit hole that offered a bunch more options, so I ended up staying in the same vein and went with more aggressive, growl-type stingers.
The final sounds are quite abstract as there was no context or direction, but it was a fun challenge to just focus purely on design and going where the sounds felt they wanted to go.
To be honest, I’m not sure how much value a crowdsourced jam like this offers. The intended appeal is that by taking the time to create some variations yourself, you get everyone else’s variations back, which is more rewarding. But I could see this creating a lot of random bloat in a sound library, plus potential phasing risks depending on the initial source given.
This also puts extra emphasis on effects processors (plugins, eurorack, etc.), which are only one portion of the sound design puzzle. It’s a great excuse to try out plugins that may have been collecting dust, but I worry that for junior folks it might steer them towards plugins and away from sound libraries, which could be money well spent when starting out.
However, learning how other people approached their sound design offers a ton of appeal and value (IMO), so that’s why I wanted to go through this process and document my effort.
If you’ve made it this far, thanks for following along. If you’re interested in the high quality versions of these sounds for yourself, just shoot me a message.
Final result: 15 files containing 203 variations, totaling 344 MB.