I was pleased to give a talk about composing music for games at the 2016 Game Developers Conference. GDC took place this past March in San Francisco – it was an honor to be a part of the audio track again this year, which offered a wealth of excellent educational sessions for game audio practitioners. It was so much fun to see the other talks and learn about what’s new and exciting in the field of game audio! In this blog, I want to share some info that I found particularly interesting from two talks pertaining to the audio production side of game development: composer Laura Karpman’s talk, “Composing Virtually, Sounding Real,” and audio director Garry Taylor’s talk, “Audio Mastering for Interactive Entertainment.” Both sessions offered very good info for video game composers who may be looking to improve the quality of their recordings. Along the way, I’ll also share a few of my own personal viewpoints on these music production topics, and I’ll include some examples from one of my own projects, the Ultimate Trailers album for West One Music, to illustrate the ideas we’ll be discussing. So let’s get started!
When I talked about some basic techniques for achieving a more organic sound with virtual instruments, I didn’t mention any mixing considerations. Since this is a complex subject that goes far beyond the scope of a single blog, I’ll probably be returning to it several times… but let’s start with a general overview, and some thoughts about the orchestral recording environment. Mixing for a virtual orchestra can be a highly contentious subject, with controversy surrounding nearly every topic of conversation, from reverb to volume levels to panning. It’s good to remember, though, that live orchestral recordings span a pretty broad range of recording environments and mixing approaches, which means that there can’t (and shouldn’t) be just one “correct” approach when working with virtual orchestras.
Some live orchestral recordings take the studio approach, in that they are fairly dry and close-mic’d in a small recording environment that’s acoustically treated to minimize room reflections, leaving only the direct sound. Other orchestral recording sessions are clearly conducted in a very large space such as a concert hall, which gives a sensation of distance along with the complex reverberation, reflections, and tonal coloration caused by the unique properties of the space. Both the studio and the concert hall environments for orchestral recordings are entirely legitimate, and each affords the composer a set of advantages and disadvantages. The concert hall environment provides a richness of tone and texture from the acoustic properties of the room, but instruments in this space can sound distant, and small performance details may not come through clearly. The studio approach allows the instruments of the orchestra to be captured with greater sonic detail and intimacy, but the dryness of the space may have a detrimental effect on the ability of the orchestral sections to blend properly, requiring artificial reverb to be applied during the mixing process.
What does this mean for virtual orchestras? Well, before we think about the recording environment that we’d like to simulate, we have to evaluate our orchestral sample libraries in terms of the environments in which they were originally recorded. Are they wet or dry? Some libraries are reverberant to the point of sounding dripping wet. Others are dry as a bone. This can make it difficult to use these libraries in tandem, but I usually don’t let this deter me. We can apply reverb to the dry instrumental samples so that they match the acoustic properties of the wet ones. I find that a process of trial-and-error can yield satisfying results here. However, there’s no way to completely remove the reverb from an orchestral library that was recorded wet… so what if our hearts were set on that intimate studio sound? Well, there are ways to address this issue. For instance, we can assume that our orchestral recording was captured in a large space, but that many microphones were positioned in tight proximity to the important players so that the subtle nuances of their performances would come through. When we layer our dry instruments with our wet ones, we can send some of the dry signal out for reverb processing (to account for the larger space) and mix those echoes and reflections with the reverb tail found naturally in the wet recordings. This will allow the dry instrument groups to sit in the larger space, but still feel intimate.
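To make the send idea a little more concrete, here’s a minimal DSP sketch in Python (using numpy and scipy). Everything in it is illustrative rather than a real production chain: the impulse response is just synthetic decaying noise standing in for a real hall IR, and the send level is an arbitrary starting point you’d tune by ear.

```python
import numpy as np
from scipy.signal import fftconvolve

SR = 48000  # sample rate in Hz

def synthetic_hall_ir(seconds=1.5, sr=SR, seed=0):
    """Exponentially decaying noise, standing in for a real hall impulse response."""
    rng = np.random.default_rng(seed)
    n = int(seconds * sr)
    decay = np.exp(-5.0 * np.arange(n) / n)
    return rng.standard_normal(n) * decay

def place_in_hall(dry, ir, send=0.3):
    """Blend a close-mic'd (dry) signal with a reverb send so it sits in the hall."""
    wet = fftconvolve(dry, ir)[: len(dry)]  # reverb return (tail truncated for brevity)
    wet /= np.max(np.abs(wet)) + 1e-12      # normalize the return before blending
    return dry + send * wet                 # dry signal plus the reverb send

# A decaying 440 Hz tone stands in for a dry sampled phrase.
t = np.arange(SR) / SR
dry_phrase = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)
blended = place_in_hall(dry_phrase, synthetic_hall_ir(), send=0.3)
```

In a DAW, the equivalent move is routing the dry instruments to a shared reverb bus whose return is blended with the natural tails already baked into the wet libraries.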
Now, what do we do about the orchestral sections that still feel purely wet? They’ll likely sound quite distant and washy. We can counteract this by layering dry instrumental soloists into these sections, sending their signal out for reverb processing as we did before. This can work very well for section leaders such as the first violin. When I’m applying this technique, I’ll sometimes evaluate the number of players in wet orchestral sections, and if it would be realistically feasible, I will boost this number by adding a dry chamber section. For instance, I might add a dry chamber violin section consisting of 4 players to a very wet first violin orchestral section consisting of 11 players. This will give me a resulting first violin section with 15 players, which is large but not unreasonable. I’ll add some reverb to the dry instruments so that they’ll give the impression that they exist in the same space as the others, but that they are more closely mic’d.
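As a back-of-the-envelope illustration of that balance, here’s a tiny sketch. The equal-per-player-loudness assumption is mine (in practice this balance is done entirely by ear), and the scaled dry group would also be routed through the same shared reverb as the wet section, which is omitted here for brevity.

```python
import numpy as np

def layer_dry_chamber(wet_section, dry_chamber, wet_players=11, dry_players=4):
    """Layer a dry 4-player chamber group into a wet 11-player section.

    The dry group's gain is scaled by player count (a rough assumption of
    equal per-player loudness) so the added desks don't overpower the
    players already present in the hall recording."""
    dry_gain = dry_players / wet_players
    return wet_section + dry_gain * dry_chamber
```

With the defaults above, the dry chamber group comes in at roughly 4/11 of the wet section’s level, which keeps the combined 15-player section from suddenly doubling in loudness.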
These are just a few ideas on how to reconcile wet and dry instrumental recordings. As always, experimentation and close listening will be our best guides in determining whether these techniques are achieving the desired results. In the future, I’ll talk a bit more about other mixing concerns, such as panning, level balancing, and mastering techniques. Hope you enjoyed this blog entry, and please share your own methods and questions in the comments below!