At this year’s Game Developers Conference, voice director Michael Csurics presented a terrific talk called “VO Session Live: Ensemble Edition.” By soliciting volunteer voice actors from the audience, Csurics staged a live voice-over recording session that included both solo performers and multiple actors running dialogue in a traditionally theatrical way. The demonstration revealed the ins and outs of recording voice dialogue, from both an artistic and a technical standpoint. One portion of Csurics’ talk that I found particularly interesting was his exploration of the process of recording multiple takes of voice dialogue, and I thought his method might have some bearing on the process of recording multiple takes of a musician’s performance. In this blog, I’ll be going over the basics of his methodology, as he described it during his talk, and I’ll also be sharing some of my thoughts regarding how his working method intersects with my experiences recording session performances with live musicians.
For typical voice-over recording sessions, Csurics begins by having his voice performers record a short piece of dialogue at least two or three times. This process “gets them comfortable, gets them past the cold read and into an actual read.” During these voice sessions he will be assisted by an audio engineer running the Pro Tools session. Csurics will usually pause after the first two or three readings, sometimes giving direction and asking for additional takes. Once satisfied, he will then tell the audio engineer which take he liked, and the audio engineer will “pull down” this take. In other words, the engineer will select the portion of the recorded waveform that represents the final take and copy/paste it into a blank audio track directly below the track that is currently recording the vocal session. In this way, Csurics is able to save post-production time by making his final selections on-the-fly.
When I’m recording live musicians (either my own performances or those of others), my own workflow resembles Csurics’ process in some respects. I’ll have the backing music mix loaded into its own audio tracks in Pro Tools, and then I’ll set up a collection of blank audio tracks below the rest of the music mix. The number of blank tracks will largely depend on how many takes I expect will be required. For especially difficult parts in which a large number of takes are anticipated, I may set up anywhere from four to fifteen blank tracks. The musician (or I, if I’m the performer) will record a short section of the music, repeating that section for as many takes as seem warranted. Each take will be recorded into one of the separate audio tracks that I’d set up prior to the recording session. Once a take is recorded, I’ll mute that audio track, record-enable the next one, and then record the next take. Once complete, I’ll have a fairly large assortment of takes that I can audition during post-production in order to edit together a version that will be used in the final recording. During post, I employ the same “pull down” method that Csurics described during his talk – i.e., copying and pasting the best version of each performance into a blank audio track at the bottom of my Pro Tools edit window.
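For readers who think in code, the take-cycling and “pull down” workflow above can be sketched as a small simulation. This is purely illustrative: the `Track` and `Session` classes are invented for this example and have nothing to do with any real Pro Tools API; they just model the sequence of muting the previous track, arming the next one, and copying a chosen take into a comp track below.

```python
# Hypothetical sketch of the multi-take workflow described above.
# Each take lands in its own pre-created blank track; the previous
# take's track is muted before the next take is recorded; the chosen
# take is "pulled down" into a comp track at the bottom.
from dataclasses import dataclass, field

@dataclass
class Track:
    name: str
    muted: bool = False
    record_enabled: bool = False
    takes: list = field(default_factory=list)  # recorded phrases

class Session:
    def __init__(self, num_blank_tracks):
        # Pre-create blank tracks, as described for difficult parts.
        self.tracks = [Track(f"Take {i + 1}") for i in range(num_blank_tracks)]
        self.comp = Track("Comp")  # the "pull down" destination

    def record_take(self, index, phrase):
        # Mute the previous take's track, then arm and record the next.
        if index > 0:
            self.tracks[index - 1].muted = True
        track = self.tracks[index]
        track.record_enabled = True
        track.takes.append(phrase)
        track.record_enabled = False

    def pull_down(self, index, phrase_index=0):
        # Copy/paste the chosen recording into the comp track below.
        self.comp.takes.append(self.tracks[index].takes[phrase_index])

session = Session(num_blank_tracks=4)
for i, phrase in enumerate(["take A", "take B", "take C"]):
    session.record_take(i, phrase)
session.pull_down(1)  # take B judged best
print(session.comp.takes)                    # -> ['take B']
print([t.muted for t in session.tracks])     # -> [True, True, False, False]
```

The point of the model is simply that the selection happens by copying into a dedicated destination track, so the original takes remain untouched for later auditioning.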
I admire Csurics’ on-the-fly method of selecting the best take, but personally I’m only comfortable making an instant decision if it pertains to simply discarding a take completely and trying again. In that case, I’ll delete the recording from the edit window, ensuring that I avoid any possibility of confusion later. Using this method for recording live instrumental performances gives me a large amount of flexibility in post, allowing me to assemble the best possible musical performance. I can listen critically to every phrase in every take, selecting only the very best execution of every musical element for inclusion in the final recording. I talk more about some of the technical aspects of my workflow in my book, A Composer’s Guide to Game Music.
There are many more details about voice dialogue tracking and editing in Csurics’ talk, which is available in the GDC Vault. If you don’t have access to the Vault, you can hear many of the same ideas in this great podcast interview with Csurics, recorded by the Bleeps ‘n’ Bloops podcast during last year’s GameSoundCon. The interview includes some interesting details about workflow, track naming, post editing and processing. I’ve embedded the SoundCloud recording of that podcast interview below. The section regarding voice-over session logistics starts at 11:40:
There’s also an earlier interview with Csurics posted on the Game Audio Network Guild site, which reflects on some of Csurics’ experiences as the Dialogue Supervisor for 2K Marin:
Interview with 2K Marin’s Dialogue Supervisor Michael Csurics
Since leaving 2K Games in 2012, Michael Csurics has worked as an independent contractor and now runs a game audio production services company called The Brightskull Entertainment Group (he can be reached on Twitter at @mcsurics). If you’d like to see and hear the great work going on at Brightskull, here’s a video demonstration from one of their latest projects, The Vanishing of Ethan Carter: