Music Game Plan: Tactics for the Video Game Composer (Part Three)
Welcome to the third installment of my four-part article series on the core principles of music interactivity, featuring video demonstrations and supplementary materials that make these abstract concepts more concrete. In Part One of this series, we took a look at a simple example demonstrating the Horizontal Re-Sequencing model of musical interactivity, as it was used in the music I composed for the Speed Racer Videogame from Warner Bros. Interactive. Part Two of this series looked at the more complex Horizontal Re-Sequencing music system of the Spore Hero game from Electronic Arts. So now let’s move on to another major music interactivity model used by video game composers – Vertical Layering.
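Before we get to the video, here’s a bare-bones Python sketch of the general idea behind vertical layering. The layer names and intensity thresholds below are invented purely for illustration, and this isn’t the music system from any particular game: every stem plays in sync, and the game simply raises or lowers each layer’s volume as the gameplay intensity changes.

```python
# Minimal sketch of vertical layering: all stems play in sync, and the
# game raises or lowers each layer's volume based on a gameplay
# "intensity" value. Layer names and thresholds are hypothetical.

LAYERS = {
    "percussion": 0.0,   # audible at any intensity
    "strings":    0.4,   # fades in at moderate intensity
    "brass":      0.7,   # fades in at high intensity
}

def layer_volumes(intensity: float, fade_range: float = 0.2) -> dict:
    """Return a 0.0-1.0 volume for each stem, ramping each layer in
    smoothly once the intensity crosses its threshold."""
    volumes = {}
    for name, threshold in LAYERS.items():
        # How far past this layer's threshold are we, as a fraction
        # of the fade range? Clamp to the 0.0-1.0 volume range.
        amount = (intensity - threshold) / fade_range
        volumes[name] = max(0.0, min(1.0, amount))
    return volumes

# A quiet exploration moment: only the percussion layer is audible.
print(layer_volumes(0.3))
# A combat spike: every layer is at full volume.
print(layer_volumes(0.9))
```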
Music Game Plan: Tactics for the Video Game Composer (Part Two)
Welcome back to my four-part article series presenting videos and helpful references to aid aspiring game music composers in understanding how interactive music works. In Part One of this series, we took a look at a simple example demonstrating the Horizontal Re-Sequencing model of musical interactivity, as it was used in the music I composed for the Speed Racer Videogame from Warner Bros. Interactive. Now let’s turn our attention to a more complex example of horizontal re-sequencing as demonstrated by the interactive music of the Spore Hero game from Electronic Arts.
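As a refresher on the general concept before the video, here’s a tiny Python sketch of horizontal re-sequencing. This is only an illustration with made-up segment names, not the actual Spore Hero implementation: the score is composed as short, interchangeable segments, and the game decides at runtime which segment plays next, switching at musical boundaries so the transitions feel intentional.

```python
import random

# Illustrative sketch of horizontal re-sequencing (segment names are
# hypothetical): the score is broken into short, interchangeable
# segments, and the next segment is chosen at runtime based on a
# simple game state, so playback order changes from session to session.

SEGMENTS = {
    "explore": ["explore_a", "explore_b", "explore_c"],
    "danger":  ["danger_a", "danger_b"],
    "victory": ["victory_sting"],
}

def next_segment(game_state: str, just_played: str | None = None) -> str:
    """Pick the next pre-composed segment for the current game state,
    avoiding an immediate repeat where possible."""
    choices = [s for s in SEGMENTS[game_state] if s != just_played]
    return random.choice(choices or SEGMENTS[game_state])

# In a real music system, each chosen segment would be queued to start
# at the next musical boundary (a bar line or phrase ending) so the
# switch sounds like part of the composition rather than an edit.
playlist = []
current = None
for state in ["explore", "explore", "danger", "victory"]:
    current = next_segment(state, current)
    playlist.append(current)
print(playlist)
```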
Music Game Plan: Tactics for the Video Game Composer (Part One)
Interactive music is always a hot topic in the game audio community, and newcomers to game music composition can easily become confused by the structure and process of creating non-linear music for games. To address this issue, I produced four videos that introduce aspiring video game composers to some of the most popular tactics and procedures commonly used by game audio experts in the structuring of musical interactivity for games. Over the next four articles, I’ll be sharing these videos with you, and I’ll also be including some supplemental information and accompanying musical examples for easy reference. Hopefully these videos can answer some of the top questions about interactive music composition. Music interactivity can be awesome, but it can also seem very abstract and mysterious when we’re first learning about it. Let’s work together to make the process feel a bit more concrete and understandable!
Video Game Music Production Tips from GDC 2016
I was pleased to give a talk about composing music for games at the 2016 Game Developers Conference (pictured left). GDC took place this past March in San Francisco – it was an honor to be a part of the audio track again this year, which offered a wealth of awesome educational sessions for game audio practitioners. So much fun to see the other talks and learn about what’s new and exciting in the field of game audio! In this blog, I want to share some info that I thought was really interesting from two talks that pertained to the audio production side of game development: composer Laura Karpman’s talk about “Composing Virtually, Sounding Real” and audio director Garry Taylor’s talk on “Audio Mastering for Interactive Entertainment.” Both sessions had some very good info for video game composers who may be looking to improve the quality of their recordings. Along the way, I’ll also be sharing a few of my own personal viewpoints on these music production topics, and I’ll include some examples from one of my own projects, the Ultimate Trailers album for West One Music, to illustrate ideas that we’ll be discussing. So let’s get started!
VR for the Game Music Composer – Artistry and Workflow
Since the game audio community is abuzz with excitement about the impending arrival of virtual reality systems, I’ve been periodically writing blogs that gather top news about developments in the field of audio and music for VR. In this blog we’ll be looking at some resources that discuss artistry and workflow in audio for VR:
- We’ll explore an interesting post-mortem article about music for the VR game Land’s End.
- We’ll be taking a closer look at the 3DCeption Spatial Workstation.
- We’ll be checking out the Oculus Spatializer Plugin for DAWs.
Designing Sound for Virtual Reality
In these early days of VR, postmortem articles about the highs and lows of development on virtual reality projects are especially welcome. Freelance audio producer and composer Todd Baker has written an especially interesting article about the audio development for the Land’s End video game, designed for the Samsung Gear VR system.

Here, you see me trying out the Samsung Gear VR, as it was demonstrated on the show floor at the Audio Engineering Society Convention in 2015.
Todd Baker is best known for his audio design on the whimsical Tearaway games and for his work as a member of the music composition team for the awesome LittleBigPlanet series. His work on Land’s End for Ustwo Games affords him an insightful perspective on audio for virtual reality. “In VR, people are more attuned to what sounds and feels right in the environment, and therefore can be equally distracted by what doesn’t,” writes Baker. In an effort to avoid distraction, Baker opted for subtlety in the game’s musical score. Each cue began with a gentle fade-in, attracting little notice at first so as to blend naturally with the game’s overall soundscape.
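To make that fade-in idea a bit more concrete, here’s a small Python sketch (using numpy) that applies a slow-starting volume ramp to the opening seconds of a cue so it emerges gradually from the surrounding ambience. The fade length and curve shape are arbitrary choices of mine, not values taken from Land’s End.

```python
import numpy as np

# Apply a gentle fade-in to the start of a cue. The four-second length
# and the squared ramp are arbitrary illustration values, not settings
# from Land's End.

def apply_fade_in(samples: np.ndarray, sample_rate: int,
                  fade_seconds: float = 4.0) -> np.ndarray:
    """Return a copy of the cue with a slow-starting fade at its opening."""
    fade_len = min(len(samples), int(fade_seconds * sample_rate))
    # A squared ramp rises slowly at first, which keeps the entrance
    # less noticeable than a straight linear fade would be.
    curve = np.linspace(0.0, 1.0, fade_len) ** 2
    out = samples.astype(np.float64)   # astype returns a new array
    out[:fade_len] *= curve
    return out

# Example: a 4-second fade on a 10-second mono cue at 48 kHz.
sample_rate = 48_000
cue = np.ones(sample_rate * 10)   # placeholder audio at full scale
faded = apply_fade_in(cue, sample_rate)
# Silent at the start, about a quarter of full level two seconds in,
# and at full level once the fade is over.
print(faded[0], faded[sample_rate * 2], faded[-1])
```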
AES Convention: What’s New on the Show Floor
This past weekend, the Audio Engineering Society held its annual North American convention at the Jacob Javits Center in New York City. I was participating as an AES speaker, but I also knew that AES includes an exhibit floor packed with the best professional audio equipment from all the top manufacturers, and I didn’t want to miss that! So, between my game audio panel presentation on Saturday and the Sunday tutorial talk I gave on the music system of the LittleBigPlanet franchise, I had the pleasure of searching the show floor for what’s new and interesting in audio tech. Here are some of the attractions that seemed most interesting for game audio folks:
One of the most interesting technologies on display at AES this year was Fraunhofer Cingo – an audio encoding technology developed specifically to enable mobile devices to deliver immersive sound for movies, games and virtual reality. Cingo was developed by the institute responsible for the MP3 audio coding format. According to Fraunhofer, the Cingo technology “supports rendering of 3D audio content with formats that add a height dimension to the sound image, such as 9.1, 11.1 or other channel combinations.” This enables mobile devices to emulate “the enveloping sound of movies, games or any other virtual environment.” While I was there, Fraunhofer rep Jennifer Utley gave me the chance to demo the Cingo technology using the Gear VR headset, which turns Samsung mobile phones into portable virtual reality systems. The sound generated by Cingo did have an awesome sense of spatial depth that increased immersion, although I didn’t personally notice the height dimension in the spatial positioning. Nevertheless, it was pretty nifty!
Workflow in Multiple Takes (for the Game Music Composer)
At this year’s Game Developers Conference, voice director Michael Csurics presented a terrific talk called “VO Session Live: Ensemble Edition.” By soliciting the audience for volunteer voice actors, Csurics staged a live voice-over recording session that included both solo performers and multiple actors running dialogue together in a traditional theatrical manner. The demonstration served to reveal the ins and outs of recording voice dialogue, from both an artistic and a technical standpoint. One portion of Csurics’ talk that I found particularly interesting was his exploration of the process of recording multiple takes of voice dialogue, and I thought his method might have some bearing on recording multiple takes of a musician’s performance. In this blog, I’ll be going over the basics of his methodology, as he described it during his talk, and I’ll also be sharing some of my thoughts on how his working method intersects with my experiences recording session performances with live musicians.
For typical voice-over recording sessions, Csurics begins by having his voice performers record a short piece of dialogue at least two or three times. This process “gets them comfortable, gets them past the cold read and into an actual read.” During these voice sessions he will be assisted by an audio engineer running the Pro Tools session. Csurics will usually pause after the first two or three readings, sometimes giving direction and asking for additional takes. Once satisfied, he will then tell the audio engineer which take he liked, and the audio engineer will “pull down” this take. In other words, the engineer will select the portion of the recorded waveform that represents the final take and copy/paste it into a blank audio track directly below the track that is currently recording the vocal session. In this way, Csurics is able to save post-production time by making his final selections on-the-fly.
When I’m recording live musicians (either my own performances or those of others), my own workflow resembles Csurics’ process in some respects. I’ll have the backing music mix loaded into its own audio tracks in Pro Tools, and then I’ll set up a collection of blank audio tracks below the rest of the music mix. The number of blank tracks will largely depend on how many takes I expect will be required. For especially difficult parts in which a large number of takes are anticipated, I may set up anywhere from four to fifteen blank tracks. The musician (or I, if I’m the one performing) will record a short section of the music, repeating that section for as many takes as seem warranted. Each take will be recorded into one of the separate audio tracks that I’d set up prior to the recording session. Once recorded, I’ll mute that audio track, record-enable the next one and then record the next take. When the session is complete, I’ll have a fairly large assortment of takes that I can audition during post-production in order to edit together a version that will be used in the final recording. During post, I employ the same “pull down” method that Csurics described during his talk – i.e. copy/pasting the best version of each performance into a blank audio track at the bottom of my Pro Tools edit window.
I admire Csurics’ on-the-fly method of selecting the best take, but personally I’m only comfortable making an instant decision when it means discarding a take entirely and trying again. In that case, I’ll delete the recording from the edit window, ensuring that I avoid any possibility of confusion later. Using this method for recording live instrumental performances gives me a great deal of flexibility in post, allowing me to assemble the best possible musical performance. I can listen critically to every phrase in every take, selecting only the very best execution of every musical element for inclusion in the final recording. I talk more about some of the technical aspects of my workflow in my book, A Composer’s Guide to Game Music.
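For anyone who wants to see the bookkeeping spelled out, here’s a toy Python model of the multi-take workflow described above. This isn’t a Pro Tools API, and the phrase boundaries and take numbers are invented; it simply shows how the “pull down” choices from each take can be assembled into a single comp in timeline order.

```python
from dataclasses import dataclass

# Toy model of the multi-take workflow: each take is recorded to its
# own track, and the best take of each phrase is "pulled down" (copied)
# to a comp track. Phrase boundaries and take numbers are hypothetical.

@dataclass
class Region:
    take: int        # which take this audio came from
    start_bar: int   # where the phrase begins
    end_bar: int     # where the phrase ends

def build_comp(selections: dict[tuple[int, int], int]) -> list[Region]:
    """Assemble a comp from {(start_bar, end_bar): best_take} choices,
    keeping the regions in timeline order."""
    comp = [Region(take, start, end) for (start, end), take in selections.items()]
    return sorted(comp, key=lambda region: region.start_bar)

# Hypothetical choices made after auditioning every phrase in every take:
best_takes = {
    (1, 8):   3,   # phrase 1: take 3 had the cleanest attack
    (9, 16):  7,   # phrase 2: take 7 had the best phrasing
    (17, 24): 2,   # phrase 3: take 2 nailed the ending
}

for region in build_comp(best_takes):
    print(f"bars {region.start_bar}-{region.end_bar}: use take {region.take}")
```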
There were lots more details regarding voice dialogue tracking and editing in Csurics’ talk, which is available in the GDC Vault. If you don’t have access to the Vault, you can hear a lot of those same ideas in this great podcast interview with Csurics, recorded by the Bleeps ‘n’ Bloops podcast during last year’s GameSoundCon. The interview includes some interesting details about workflow, track naming, post editing and processing. I’ve embedded the SoundCloud recording of that podcast interview below. The section regarding voice-over session logistics starts at 11:40:
There’s also an earlier interview with Csurics posted on the Game Audio Network Guild site, which reflects on some of Csurics’ experiences as the Dialogue Supervisor for 2K Marin:
Interview with 2K Marin’s Dialogue Supervisor Michael Csurics
Since leaving 2K Games in 2012, Michael Csurics has worked as an independent contractor and now runs a game audio production services company called The Brightskull Entertainment Group (he can be reached on Twitter at @mcsurics). If you’d like to see and hear the great work going on at Brightskull, here’s a video demonstration from one of their latest projects, The Vanishing of Ethan Carter:
A Composer’s Guide to Game Music – Vertical Layering, Part 2
Here’s part two of a four-part series of videos I produced as a supplement to my book, A Composer’s Guide to Game Music. This video demonstrates concepts that are explored in depth in my book, beginning on page 200. Expanding on Part One’s discussion of the Vertical Layering employed in The Maw video game, this video provides some visual illustration of the interactive music composition techniques that were implemented in the video game LittleBigPlanet 2: Toy Story.