Video game music systems at GDC 2017: tools and tips for composers

Photo of video game composer Winifred Phillips, working in her music production studio on the music of the SimAnimals video game.

By video game composer Winifred Phillips | Contact | Follow

Welcome back to this three-article series bringing together the ideas that were discussed in five different GDC 2017 audio talks about interactive music!  These five speakers explored discoveries they’d made while creating interactivity in the music of their own game projects.  We’re looking at these ideas side by side to broaden our viewpoint and gain a sense of the “bigger picture” when it comes to leading-edge thinking on music interactivity in games.  We’ve been looking at the five interactive music systems discussed in those GDC 2017 presentations.

In the first article, we examined the basic nature of these interactive systems.  In the second article, we contemplated why those systems were used, discussing the inherent pros and cons of each in turn.  So now, let’s get into the nitty-gritty of tools and tips for working with such interactive music systems.  If you haven’t read parts one and two of this series, please go do so now and then come back:

  1. Video game music systems at GDC 2017: what are composers using?
  2. Video game music systems at GDC 2017: pros and cons for composers

Ready?  Great!  Here we go!


Video game music systems at GDC 2017: pros and cons for composers

Video game composer Winifred Phillips, pictured in her music production studio working on the music of LittleBigPlanet 2 Cross Controller

By Winifred Phillips | Contact | Follow

Welcome back to our three-article series dedicated to collecting and exploring the ideas that were discussed in five different GDC 2017 audio talks about interactive music!  These five speakers shared ideas they’d developed in the process of creating interactivity in the music of their own game projects.  We’re looking at these ideas side by side to cultivate a sense of the “bigger picture” when it comes to leading-edge thinking on music interactivity in games.  In the first article, we looked at the basic nature of the five interactive music systems discussed in those GDC 2017 presentations.

If you haven’t read part one of this article series, please go do that now and come back.

Okay, so let’s now contemplate some simple but important questions: why were those systems used?  What was attractive about each interactive music strategy, and what were the challenges inherent in using those systems?


Video game music systems at GDC 2017: what are composers using?

By video game music composer Winifred Phillips | Contact | Follow

Video game composer Winifred Phillips, presenting at the Game Developers Conference 2017.

The 2017 Game Developers Conference could be described as a densely-packed deep-dive exploration of the state-of-the-art tools and methodologies used in modern game development.  This description held especially true for the game audio track, wherein top experts in the field offered a plethora of viewpoints and advice on the awesome technical and artistic challenges of creating great sound for games.  I’ve given GDC talks for the past three years now (see photo), and every year I’m amazed at the breadth and diversity of the problem-solving approaches discussed by my fellow GDC presenters.  Often I’ll emerge from the conference with the impression that we game audio folks are all “doing it our own way,” using widely divergent strategies and tools.

This year, I thought I’d write three articles to collect and explore the ideas that were discussed in five different GDC audio talks.  During their presentations, these five speakers all shared their thoughts on best practices and methods for instilling interactivity in modern game music.  By absorbing these ideas side by side, I thought we might gain a sense of the “bigger picture” when it comes to the current leading-edge thinking for music interactivity in games.  In the first article, we’ll look at the basic nature of these interactive systems.  We’ll devote the second article to the pros and cons of each system, and in the third article we’ll look at tools and tips shared by these music interactivity experts.  Along the way, I’ll also be sharing my thoughts on the subject, and we’ll take a look at musical examples from some of my own projects that demonstrate a few of the ideas explored in these GDC talks.

So, let’s begin with the most obvious question.  What kind of interactive music systems are game audio folks using lately?


VR Game Composer: Music Beyond the Virtual

Photo of video game music composer Winifred Phillips, from the article entitled "VR Game Composer: Music Beyond the Virtual."

Welcome to the third installment in our series on the fascinating possibilities created by virtual reality motion tracking, and how the immersive nature of VR may serve to inspire us as video game composers and afford us new and innovative tools for music creation.  As modern composers, we work with a lot of technological tools, as I can attest from the studio equipment that I rely on daily (pictured left).  Many of these tools communicate with each other by virtue of the Musical Instrument Digital Interface protocol, commonly known as MIDI – a technical standard that allows music devices and software to interact.

Image depicting VR apps from the article by Winifred Phillips, Game Music Composer.

In order for a VR music application to control and manipulate external devices, the software must be able to communicate by way of the MIDI protocol – and that’s an exciting development in the field of music creation in VR!
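
Part of what makes MIDI such a durable standard is its simplicity: a note-on message is just three bytes.  Here’s a minimal Python sketch – my own illustration, not tied to any of the VR apps we’ll be discussing – that builds those raw bytes:

    # A minimal sketch of the raw bytes in a MIDI note-on / note-off pair.
    # Status byte 0x90 = note-on on channel 1; 0x80 = note-off on channel 1.
    # Data bytes (note number, velocity) are 7-bit values, hence the & 0x7F.

    def note_on(note: int, velocity: int, channel: int = 0) -> bytes:
        """Build a three-byte MIDI note-on message."""
        return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

    def note_off(note: int, channel: int = 0) -> bytes:
        """Build a three-byte MIDI note-off message."""
        return bytes([0x80 | channel, note & 0x7F, 0])

    print(note_on(60, 100).hex())  # middle C, moderately loud -> '903c64'
    print(note_off(60).hex())      # release middle C          -> '803c00'

Because every device and application agrees on those few bytes, a VR instrument can drive a hardware synth or a DAW just as easily as a keyboard can.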

This series of articles focuses on what VR means for music composers and performers. In previous installments, we’ve had some fun exploring new ways to play air guitar and air drums, and we’ve looked at top VR applications that provide standalone virtual instruments and music creation tools.  Now we’ll be talking about the most potentially useful application of VR for video game music composers – the ability to control our existing music production tools from within a VR environment.

We’ll explore three applications that employ MIDI to connect music creation in VR to our existing music production tools.  But first, let’s take a look at another, much older gesture-controlled instrument that is in some ways quite reminiscent of these motion-tracking music applications for VR.


Video Game Music Production Tips from GDC 2016

Game Composer Winifred Phillips during her game music presentation at the Game Developers Conference 2016.

I was pleased to give a talk about composing music for games at the 2016 Game Developers Conference (pictured left).  GDC took place this past March in San Francisco – it was an honor to be a part of the audio track again this year, which offered a wealth of awesome educational sessions for game audio practitioners.  It was so much fun to see the other talks and learn about what’s new and exciting in the field of game audio!  In this blog, I want to share some info that I thought was really interesting from two talks that pertained to the audio production side of game development: composer Laura Karpman’s talk “Composing Virtually, Sounding Real” and audio director Garry Taylor’s talk on “Audio Mastering for Interactive Entertainment.”  Both sessions had some very good info for video game composers who may be looking to improve the quality of their recordings.  Along the way, I’ll also be sharing a few of my own personal viewpoints on these music production topics, and I’ll include some examples from one of my own projects, the Ultimate Trailers album for West One Music, to illustrate ideas that we’ll be discussing.  So let’s get started!


MIDI for the Game Music Composer: Wwise 2014.1


MIDI seems to be making a comeback.

At least, that was my impression a couple of months ago when I attended the audio track of the Game Developers Conference.  Setting a new record for attendance, GDC hosted over 24,000 game industry pros who flocked to San Francisco’s Moscone Center in March for a full week of presentations, tutorials, panels, awards shows, press conferences and a vibrant exposition floor filled with new tech and new ideas. As one of those 24,000 attendees, I enjoyed meeting up with lots of my fellow game audio folks, and I paid special attention to the presentations focusing on game audio. Amongst the tech talks and post-mortems, I noticed a lot of buzz about a subject that used to be labeled as very old-school: MIDI.

This was particularly emphasized by all the excitement surrounding the new MIDI capabilities in the Wwise middleware.  In October of 2014, Audiokinetic released the most recent version of Wwise (2014.1), which introduced a number of enhanced features, including “MIDI support for interactive music and virtual instruments (Sampler and Synth).”  Wwise now allows the incorporation of MIDI that triggers either a built-in sound library in Wwise or a user-created one.  Since I talk about the future of MIDI game music in my book, A Composer’s Guide to Game Music, and since this has become a subject of such avid interest in our community, I thought I’d do some research on this newest version of Wwise and post a few resources that could come in handy for any of us interested in embarking on a MIDI game music project using Wwise 2014.1.

The first is a video produced by Damian Kastbauer, technical audio lead at PopCap Games and the producer and host of the now-famous Game Audio Podcast series.  Released in April of 2014, the video includes a preview of the then-forthcoming MIDI and synthesizer features of the new Wwise middleware tool.  In it, Damian takes us through the newest version of the “Project Adventure” tutorial prepared by Audiokinetic, makers of Wwise, and in the process gives us a great, user-friendly introduction to the MIDI capabilities of Wwise.



The next videos were produced by Berrak Nil Boya, a composer and contributing editor at the Designing Sound website.  In these videos, Berrak takes us through some of the more advanced applications of the MIDI capabilities of Wwise, starting with the procedure for routing MIDI data directly into Wwise from more traditional MIDI sequencer software, such as that found in a Digital Audio Workstation (DAW) application.  This process allows a composer to work within more traditional music software and then route the MIDI output directly into Wwise.  Berrak demonstrates the process in a two-part video tutorial.

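If you’d like a taste of what this kind of MIDI routing involves, here’s a minimal Python sketch using the mido library to send notes to a MIDI output port; a virtual MIDI port could then carry them into any MIDI-aware application.  This is my own generic illustration (the port name is a placeholder), not the procedure from Berrak’s tutorials:

    # A minimal sketch of sending live MIDI from Python to an output port,
    # using the mido library (pip install mido python-rtmidi).  Any MIDI-aware
    # application listening on that port -- a DAW, or a tool like Wwise --
    # would receive the notes.  The port name below is a placeholder.

    import time
    import mido

    print(mido.get_output_names())  # first, list the MIDI ports your system exposes

    with mido.open_output("My Virtual MIDI Port") as port:  # hypothetical name
        for note in (60, 64, 67):  # a C major arpeggio
            port.send(mido.Message("note_on", note=note, velocity=96))
            time.sleep(0.25)
            port.send(mido.Message("note_off", note=note))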

Finally, Berrak Nil Boya has created a video tutorial on the integration of Wwise into Unity 5 using MIDI.  Her explanation of the preparation of a soundbank and the association of MIDI note events with game events is very interesting, and provides a nice, practical demonstration of the MIDI capability of Wwise.
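
At its core, associating MIDI note events with game events boils down to a lookup-and-dispatch pattern.  Here’s a conceptual Python sketch of that pattern – the note numbers and event names are invented for illustration, and this isn’t the actual Wwise or Unity API:

    # A conceptual sketch (not the actual Wwise or Unity API) of associating
    # MIDI note events with game events: each incoming note number is looked
    # up in a table and dispatched as a named game event.  All names invented.

    NOTE_TO_GAME_EVENT = {
        60: "Play_Stinger_Success",
        62: "Play_Stinger_Failure",
        64: "Trigger_Particle_Burst",
    }

    def post_game_event(name: str) -> None:
        # Stand-in for the engine call that would fire the real game event.
        print(f"game event posted: {name}")

    def on_midi_note(note: int, velocity: int) -> None:
        """Dispatch a game event when a mapped MIDI note arrives."""
        event = NOTE_TO_GAME_EVENT.get(note)
        if event is not None and velocity > 0:  # velocity 0 acts as a note-off
            post_game_event(event)

    on_midi_note(60, 100)  # -> game event posted: Play_Stinger_Success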

MIDI in Wwise for the Game Music Composer: Peggle Blast


In a previous blog post, we took a look at a few tutorial resources for the latest version of the Wwise audio middleware.  One of the newest innovations in the Wwise software package is a fairly robust MIDI system, which affords music creators and implementers the opportunity to avail themselves of the extensive adaptive possibilities of the MIDI format from within the Wwise application.  Last month, during the Game Developers Conference at San Francisco’s Moscone Center, some members of the PopCap audio development team presented a thorough, step-by-step explanation of the benefits of this MIDI capability for one of their latest projects, Peggle Blast.  Since my talk during the Audio Bootcamp at GDC focused on interactive music and MIDI (with an eye on the role of MIDI in both the history and future of game audio development), I thought that we could all benefit from a summation of some of the ideas discussed during the Peggle Blast talk, particularly as they relate to dynamic MIDI music in Wwise.  In this blog, I’ve tried to convey some of the most important takeaways from this GDC presentation.


“Peggle Blast: Big Concepts, Small Project” was presented on Thursday, March 5th by three members of the PopCap audio team: technical sound designer RJ Mattingly, audio lead Jaclyn Shumate, and senior audio director Guy Whitmore.  The presentation began with a quote from Igor Stravinsky:

The more constraints one imposes, the more one frees oneself, and the arbitrariness of the constraint serves only to maintain the precision of the execution.

This idea became a running theme throughout the presentation, as the three audio pros detailed the constraints under which they worked, including:

  1. A 5 MB memory limit for all audio assets
  2. Limited CPU resources
  3. A 2.5 MB memory allocation for the music elements

These constraints were a result of the mobile platforms (iOS and Android) for which Peggle Blast had been built.  For this reason, the music team focused their attention on sounds that could convey lots of emotion while maintaining a very small file size.  Early experimentation with tracks built around a music-box instrument led the team to realize that they still needed to replicate the musical experience of the full-fledged console versions of the game; a simple music-box score was too unsatisfying, particularly for players familiar with the music from the previous installments in the franchise.  With that in mind, the team concentrated on very short orchestral samples taken from the previous orchestral session recordings for Peggle 2.  Let’s take a look at a video from those orchestral sessions:

Using those orchestral session recordings, the audio team created custom sample banks that were tailored specifically to the needs of Peggle Blast, focusing on lots of very short instrument articulations and performance techniques including:

  1. pizzicato
  2. marcato
  3. staccato
  4. mallets

A few instruments (including a synth pad and some orchestral strings) were edited to loop so that extended note performances became possible, but the large majority of instruments remained brief, punctuated sounds that did not loop.  These short sounds were arranged into sample banks in which one or two note samples would be used per octave of instrument range, and note tracking would transpose the sample to fill in the rest of the octave.  The sample banks consisted of a single layer of sound, which meant that the instruments did not adjust their character depending on dynamics/velocity.  In order to make the samples more musically pleasing, the built-in digital signal processing capability of Wwise was employed by way of a real-time reverb bus that allowed these short sounds to have more extended and natural-sounding decay times.
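
To make the note-tracking idea concrete, here’s a minimal Python sketch – my own illustration, with hypothetical sample root notes – showing how a playback engine can pick the nearest recorded sample and compute the resampling rate that retunes it to the requested note:

    # A minimal sketch of note tracking: keep only one or two recorded samples
    # per octave, and retune the nearest one (via playback rate) to cover the
    # notes in between.  The sample root notes below are hypothetical.

    SAMPLE_ROOTS = [48, 54, 60, 66, 72]  # MIDI notes where real recordings exist

    def pick_sample(target_note: int) -> tuple[int, float]:
        """Return the nearest sample's root note and the playback rate that
        retunes it to the requested note (equal-temperament resampling)."""
        root = min(SAMPLE_ROOTS, key=lambda r: abs(r - target_note))
        semitones = target_note - root
        return root, 2.0 ** (semitones / 12.0)

    root, rate = pick_sample(62)   # D above middle C
    print(root, round(rate, 4))    # 60 1.1225 -> play the C sample 2 semitones up

The trade-off, of course, is that the further a sample is stretched from its root, the less natural it sounds – which is exactly why the team sprinkled one or two roots across every octave instead of stretching a single recording across the whole range.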


The audio team worked with a beta version of Wwise 2014 during development of Peggle Blast, which allowed them to implement their MIDI score into the Unity game engine.  Composer Guy Whitmore wrote the music as whimsically pleasant, non-melodic patterns structured into a series of chunks.  These chunks could be triggered by the adaptive system in Peggle Blast, wherein the music went through key changes (invariably following the circle of fifths) in reaction to the player’s progress.  To better see how this works, let’s watch an example of some gameplay from Peggle Blast:

As you can see, very little in the way of a foreground melody existed in this game.  In the place of a melody, foreground musical tones would be emitted when the Peggle ball hit pegs during its descent from the top of the screen.  These tones would follow a predetermined scale, and would choose which type of scale to trigger (major, natural minor, harmonic minor, or mixolydian) depending on the key in which the music was currently playing.  Information about the key was dropped into the music using markers that indicated where key changes took place, so that the Peggle ball would always trigger the correct type of scale at any given time.  The MIDI system did not have to store unique MIDI data for scales in every key change, but would instead calculate the key transpositions for each of the scale types, based on the current key of the music that was playing.
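
Here’s a small Python sketch of that kind of scale logic – my own simplified illustration, not PopCap’s actual implementation.  Each scale type is stored once as a set of intervals, the current key transposes it at runtime, and a circle-of-fifths key change is just a shift of seven semitones:

    # A simplified sketch of the scale logic described above.  Each scale type
    # is stored once as intervals; the current key transposes it at runtime,
    # so no per-key MIDI data is needed.  Note choices are illustrative.

    SCALE_INTERVALS = {
        "major":          [0, 2, 4, 5, 7, 9, 11],
        "natural_minor":  [0, 2, 3, 5, 7, 8, 10],
        "harmonic_minor": [0, 2, 3, 5, 7, 8, 11],
        "mixolydian":     [0, 2, 4, 5, 7, 9, 10],
    }

    def scale_notes(key_root: int, scale_type: str, base: int = 60) -> list[int]:
        """One octave of the chosen scale type, transposed into the current key."""
        return [base + key_root + step for step in SCALE_INTERVALS[scale_type]]

    def next_key(key_root: int) -> int:
        """Advance the key around the circle of fifths (up a perfect fifth)."""
        return (key_root + 7) % 12

    key = 0                                # C
    print(scale_notes(key, "major"))       # [60, 62, 64, 65, 67, 69, 71]
    key = next_key(key)                    # G
    print(scale_notes(key, "mixolydian"))  # [67, 69, 71, 72, 74, 76, 77]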

The presentation ended with an emphasis on the memory savings and flexibility afforded by MIDI, and the advantages that MIDI presents to game composers and audio teams.  It was a very interesting presentation!  If you have access to the GDC Vault, you can watch a video of the entire presentation online.  Otherwise, there are plenty of other resources on the music of Peggle Blast, and I’ve included a few below:

Inside the Music of Peggle Blast – An Interview with Audio Director Guy Whitmore

Peggle Blast!  Peg Hits and the Music System, by RJ Mattingly

Real-Time Synthesis for Sound Creation in Peggle Blast, by Jaclyn Shumate

PopCap’s Guy Whitmore Talks Musical Trials And Triumphs On Peggle Blast


GDC Audio Bootcamp


The Game Developers Conference is nearly here!  It’ll be a fantastic week of learning and inspiration from March 2nd to March 6th.  On Tuesday, March 3rd, from 10am to 6pm, the GDC Audio Track will be hosting the ever-popular GDC Audio Bootcamp, and I’m honored to be an Audio Bootcamp speaker this year!

This will be the 14th year for the GDC Audio Bootcamp, and I’m delighted to join the nine other speakers presenting this year:

  • Michael Csurics, Voice Director/Writer, The Brightskull Entertainment Group
  • Damian Kastbauer, Technical Audio Lead, PopCap Games
  • Mark Kilborn, Audio Director, Raven Software
  • Richard Ludlow, Audio Director, Hexany Audio
  • Peter McConnell, Composer, Little Big Note Music
  • Daniel Olsén, Audio, Independent
  • Winifred Phillips, Composer, Generations Productions LLC
  • Brian Schmidt, Founder, Brian Schmidt Studios
  • Scott Selfon, Principal Software Engineering Lead, Microsoft
  • Jay Weinland, Head of Audio, Bungie Studios

We’ll all be talking about creative, technical and logistical concerns as they pertain to game sound.  My talk will be from 11:15am to 12:15pm, and I’ll be focusing on “Advanced Composition Techniques for Adaptive Systems.”


Here’s a description of my Audio Bootcamp talk:

Interactive music technologies have swept across the video game industry, changing the way that game music is composed, recorded, and implemented.  Horizontal Resequencing and Vertical Layering have changed the way that music is integrated in the audio file format, while MIDI, MOD and generative models have changed the landscape of music data in games.  With all these changes, how do game composers, audio directors, sound designers and audio engineers address these unique challenges?  This talk will present an overview of today’s interactive music techniques, including numerous strategies for the deployment of successful interactive music structures in modern games.  Included in the talk: Vertical Layering in additive and interchange systems, how resequencing methods benefit from the use of digital markers, and how traditionally linear music can be integrated into an interactive music system.
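
To give one concrete (and much-simplified) illustration of the Vertical Layering concept mentioned in that description – my own sketch, not material from the talk itself – here’s how an additive layering system might map a game-side intensity value to per-layer volume levels:

    # A much-simplified sketch of additive Vertical Layering: all layers play
    # in sync, and a game-side intensity value in [0, 1] decides how many
    # layers are audible.  Layer names and thresholds are illustrative only.

    LAYERS = [
        ("percussion", 0.0),   # always audible
        ("bass",       0.25),
        ("harmony",    0.5),
        ("lead",       0.75),  # only at high intensity
    ]

    def layer_gains(intensity: float, fade_width: float = 0.15) -> dict[str, float]:
        """Linear fade-in for each layer as intensity crosses its threshold."""
        gains = {}
        for name, threshold in LAYERS:
            g = (intensity - threshold) / fade_width
            gains[name] = max(0.0, min(1.0, g))
        return gains

    print(layer_gains(0.6))
    # percussion and bass fully in, harmony fading in (~0.67), lead still silent

Because the layers are mixed rather than swapped, the music can respond to gameplay instantly without ever interrupting the musical flow.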

Right after my Bootcamp presentation, all the Audio Bootcamp presenters and attendees will head off to the ever-popular Lunchtime Surgeries.  No, the attendees won’t actually be able to crack open the minds of the presenters and see what’s going on in there, but as a metaphor, it does represent the core philosophy of this lively event.  The Lunchtime Surgeries offer attendees a chance to sit with the presenters at large roundtables and ask lots of questions.  It’s one of the most popular portions of the bootcamp, and I’ll be looking forward to it!


If you’ll be attending the GDC Audio Track, then I highly recommend the Audio Bootcamp on Tuesday, March 3rd.  Hope to see you there!

The Great MIDI Comeback?

I recently read a great article by Bernard Rodrigue of Audiokinetic in Develop Magazine, heralding the return of MIDI to the field of video game music.  It was a very well-written piece, filled with optimism about the potential of MIDI to add new musical capabilities to interactive video game scores, particularly in light of the memory and CPU resources of modern game consoles.

It also reminded me strongly of another article I read, from 2010.

Four years ago, Microsoft Sound Supervisor West Latta wrote for Shockwave-Sound.com that “we may see a sort of return to a hybrid approach to composing, using samples and some form of MIDI-like control data… the next Xbox or Playstation could, in fact, yield enough RAM and CPU power to load a robust (and highly compressed) orchestral sample library.”

So, it seems that the game audio sector has been anticipating a return to MIDI for a while now (I wrote at length about the history and possible future of MIDI in my book, A Composer’s Guide to Game Music).  The question is – has the current generation of video game consoles evolved to the point that a quality orchestral sample library could be loaded and driven by MIDI within a modern video game?  So far, I haven’t come across an answer to this question, and it’s a very intriguing mystery.

Certainly, the availability of an orchestral sample library in a MIDI-based interactive video game score would depend on factors that aren’t all hinged on the technical specs of the hardware.  Would development teams be willing to devote that amount of memory to a quality orchestral sample library?  As games continue to participate in a visual arms race, development teams devote available hardware horsepower to pixels and polygons… so would the music team be able to get a big enough slice of that pie to make a high-quality orchestral MIDI score possible?

I’m keeping my eyes open for developments in this area. Certainly, the return of MIDI could be a game changer for composers of interactive music, but only if the musical standards remain high, both in terms of the music compositions and the quality of the instruments used within them. Let me know in the comments if you’ve heard any news about the great MIDI comeback!