Hey everyone! I’m video game composer Winifred Phillips. In the photo above I’m working on the project that launched my career as a game composer – God of War. Starting a viable career in the game development industry as a composer can be a daunting task, and I’m often asked for advice about how to break into this business. So each year I revisit the subject in an article that allows us to consider current ideas and strategies. Along the way, we contemplate multiple viewpoints, both from expert music and game audio practitioners and from anonymous game audio folks in community forums. This can be helpful, because the common wisdom on this subject changes in subtle but appreciable ways with each passing year. By revisiting the topic periodically, I hope that we’ll be able to obtain a deeper understanding of what it takes to land the coveted first gig as a composer of music for games.
Part of the reason I write this article each year is personal. My own “big break” story is so extraordinarily unusual that it can’t provide much useful guidance for newcomers. Being fortunate enough to have a famous game like God of War as your first game credit isn’t the typical entry path for a budding video game composer. Yet, because I’m a fairly visible member of the game audio community who has written a book called A Composer’s Guide to Game Music (pictured), I’m constantly asked for advice by aspiring composers who want to start their professional careers and are having trouble getting out of the gate. Since my own story is such a ‘bolt-of-lightning’ case study, I think it’s useful for us to examine the more traditional entry paths when we’re trying to understand how aspiring game composers can get their start. By the way, in case you’re wondering, here’s the story of how I landed my first gig – I told the story during a Society of Composers and Lyricists event in NYC, and it’s captured in this video:
Hey everybody! I’m videogame composer Winifred Phillips. Every year, between working in my studio creating music for some awesome games, I like to take a little time to gather together some of the top online resources and guidance available for newbies in the field of video game music. What follows in this article is an updated and expanded collection of links on a variety of topics pertinent to our profession. We begin with the concert tours and events where we can get inspired by seeing game music performed live. Then we’ll move on to a discussion of online communities that can help us out when we’re trying to solve a problem. Next, we’ll see a collection of software tools that are commonplace in our field. Finally, we’ll check out some conferences and academic organizations where we can absorb new ideas and skills.
When I’m not at work in my studio making music for games, I like to keep up with new developments in the field of interactive entertainment, and I’ll often share what I learn here in these articles. Virtual reality is an awesome subject for study for a video game composer, and several of my recent projects have been in the world of VR. Since I’m sure that most of us are curious about what’s coming next in virtual reality, I’ve decided to devote this article to a collection of educational resources. I’ve made a point of keeping our focus general here, with the intent of understanding the role of audio in VR and the best resources available to audio folks. As a component of the VR soundscape, our music must fit into the entire matrix of aural elements, so we’ll spend this article learning about what goes into making expert sound for a virtual reality experience. Let’s start with a few articles that discuss methods and techniques for VR audio practitioners.
I’m pleased to announce that my book, A Composer’s Guide to Game Music, is now available in its new paperback edition! I’m excited that my book has done well enough to merit a paperback release, and I’m looking forward to getting to know a lot of new readers! The paperback is much lighter and more portable than the hardcover. Here’s a view of the front and back covers of the new paperback edition of my book (click the image for a bigger version if you’d like to read the back cover):
As you might expect, many aspiring game composers read my book, and I’m honored that my book is a part of their hunt for the best resources to help them succeed in this very competitive business. When I’m not working in my music studio, I like to keep up with all the great new developments in the game audio field, and I share a lot of what I learn in these articles. Keeping in mind how many of my readers are aspiring composers, I’ve made a point of devoting an article once a year to gathering the top online guidance currently available for newcomers to the game music profession. In previous years I’ve focused solely on recommendations gleaned from the writings of game audio pros, but this time I’d like to expand that focus to include other types of resources that could be helpful. Along the way, we’ll be taking a look at some nuggets of wisdom that have appeared on these sites. So, let’s get started!
Welcome back to my series of blogs that collect some tutorial resources about game music middleware for the game music composer. I had initially intended to publish two blog entries on this subject, focusing on the most popular audio middleware solutions: Wwise and FMOD. However, since the Fabric audio middleware has been making such a splash in the game audio community, I thought I’d extend this series to include it. If you’d like to read the first two blog entries in this series, you can find them here:
Fabric is developed by Tazman Audio for the Unity game engine (which enables game development for consoles, PCs, mobile devices such as iOS and Android, and games designed to run within a web browser). Here’s a Unity game engine overview produced by Unity Technologies:
The first video was shot in 2013 during the Konsoll game development conference in Norway, and gives an overview of the general use of Fabric in game audio. The speaker, Jory Prum, is an accomplished game audio professional whose game credits include The Walking Dead, The Wolf Among Us, Broken Age, SimCity 4, Star Wars: Knights of the Old Republic, and many more.
Making a great sounding Unity game using Fabric
In the next two-part video tutorial, composer Anastasia Devana has expanded on her previous instructional videos about FMOD Studio, focusing now on recreating the same music implementation strategies and techniques using the Fabric middleware in Unity. Anastasia Devana is an award-winning composer whose game credits include the recently released puzzle game Synergy and the upcoming roleplaying game Anima – Gate of Memories.
Fabric and Unity: Adaptive Music in Angry Bots – Part 1
Fabric and Unity: Adaptive Music in Angry Bots – Part 2
In a previous blog post, we took a look at a few tutorial resources for the latest version of the Wwise audio middleware. One of the newest innovations in the Wwise software package is a fairly robust MIDI system. This system affords music creators and implementers the opportunity to avail themselves of the extensive adaptive possibilities of the MIDI format from within the Wwise application. Last month, during the Game Developers Conference in the Moscone Center in San Francisco, some members of the PopCap audio development team presented a thorough, step-by-step explanation of the benefits of this MIDI capability for one of their latest projects, Peggle Blast. Since my talk during the Audio Bootcamp at GDC focused on interactive music and MIDI (with an eye on the role of MIDI in both the history and future of game audio development), I thought that we could all benefit from a summation of some of the ideas discussed during the Peggle Blast talk, particularly as they relate to dynamic MIDI music in Wwise. In this blog, I’ve tried to convey some of the most important takeaways from this GDC presentation.
“Peggle Blast: Big Concepts, Small Project” was presented on Thursday, March 5th by three members of the PopCap audio team: technical sound designer RJ Mattingly, audio lead Jaclyn Shumate, and senior audio director Guy Whitmore. The presentation began with a quote from Igor Stravinsky:
The more constraints one imposes, the more one frees oneself, and the arbitrariness of the constraint serves only to maintain the precision of the execution.
This idea became a running theme throughout the presentation, as the three audio pros detailed the constraints under which they worked, including:
A 5 MB memory limit for all audio assets
A 2.5 MB memory allocation for the music elements
These constraints were a result of the mobile platforms (iOS and Android) for which Peggle Blast had been built. For this reason, the music team focused their attention on sounds that could convey lots of emotion while also maintaining a very small file size. Early experimentation with tracks structured around a music box instrument showed the team that they still needed to replicate the musical experience of the full-fledged console versions of the game: a simple music-box score proved unsatisfying, particularly for players who were familiar with the music from the previous installments in the franchise. With that in mind, the team concentrated on very short orchestral samples taken from the previous orchestral session recordings for Peggle 2. Let’s take a look at a video from those orchestral sessions:
Using those orchestral session recordings, the audio team created custom sample banks that were tailored specifically to the needs of Peggle Blast, focusing on lots of very short instrument articulations and performance techniques including:
A few instruments (including a synth pad and some orchestral strings) were edited to loop so that extended note performances became possible, but the large majority of instruments remained brief, punctuated sounds that did not loop. These short sounds were arranged into sample banks in which one or two note samples would be used per octave of instrument range, and note tracking would transpose the sample to fill in the rest of the octave. The sample banks consisted of a single layer of sound, which meant that the instruments did not adjust their character depending on dynamics/velocity. In order to make the samples more musically pleasing, the built-in digital signal processing capability of Wwise was employed by way of a real-time reverb bus that allowed these short sounds to have more extended and natural-sounding decay times.
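To make the note-tracking idea concrete, here’s a minimal Python sketch of how a sample bank like the one described above can cover a full instrument range from only a few recorded notes. This is purely illustrative (it isn’t PopCap’s code or the Wwise API); the function names, file names, and root-note choices are all my own assumptions:

```python
# Hypothetical sketch of "note tracking": a bank keeps one recorded
# sample per octave, and any other note is produced by transposing
# (resampling) the nearest recorded sample.

def playback_rate(root_midi_note: int, target_midi_note: int) -> float:
    """Rate multiplier that pitch-shifts a sample via resampling."""
    return 2.0 ** ((target_midi_note - root_midi_note) / 12.0)

def nearest_sample(bank: dict, target_midi_note: int) -> int:
    """Pick the recorded root note closest to the requested note."""
    return min(bank, key=lambda root: abs(root - target_midi_note))

# One short pizzicato sample recorded per octave (roots at C3, C4, C5).
pizzicato_bank = {48: "pizz_c3.wav", 60: "pizz_c4.wav", 72: "pizz_c5.wav"}

root = nearest_sample(pizzicato_bank, 64)  # E4 -> use the C4 sample
rate = playback_rate(root, 64)             # ~1.26x speed = up 4 semitones
```

Because each bank holds a single dynamic layer and only one or two samples per octave, the memory cost stays tiny, while the real-time reverb bus restores a natural decay that such short samples lack on their own.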
The audio team worked with a beta version of Wwise 2014 during development of Peggle Blast, which allowed them to implement their MIDI score into the Unity game engine. Composer Guy Whitmore wrote the music in a style of whimsically pleasant, non-melodic patterns structured into a series of chunks. These chunks could be triggered by the adaptive system in Peggle Blast, wherein the music went through key changes (invariably following the circle of fifths) in reaction to the player’s progress. To better see how this works, let’s watch an example of some gameplay from Peggle Blast:
As you can see, very little in the way of a foreground melody existed in this game. In the place of a melody, foreground musical tones would be emitted when the Peggle ball hit pegs during its descent from the top of the screen. These tones would follow a predetermined scale, and would choose which type of scale to trigger (major, natural minor, harmonic minor, or mixolydian) depending on the key in which the music was currently playing. Information about the key was dropped into the music using markers that indicated where key changes took place, so that the Peggle ball would always trigger the correct type of scale at any given time. The MIDI system did not have to store unique MIDI data for scales in every key change, but would instead calculate the key transpositions for each of the scale types, based on the current key of the music that was playing.
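The key-change and scale logic described above can be sketched in just a few lines. This is an illustrative Python reconstruction, not PopCap’s actual Wwise setup; the function names, MIDI note numbers, and interval tables are my own assumptions about how such a system could work:

```python
# Keys advance around the circle of fifths as the player progresses,
# and each scale type is stored once as a set of intervals, then
# transposed into the current key at runtime -- rather than storing
# separate MIDI data for every possible key.

CIRCLE_STEP = 7  # a perfect fifth, in semitones

SCALE_INTERVALS = {
    "major":          [0, 2, 4, 5, 7, 9, 11],
    "natural_minor":  [0, 2, 3, 5, 7, 8, 10],
    "harmonic_minor": [0, 2, 3, 5, 7, 8, 11],
    "mixolydian":     [0, 2, 4, 5, 7, 9, 10],
}

def key_after_progress(start_root: int, steps: int) -> int:
    """Pitch class of the key reached after `steps` moves by fifths."""
    return (start_root + CIRCLE_STEP * steps) % 12

def peg_hit_note(key_root: int, scale_type: str, hit_index: int) -> int:
    """MIDI note for the nth consecutive peg hit, climbing the scale."""
    intervals = SCALE_INTERVALS[scale_type]
    octave, degree = divmod(hit_index, len(intervals))
    return key_root + 12 * octave + intervals[degree]

# Two key changes after starting in C (pitch class 0) land on D (2),
# and the eighth peg hit in C major (root C4 = 60) is C5, an octave up.
current_key = key_after_progress(0, 2)
hit = peg_hit_note(60, "major", 7)
```

The memory payoff is exactly what the presenters described: the four scale types are stored once, and the markers embedded in the music supply the current key root, so every key change comes free of additional MIDI data.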
The presentation ended with an emphasis on the memory savings and flexibility afforded by MIDI, and the advantages that MIDI presents to game composers and audio teams. It was a very interesting presentation! If you have access to the GDC Vault, you can watch a video of the entire presentation online. Otherwise, there are plenty of other resources on the music of Peggle Blast, and I’ve included a few below:
Instead of focusing on the yearly salaries/earnings of audio professionals, the survey concentrated on the money generated by the music/sound of individual projects. Each respondent could fill out the survey repeatedly, entering data for each game project that the respondent had completed during the previous year. The final results of the survey are meant to reflect how game audio is treated within different types of projects, and the results are quite enlightening, and at times surprising.
The financial results include both small-budget indie games from tiny teams and huge-budget games from behemoth publishers, so there is a broad range in those results. Since this is the first year that the GameSoundCon Game Audio Industry Survey has been conducted, we don’t yet have data from a previous year with which to compare these results, and it might be very exciting to see how the data shifts if the survey is conducted again in 2015.
Some very intriguing data comes from the section of the survey that provides a picture of who game composers are and how they work. According to the survey, the majority of game composers are freelancers, and 70% of game music is performed by the freelance composer alone. 56% of composers are also acting as one-stop-shops for music and sound effects, likely providing a good audio solution for indie teams with little or no audio personnel of their own.
A surprising and valuable aspect of the survey is to be found in the audio middleware results, which show that the majority of games use either no audio middleware at all, or opt for custom audio tools designed by the game developer. This information is quite new, and could be tremendously useful to composers working in the field. While we should all make efforts to gain experience with audio middleware such as FMOD and Wwise, we might keep in mind that there may not be as many opportunities to practice those skills as had been previously anticipated. Again, this data might be rendered even more meaningful by the results of the survey next year (if it is repeated), to see if commercial middleware is making inroads and becoming more popular over time.
Expanding upon this subject, the survey reveals that only 22% of composers are ever asked to do any kind of music integration (in which the composer assists the team in implementing music files into their game). It seems that for the time being, this task is still falling firmly within the domain of the programmers on most game development teams.
The survey was quite expansive and fascinating, and I’m very pleased that it included questions about both middleware and integration. If GameSoundCon runs the survey again next year, I’d love to see the addition of some questions about what type of interactivity composers may be asked to introduce into their musical scores, how much of their music is composed in a traditionally linear fashion, and what the ratio of interactive/adaptive to linear music might be per project. I wrote rather extensively on this subject in my book, and since I’ll also be giving my talk at GameSoundCon this year about composing music for adaptive systems, I’d be very interested in such survey results!
The GameSoundCon Game Audio Industry Survey is an invaluable resource, and is well worth reading in its entirety. You’ll find it here. I’ll be giving my talk on “Advanced Composition Techniques for Adaptive Systems” at GameSoundCon at the Millennium Biltmore Hotel in Los Angeles on Wednesday, October 8th.
Many thanks to Brian Schmidt / GameSoundCon for preparing this excellent survey!