By Winifred Phillips
Welcome back to our three-article series dedicated to collecting and exploring the ideas that were discussed in five different GDC 2017 audio talks about interactive music! These five speakers shared ideas they’d developed in the process of creating interactivity in the music of their own game projects. We’re looking at these ideas side-by-side to cultivate a sense of the “bigger picture” when it comes to the leading-edge thinking for music interactivity in games. In the first article, we looked at the basic nature of five interactive music systems discussed in these five GDC 2017 presentations:
- Always Be Composing: The Flexible Music System of ‘Plants vs. Zombies Heroes’ – Speaker: Becky Allen, Audio Director
- Different Approaches to Game Music (Audio Bootcamp XVI) – Speaker: Leonard J. Paul, Educator
- Epic AND Interactive Music in ‘Final Fantasy XV’ – Speaker: Sho Iwamoto, Audio Programmer
- Interactive Music Approaches (Audio Bootcamp XVI) – Speaker: Steve Green, Sound Designer
- The Sound of ‘No Man’s Sky’ – Speaker: Paul Weir, Audio Director
If you haven’t read part one of this article series, please go do that now and come back.
Okay, so let’s now contemplate some simple but important questions: why were those systems used? What was attractive about each interactive music strategy, and what were the challenges inherent in using those systems?
The Pros and Cons
In this discussion of the advantages and disadvantages of musical interactivity, let’s start with the viewpoint of Sho Iwamoto, audio programmer of Final Fantasy XV for Square Enix. He articulates a perspective on interactive music that’s rarely given voice in the game audio community. “So first of all,” Iwamoto says, “I want to clarify that the reason we decided to implement interactive music is not to reduce repetition.”
Those of us who have been in the game audio community for years have probably heard countless expert discussions of how crucial it is for video game composers to reduce musical repetition, and how powerful interactivity can be in eliminating musical recurrences in a game. But for Iwamoto, this consideration is entirely beside the point. “Repeating music is not evil,” he says. “Of course, it could be annoying sometimes, but everyone loves to repeat their favorite music, and also, repetition makes the music much more memorable.” So, if eliminating repetition was not at the top of Iwamoto’s list of priorities, then what was?
“We used (musical interactivity) to enhance the user’s emotional experience by playing music that is more suitable to the situation,” Iwamoto explains, also adding that he wanted “to make transitions musical, as much as possible.” So, if the best advantage of musical interactivity for Iwamoto was an enhanced emotional experience for gamers, then what was the biggest drawback?
For Iwamoto, the greatest struggle arose from the desire to focus on musicality and melodic content, with the intent to present a traditionally epic musical score that maintained its integrity within an interactive framework. Often, these two imperatives seemed to smash destructively into each other. “At first it was like a crash of the epic music and the interactive system,” he says. “How can I make the music interactive while maintaining its epic melodies? Making music interactive could change or even screw up the music itself, or make the music not memorable enough.”
My perspective on epic interactive music
Sho Iwamoto makes a very good point about the difficulty of combining epic musicality with an interactive structure. For the popular LittleBigPlanet Cross Controller game for Sony Europe, I dealt with a very similar conundrum. The development team asked me to create an epic orchestral action-adventure track that would be highly melodic but also highly interactive. Balancing the needs of the interactivity with the needs of an expressive action-adventure orchestral score proved to be very tricky. I structured the music around a six-layer system of vertical layering, wherein the music was essentially disassembled by the music engine and reassembled in different instrument combinations depending on the player’s progress. Here’s a nine-minute gameplay video in which this single piece of music mutates and changes to accommodate the gameplay action:
Leonard J. Paul’s work on the platformer Vessel also hinged on a vertical layering music system. However, the biggest advantage of the vertical layering music system for Paul was in its ability to adapt existing music into an interactive framework. Working with multiple licensing agencies, the development team for Vessel was able to obtain a selection of songs for their game project while it was still early in development. The songs became rich sources of inspiration for the development team. “They had made the game listening to those songs so the whole entire game was steeped in that music,” Paul observes.
Nevertheless, the situation also presented some distinct disadvantages. “The licensing for those ten tracks took eight months,” Paul admits, then he goes on to describe some of the other problems inherent in adapting preexisting music for interactivity. “It’s really hard to remix someone else’s work so that it has contour yet it stays consistent,” Paul says, “So it doesn’t sound like, oh, I figured out something new in the puzzle or I did something wrong, just because there’s something changing in the music.” In order to make the music convey a single, consistent atmosphere, Paul devoted significant time and energy to making subtle, unnoticeable adjustments to the songs. “It’s very hard to make your work transparent,” Paul points out.
For sound designer Steve Green’s work on the music of the underwater exploration game ABZU, the main advantage of the interactive music system was its ability to customize the musical content to the progress of the player by calling up location-specific tracks during exploration, without needing to make any significant changes to the content of those music files. “So it’s mainly not the fact that we’re changing the music itself as you’re playing it, we’re just helping the music follow you along,” Green explains. This enabled the music to “keep up with you as you’re playing the game, so it’s still interactive in a sense in that it’s changing along with the player.”
While this was highly desirable, it also created some problems when one piece of music ended and another began, particularly if the contrast between the two tracks was steep. “The dilemma we faced was going in from track one to track two,” Green observes. For instance, if an action-oriented piece of music preceded a more relaxed musical composition, then “there was a high amount of energy that you just basically need to get in and out of.”
My perspective on interactive transitions
Steve Green makes a great point about the need for transitions when moving between different energy levels in an interactive musical score. I encountered a similar problem regarding disparate energy levels that required transitions when I composed the music for the Speed Racer video game (published by Warner Bros Interactive). During races, the player would have the option to enter a special mode called “Zone Mode” in which their vehicle would travel much faster and would become instantly invincible. During those sequences, the music switched from the main racing music to a much-more energetic track, and it became important for me to build a transition into that switch-over so that the change wouldn’t be jarring to the player. I describe the process in this tutorial video:
While sometimes a game audio team will choose an interactive music system strictly based on its practical advantages, there are also times in which the decision may be influenced by more emotional factors. “We love MIDI,” confesses Becky Allen, audio director for the Plants vs. Zombies: Heroes game for mobile devices. In fact, the development team, PopCap Games, has a long and distinguished history of innovative musical interactivity using the famous MIDI protocol. During the Plants vs. Zombies: Heroes project, MIDI was a powerful tool for the audio team. “It really was flexible, it was something you really could work with,” Allen says.
However, that didn’t mean that the MIDI system didn’t create some problems for the audio team. Early on in development for Plants vs. Zombies: Heroes, the team decided to record their own library of 24 musical instrument sounds for the game. But during initial composition, those instruments weren’t yet available. This led to an initial reliance on a pre-existing library (East West Symphonic Orchestra). “We were undergoing this sample library exercise, knowing that we’d be moving over to those samples eventually,” Allen says. The East West samples served well during initial composition, but they were fundamentally different from PopCap’s own recordings. “Our PopCap sample library is fantastic too, but it’s totally different,” Allen adds. “So the sounds were not the same, and the music, even though they were the same cues, just felt wrong.” Allen advises, “I think it’s very important, if you can, to write to the sample library that you’ll be using ultimately at the end.”
For Paul Weir’s work on the space exploration game No Man’s Sky, the motivation to use a procedural music system was also partly influenced by emotional factors. “I really enjoy ceding control to the computer, giving it rules and letting it run,” Weir confides. But there were other motivating influences as well. According to Weir, the advantages of procedural music rest with its unique responsiveness to in-game changes. “Procedural audio, to make it different, to make it procedural, it has to be driven by the game,” Weir says. “What are you doing, game? I’m going to react to that in some way, and that’s going to be reflected in the sound I’m producing. In order to do that,” Weir adds, “it has to use some form of real-time generated sound.” According to Weir, “procedural audio is the creation of sound in real-time, using synthesis techniques such as physical modeling, with deep links into game systems.”
While this gives a procedural music system the potential to be the most pliable and reactive system available for modern game design, there are steep challenges inherent in its structure. “Some of the difficulties of procedural generated content,” Weir explains, “is to give a sense of its meaningfulness, like it feels like it’s hand crafted.” In a moment of personal reflection, Weir shares, “One of my big issues, is that if you have procedural audio, that the perception of it has to be as good as traditional audio. It’s no good if you compromise.”
So, for each of these interactive music systems there were distinct advantages and disadvantages. In the third and final article of this series, we’ll get down to some nitty-gritty details of how these interactive systems were put to use. Thanks for reading, and please feel free to leave your comments in the space below!
Winifred Phillips is an award-winning video game music composer whose most recent projects are the triple-A first person shooter Homefront: The Revolution and the Dragon Front VR game for Oculus Rift. Her credits include games in five of the most famous and popular franchises in gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the MIT Press. As a VR game music expert, she writes frequently on the future of music in virtual reality games. Follow her on Twitter @winphillips.
Love your tips, Winifred! What are your thoughts on using a mix of analog synths like the Moog with computer soft synths such as Zebra and Serum in video game production?
Hey, Ben! Glad you enjoyed the article! I think mixing up your sound palette with different kinds of synths can be very inspiring, depending on the project you’re working on. If the project supports this kind of approach, then go for it!