Welcome back to our five-part discussion of some of the best techniques that video game composers can use to enhance tension and promote suspenseful gameplay. These articles are based on the presentation I gave at this year’s Game Developers Conference in San Francisco, entitled Homefront to God of War: Using Music to Build Suspense. If you haven’t read our previous discussion of Ominous Ambiences in part one of this series, please go check that article out.
Are you back? Good! Let’s continue!
We’ve already talked about how to create an edgy, ominous atmosphere. By priming the player with an assortment of quietly unnerving sounds, we carefully nurture suspense and anxiety until the player is perfectly ready for…
The Jarring Jolt technique
This is the second technique we’ll be discussing in our five-part article series on the role of music in building suspense. Like the Ominous Ambience (which we discussed in part one), the Jarring Jolt also owes a debt to the expert work of sound designers. In fact, the Ominous Ambience and the Jarring Jolt are fairly interdependent. One doesn’t work that well without the other.
While much of Joonas Turner’s GDC Europe talk focused on issues that would chiefly concern sound designers, there were several interesting points for game composers to consider. I’ll be exploring those ideas in this blog.
Joonas is a video game sound designer and voice actor working within the E-Studio professional recording studio in Helsinki, Finland. His game credits include Angry Birds Transformers, Broforce, and Nuclear Throne. After briefly introducing himself, Joonas launches into his talk about creating an aural environment that “feels good” and also makes the game “feel good” to the player. He starts by identifying an important consideration that should guide our efforts right from the start.
Consider design first
Joonas Turner, sound designer at E-Studio.
In his talk, Joonas urges us to first consider the overall atmosphere of the game and the main focus of the player. Ideally, the player should be able to concentrate on gameplay to the exclusion of any distractions. The sound of a game should complement the gameplay and, wherever possible, deliver useful information to the player. If done perfectly, a player should be able to bypass the graphical user interface entirely in favor of sonic cues that deliver the same information. In this way, the player gets to keep attention completely pinned on the playing field, staying on top of the action at hand.
Clearly, sound effects are designed to serve this purpose, and Joonas discusses a strategy for maximizing the utility of sound effects as conveyors of information… but can music also serve this purpose? Can music deliver similar information to the player? I think that music can do this in various ways, by using shifts in mood, or carefully composed stingers, or other interactive techniques. By way of these methods, music can let the player know when their health is deteriorating, or when they’re out of ammo. Music can signal the appearance of new enemies or the successful completion of objectives. In fact, I think that music can be as informative as sound design.
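To make that idea concrete, here’s a purely hypothetical sketch (my own invention, not anything from Joonas’ talk) of how an interactive score might route gameplay events to musical responses. All of the event and cue names are made up for illustration; a real implementation would live inside an engine or audio middleware.

```python
# Hypothetical sketch: routing gameplay events to musical responses, so the
# player can "hear" game state without checking the UI. All names invented.

MUSIC_CUES = {
    "health_low":     {"action": "add_layer",    "name": "pulsing_low_strings"},
    "ammo_empty":     {"action": "play_stinger", "name": "hollow_click_motif"},
    "enemy_spawn":    {"action": "play_stinger", "name": "brass_warning"},
    "objective_done": {"action": "play_stinger", "name": "triumphant_cadence"},
}

class MusicDirector:
    """Maps gameplay events to musical responses."""

    def __init__(self):
        self.active_layers = set()
        self.log = []  # stand-in for actual audio playback calls

    def on_event(self, event):
        cue = MUSIC_CUES.get(event)
        if cue is None:
            return  # not every game event needs a musical response
        if cue["action"] == "add_layer":
            # mood shift: fade in an extra layer of the interactive score
            self.active_layers.add(cue["name"])
            self.log.append("layer on: " + cue["name"])
        else:
            # one-shot stinger with a clear musical meaning
            self.log.append("stinger: " + cue["name"])

director = MusicDirector()
director.on_event("health_low")
director.on_event("objective_done")
print(director.log)  # ['layer on: pulsing_low_strings', 'stinger: triumphant_cadence']
```

The design choice worth noticing is the single lookup table: keeping the event-to-music mapping in one place makes it easy for the composer and the gameplay team to agree on exactly which states the score will communicate.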
Music, sound design and voice-over: perfect together
As his GDC Europe talk proceeds, Joonas reminds us to think about how the music, sound design and voice-over will fit together within the overall frequency spectrum. It’s important to make sure that these elements will complement each other, with frequency ranges that spread evenly across the spectrum, rather than piling up together at the low or high end. With this in mind, Joonas suggests that the sound designer and composer should be brought together as early as possible to agree on a strategy for how these sonic elements will fit together in the game.
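Here’s a small, purely illustrative sketch of what such an early “spectrum agreement” could look like when written down. The band names and frequency edges are invented for the example (not a standard, and not anything Joonas prescribed); the point is simply that once the ranges are on paper, pile-ups can be spotted mechanically.

```python
# Hypothetical "spectrum budget" agreed between composer and sound designer.
# Band names and (low, high) frequency edges in Hz are invented examples.
BANDS_HZ = {
    "music_low":   (40, 200),
    "voice_over":  (200, 4000),
    "sfx_impacts": (60, 1500),
    "music_high":  (4000, 12000),
}

def overlaps(a, b):
    """True when two (low, high) frequency ranges overlap."""
    return a[0] < b[1] and b[0] < a[1]

# Flag any pair of elements that will compete for the same spectrum.
pileups = [(x, y) for x in BANDS_HZ for y in BANDS_HZ
           if x < y and overlaps(BANDS_HZ[x], BANDS_HZ[y])]
print(pileups)
```

In this made-up budget, the impact effects overlap both the low music band and the voice-over band, which is exactly the kind of conflict best discovered in an early conversation rather than in the final mix.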
(Here’s where Joonas brought up the first of two controversial ideas he presented during his talk. While I’m not sure I agree with these ideas, I think the viewpoints he expresses are probably shared amongst other sound designers in the game industry, and therefore could use some more open discussion in the game audio community.)
While composers for video games always want to create the best and most awesome music for their projects, Joonas believes that this desire is not always conducive to a good final result. He suggests that the soundtrack albums for video games are often more exciting and musically pleasing than the actual music from the game. With this in mind, Joonas thinks that composers should save their best efforts for the soundtrack, while structuring the actual in-game music to be simpler and less aesthetically interesting. In this way, the music can fit more comfortably into the overall aural design.
Your sonic brand
At this point in his presentation, Joonas urges the attendees to find aural styles that will be unique to their games. He tells the audience to avoid using a tired sonic signature in every game, such as the famous brassy “bwah” tone that became pervasively popular after its use in the movie Inception.
In 2012, Gregory Porter (an avid movie lover and creator of YouTube videos about the movies) created a fun video illustrating just how pervasive the infamous Inception “bwah” had actually become:
In my book, A Composer’s Guide to Game Music, I discuss the concept of creating a unique sonic identity for a game in the chapter on the “Roles and Functions of Music in Games.” In the book, I call this idea “sonic branding” (Chapter 6, page 112), wherein the composer writes such a distinctive musical motif, or creates such a memorable musical atmosphere, that the score becomes a part of the game’s brand.
When recording music or sound design for a project, Joonas tells us that it’s important to remain consistent with our gear choices. If a certain microphone has been used for a certain group of character voices, then that microphone should continue to be used for that purpose across the whole project. Likewise, the same digital signal processing applications or hardware (compression, limiting, saturation, etc.) should be used across the entire game, so that the aural texture remains consistent. Carrying Joonas’ idea into the world of game music, we would find ourselves sticking with the same instrument and vocal microphones, and favoring the same reverb and signal processing settings throughout the musical score for a game. This would ensure that the music maintained a unified texture and quality from the beginning of the game to the end.
Shorter is better
In his talk, Joonas shares his personal experience with sound effects designed to indicate a successful action – a button press that causes something to happen. Joonas tells us that for these sounds, shorter is definitely better. The most successful sounds feature a quick, crisp entrance followed by a swift release. A short sound designed in this way will be satisfying to trigger, and won’t become tiresome after countless repetitions.
For the composer, the closest analogy to this sort of sound effect is the musical stinger designed to be triggered when the player performs a certain action. In order to adhere to Joonas’ philosophy, we’d compose these stingers to have assertive entrances and quick resolves, so that they would be fun for the player even when repeated many times.
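For a rough feel of that envelope shape, here’s a hypothetical sketch that synthesizes a short tone with a crisp entrance and a swift exponential release. A real stinger would of course be composed and recorded music, not a raw sine tone; the sketch only illustrates the attack/release contour being described.

```python
import numpy as np

SAMPLE_RATE = 44100

def short_stinger(freq_hz=440.0, attack_s=0.005, decay_s=0.25):
    """Synthesize a short tone with a crisp entrance and a swift
    exponential release -- the repeat-friendly shape described above."""
    n = int(SAMPLE_RATE * (attack_s + decay_s))
    t = np.arange(n) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * freq_hz * t)
    attack_n = int(SAMPLE_RATE * attack_s)
    env = np.empty(n)
    env[:attack_n] = np.linspace(0.0, 1.0, attack_n)              # fast attack
    env[attack_n:] = np.exp(-6.0 * t[: n - attack_n] / decay_s)   # swift release
    return tone * env

clip = short_stinger()
print(f"{len(clip) / SAMPLE_RATE:.3f} s")  # roughly a quarter of a second
```

The whole sound lasts about a quarter of a second: long enough to register as musical, short enough that hundreds of repetitions won’t grate.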
To clip or not to clip…
(This is the second of the two controversial ideas Joonas presented in his talk. Again, while I don’t necessarily agree with this, I think it’s an idea that hasn’t been expressed often and may need further discussion.)
A volume unit (VU) meter registering some high audio levels.
The common wisdom amongst audio engineers is to avoid overloading the mix. Such overloads can produce clipping and create distortion, which deteriorates the overall sound quality of the game. However, Joonas suggests that for intense moments during gameplay, some clipping and distortion may actually enhance the sensation of anxiety and frenetic energy that such moments seek to elicit. According to Joonas, this enhancement can actually be a desirable outcome, and the sound designer should therefore not be afraid of such overloads and clipping during intense moments in a game.
How would this idea relate to music? Well, we’ve probably all heard examples of successful pop music that embraces sonic overload. Lead vocalists sometimes scream into microphones to produce overloads, or a wailing guitar riff may be recorded with lots of overload artifacts. As a deliberate effect placed carefully for the sake of drama, such brief moments of overload can add edginess to contemporary musical genres. However, we’ve all likely heard other examples of overloads that seem more the product of high decibel levels than of any deliberate processing. It’s important to differentiate a deliberate effect from an accidental one. In music at least, we always want to control the final outcome of the mix, including the presence or absence of overload distortion.
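The difference between the two can be sketched in code. Below is a hypothetical illustration contrasting deliberate, controlled overload (smooth tanh waveshaping, a common saturation technique) with the accidental kind (hard truncation at the converter’s ceiling):

```python
import numpy as np

def soft_clip(signal, drive=4.0):
    """Deliberate, controlled overload: tanh waveshaping saturates the
    peaks smoothly, adding harmonic grit while staying bounded."""
    return np.tanh(drive * signal) / np.tanh(drive)

def hard_clip(signal, ceiling=1.0):
    """Accidental-style overload: anything past the ceiling is simply
    truncated, which sounds far harsher."""
    return np.clip(signal, -ceiling, ceiling)

sr = 44100
t = np.arange(sr) / sr
too_hot = 1.5 * np.sin(2 * np.pi * 110 * t)  # a mix pushed past full scale

deliberate = soft_clip(too_hot)   # chosen amount of distortion
accidental = hard_clip(too_hot)   # whatever the ceiling happens to do
```

The `drive` parameter is the key: with waveshaping, the amount of distortion is an artistic decision the mixer dials in, rather than an accident of level.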
Joonas wound up his talk by urging attendees to always give priority to the elements in the sound mix that are most important. That would be a good guiding principle for music mixing as well. Joonas is an interesting thinker in the area of game sound design. He can be followed at his Twitter account, @KissaKolme. Please feel free to comment below about anything you’ve read in this blog, and let me know how you feel about the ideas we’ve discussed. I’d love to read your thoughts!
Winifred Phillips is an award-winning video game music composer whose most recent project is the triple-A first person shooter Homefront: The Revolution. Her credits include five of the most famous and popular franchises in video gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the Massachusetts Institute of Technology Press. As a VR game music expert, she writes frequently on the future of music in virtual reality video games. Follow her on Twitter @winphillips.
In a cool article for Ask Audio Magazine, G. W. Childs IV suggested 5 Unusual Things Every Sound Designer Should Try. These tactics were designed to shake the cobwebs off the creative process of sound designers, opening minds to new possibilities (including binaural recording techniques, plug-ins for randomizing audio content, adding reverb to dry audio sources by playing back the recordings in actual reverberant spaces, etc.)
In the spirit of that article, I’m going to offer 4 Unusual Things for a Game Composer to Try. If you’re a game composer, you can play with some of these techniques. I’m not going to say that you should, but if it sounds like fun, then go for it.
Use Sound Design Musically
One of the most energizing ways to get inspired is to use the actual aural building blocks of your game’s environment in a musical way. For instance, in the Speed Racer video game, I incorporated lots of sound effects associated with the sport of racing into the music: doppler effects were worked into musical transitions, tire screeches were mapped to the keyboard to accentuate their natural pitches so that they could be used harmonically, and crowd cheers were worked into the rhythm section. These elements helped my music feel more connected to the game, and kept me invigorated as I worked.
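That pitch-mapping idea can be illustrated with a quick hypothetical sketch. A sampler transposes a recording by changing its playback rate: a ratio of 2**(semitones/12) shifts the sound onto musical pitches so it can be played harmonically. The “screech” here is synthetic stand-in audio, not an actual recording from the project.

```python
import numpy as np

SR = 44100

def synth_screech(duration_s=0.3):
    """Synthetic stand-in for a recorded tire screech (noisy ~1 kHz tone)."""
    rng = np.random.default_rng(0)
    t = np.arange(int(SR * duration_s)) / SR
    return np.sin(2 * np.pi * 1000 * t) * rng.uniform(0.5, 1.0, t.size)

def repitch(sample, semitones):
    """Sampler-style transposition: resample at 2**(semitones/12) speed so
    the sound lands on a musical pitch. As on a classic sampler, the
    duration shortens as the pitch rises."""
    ratio = 2.0 ** (semitones / 12.0)
    positions = np.arange(0.0, len(sample), ratio)
    return np.interp(positions, np.arange(len(sample)), sample)

screech = synth_screech()
# Map the screech onto a major triad: root, major third, perfect fifth.
triad = [repitch(screech, st) for st in (0, 4, 7)]
```

Layering the three transposed copies yields a chord built entirely from the game’s own sound world, which is exactly what makes the technique feel connected to the gameplay.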
Get Sneaky with your Genres
Lately, the genre mashup has become very popular, in which two disparate musical styles are layered together in order to produce a novel effect. Mashups can help keep a composer inspired, but even better — why not sneak that second genre into your track? There’s no reason for us to be overt about it, and hiding a second genre within the first can give a composer a sense of wicked enjoyment. For instance, while creating music for such bright and airy projects as Shrek the Third and SimAnimals, I worked in subtle avant-garde orchestral approaches that included unusual meters and dissonance. The influences weren’t particularly overt, but they kept the composition process fresh and interesting for me and helped the music feel more unique.
Turn Tracks on their Heads
Some years back, I was involved in a project (which I will not name) that required me to take a large portion of music I had previously composed in one style and completely rework it into another style altogether. This was a thoroughly drastic change, from a light-hearted approach to a dour and heavy-handed instrumental treatment. The essential core elements of the track (meter, melody, tempo) had to remain the same, however. It was a challenging puzzle to solve, but it also opened me up to creative possibilities I wouldn’t have conceived any other way. In that spirit, if at any time we’re feeling creatively blocked while working on a track, it might be fun to turn the track on its head — change its essential personality — while maintaining its skeletal structure.
Don’t Forget Your Old Gear
As our careers progress, we’re likely to build up a large assortment of high-tech equipment and state-of-the-art software tools. After a while, we become accustomed to our workflow with these tools, and there ceases to be any novelty to the creative process. At these times, it can be fun to drag out the old gear and put it to use. Not only can the vintage stuff add some needed retro zest to our sound palettes, but it can also reignite those creative juices by reminding us of the days when we were starting out and filled with starry-eyed yearning.
So that’s it — 4 Unusual Things for a Game Composer to Try. If any of it sounds like it might be helpful, then please give it a whirl! And let me know in the comments if you have any other unusual strategies for getting the creative juices flowing.