Glad you’re here! I’m video game music composer Winifred Phillips. Today I’d like to share some news about one of my latest projects: the newest installment in an internationally acclaimed fantasy RPG franchise known as The Dark Eye. During our discussion, we’ll break down the structure of one of the most important pieces of music I composed for that game.
The latest entry in the award-winning Dark Eye video game franchise will be released in Spring 2020 under the title The Dark Eye: Book of Heroes. Before we begin discussing this project and one of the pieces of music I composed for it, let’s take a look at the announcement trailer recently released by the publisher Ulisses Games. The trailer prominently features a sizable portion of the main theme I composed for the game:
As you can see from the gameplay captured in the trailer, The Dark Eye: Book of Heroes is an isometric real-time roleplaying game. The developers have compared its gameplay to top RPGs from the classic era like Baldur’s Gate and Neverwinter Nights. The game offers both solo missions and cooperative adventures designed for up to four players. Most importantly, the developers stress in an interview that their game will be faithful to the awesome fantasy world of the renowned RPG franchise – it will be “the most Dark Eye game ever.”

Composing a main theme is a heavy responsibility, since main theme tracks tend to be regarded as especially important in a composer’s body of work. Just this week (Nov. 9th) I was interviewed on the Sound Of Gaming radio show on BBC Radio 3, and the main theme for The Dark Eye: Book of Heroes premiered on that broadcast, spotlighting my work as a game composer. The entire show is available to listen to at this link from now until Dec. 8th. A main theme is not only a prominent showcase of a composer’s abilities; it also serves a crucial function within the game’s overall score. So let’s explore that idea further.
MIDI seems to be a hot topic in game audio once again. At least, that was my impression a couple of months ago when I attended the audio track of the Game Developers Conference. Setting a new record for attendance, GDC hosted over 24,000 game industry pros who flocked to San Francisco’s Moscone Center in March for a full week of presentations, tutorials, panels, awards shows, press conferences and a vibrant exposition floor filled with new tech and new ideas. As one of those 24,000 attendees, I enjoyed meeting up with lots of my fellow game audio folks, and I paid special attention to the presentations focusing on game audio. Amongst the tech talks and post-mortems, I noticed a lot of buzz about a subject that used to be labeled as very old-school: MIDI.
This was particularly emphasized by all the excitement surrounding the new MIDI capabilities in the Wwise middleware. In October of 2014, Audiokinetic released the most recent version of Wwise (2014.1), which introduced a number of enhanced features, including “MIDI support for interactive music and virtual instruments (Sampler and Synth).” Wwise now allows the incorporation of MIDI that triggers either a built-in sound library in Wwise or a user-created one. Since I talk about the future of MIDI game music in my book, A Composer’s Guide to Game Music, and since this has become a subject of such avid interest in our community, I thought I’d do some research on this newest version of Wwise and post a few resources that could come in handy for any of us interested in embarking on a MIDI game music project using Wwise 2014.1.
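Before diving into those resources, here’s a quick taste of what the new MIDI support can look like from the game code side. To be clear, this is purely my own illustration rather than anything from Audiokinetic’s documentation: the event name “Play_MIDI_Music” is hypothetical, and the calls follow the 2014.1-era API as I understand it, so double-check the official docs for your SDK version.

```cpp
// Minimal sketch: posting a MIDI note to a Wwise event at runtime.
// Assumes a Wwise project containing an event "Play_MIDI_Music" (a
// hypothetical name) whose target is a MIDI-capable object, such as
// the new Synth One instrument or a sampler.

#include <AK/SoundEngine/Common/AkSoundEngine.h>

void PlayMidiNote(AkGameObjectID gameObject, AkUInt8 note, AkUInt8 velocity)
{
    const AkUniqueID eventID =
        AK::SoundEngine::GetIDFromString("Play_MIDI_Music");

    AkMIDIPost noteOn;
    noteOn.byType = AK_MIDI_EVENT_TYPE_NOTE_ON;  // standard MIDI note-on (0x90)
    noteOn.byChan = 0;                           // MIDI channel 1
    noteOn.NoteOnOff.byNote = note;              // e.g. 60 = middle C
    noteOn.NoteOnOff.byVelocity = velocity;      // 0-127
    noteOn.uOffset = 0;                          // sample offset: play immediately

    // Post one MIDI event to the target of "Play_MIDI_Music" on this game object.
    AK::SoundEngine::PostMIDIOnEvent(eventID, gameObject, &noteOn, 1);
}
```

Inside the authoring tool, the synth or sampler receiving these notes behaves like any other Wwise sound object, so all the usual real-time effects and mixing apply.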
The first is a video produced by Damian Kastbauer, technical audio lead at PopCap Games and the producer and host of the now-famous Game Audio Podcast series. Released in April of 2014, the video included a preview of the then-forthcoming MIDI and synthesizer features of the new Wwise middleware tool. In it, Damian takes us through the newest version of the “Project Adventure” tutorial prepared by Audiokinetic, makers of Wwise. In the process, he gives us a great, user-friendly introduction to the MIDI capabilities of Wwise.
The next videos were produced by Berrak Nil Boya, a composer and contributing editor at the Designing Sound website. In these videos, Berrak explores some of the more advanced applications of the MIDI capabilities of Wwise, starting with the procedure for routing MIDI data directly into Wwise from more traditional MIDI sequencer software, such as that found in a Digital Audio Workstation (DAW) application. This process allows a composer to work within more traditional music software and then route the MIDI output directly into Wwise. Berrak takes us through the process in this two-part video tutorial:
Finally, Berrak Nil Boya has created a video tutorial on the integration of Wwise into Unity 5, using MIDI. Her explanation of the preparation of a soundbank and the association of MIDI note events with game events is very interesting, and provides a nicely practical application of the MIDI capability of Wwise.
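For those curious how that integration looks beneath the Unity editor workflow, here’s a rough sketch of the game-side half against the native Wwise SDK (Unity’s Wwise integration wraps these same native calls in C#). The bank name and the coin-pickup mapping are invented for illustration, not taken from Berrak’s tutorial:

```cpp
// Rough sketch of the game-side half of a MIDI-driven Wwise setup:
// load the soundbank containing the MIDI-capable instruments, then
// fire MIDI notes in response to game events. "MIDI_Music.bnk" and
// the coin-pickup mapping are hypothetical examples.

#include <AK/SoundEngine/Common/AkSoundEngine.h>

bool InitMusicBank()
{
    AkBankID bankID;
    // 2014.1-era signature; newer SDKs dropped the memory-pool parameter.
    return AK::SoundEngine::LoadBank("MIDI_Music.bnk",
                                     AK_DEFAULT_POOL_ID, bankID) == AK_Success;
}

// Tie a gameplay event (say, the player collecting a coin) to a MIDI note.
void OnCoinCollected(AkGameObjectID player)
{
    AkMIDIPost post;
    post.byType = AK_MIDI_EVENT_TYPE_NOTE_ON;
    post.byChan = 0;
    post.NoteOnOff.byNote = 72;       // C5, an arbitrary "coin" pitch
    post.NoteOnOff.byVelocity = 100;
    post.uOffset = 0;

    AK::SoundEngine::PostMIDIOnEvent(
        AK::SoundEngine::GetIDFromString("Play_MIDI_Music"),
        player, &post, 1);
}
```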
LittleBigPlanet 3 and Beyond: Taking Your Score to Vertical Extremes
I was honored to be selected by the Game Developers Conference Advisory Board to present two talks during this year’s GDC in San Francisco earlier this month. On Friday, March 6th, I presented a talk on the music system of the LittleBigPlanet franchise. Entitled “LittleBigPlanet 3 and Beyond: Taking Your Score to Vertical Extremes,” the talk explored the Vertical Layering music system that has been employed in all of the LittleBigPlanet games (the LittleBigPlanet 3 soundtrack is available here). I’ve been on the LittleBigPlanet music composition team for six of their games so far, and my talk used many examples from musical compositions I created for all six of those projects.
After my talk, several audience members let me know that the section of my presentation covering the music system for the Pod menu of LittleBigPlanet 3 was particularly interesting – so I thought I’d share the concepts and examples from that part of my presentation in this blog.
That’s me, giving my GDC speech on the interactive music system of the LittleBigPlanet franchise. Here I’m just starting the section about the Pod menu music.
The audio team at Media Molecule conceived the dynamic music system for the LittleBigPlanet franchise. According to the franchise’s music design brief, all interactive tracks in LittleBigPlanet games must be arranged in a vertical layering system. I discussed this type of interactive music in a blog I published last year, but I’ll recap the system briefly here as well. In a vertical layering music system, the music is not captured in a single audio recording. Instead, several audio recordings play in sync with one another, and each layer of musical sound features unique content. Each of the layers represents a certain percentage of the entire musical composition. Played all together, they form the full mix embodying the entire composition. Played separately, they yield submixes that are still satisfying and entertaining in their own right. The music system can play all the layers either together or separately, or can combine them into different sets that represent a portion of the whole mix.
When implemented into gameplay, layers are often activated when the player moves into a new area. This helps the music to feel responsive to the player’s actions. The music seems to acknowledge the player’s progress throughout the game. It’s important to think about the way in which individual layers may be activated, and the functions that the layers may be called upon to serve during the course of the game.
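To make the mechanics concrete, here’s a toy sketch of how a vertical layering controller might work. This is my own illustration, not the actual LittleBigPlanet implementation: it assumes all the layer stems start playback in sample-sync, so activating a layer is simply a matter of ramping its gain while the full composition keeps playing underneath.

```cpp
// Toy vertical-layering controller (illustrative only). All stems are
// assumed to begin playback in sample-sync; "activating" a layer just
// ramps its gain up, so only the mix changes -- never the music's position.

#include <algorithm>
#include <string>
#include <vector>

struct Layer {
    std::string name;
    float gain = 0.0f;    // current gain, 0..1
    float target = 0.0f;  // where the gain is heading
};

class VerticalLayerMixer {
public:
    void AddLayer(const std::string& name) { layers_.push_back(Layer{name}); }

    void SetActive(const std::string& name, bool active) {
        for (auto& l : layers_)
            if (l.name == name) l.target = active ? 1.0f : 0.0f;
    }

    // Call once per audio block. A shipping mixer might use an
    // equal-power curve here instead of a plain linear ramp.
    void Update(float dtSeconds, float fadePerSecond = 0.5f) {
        const float step = fadePerSecond * dtSeconds;
        for (auto& l : layers_) {
            if (l.gain < l.target) l.gain = std::min(l.gain + step, l.target);
            else                   l.gain = std::max(l.gain - step, l.target);
            // Apply l.gain to this layer's audio stream here.
        }
    }

private:
    std::vector<Layer> layers_;
};
```

With a controller like this, the player crossing into a new area becomes a single call such as SetActive("strings", true), with the next few Update calls fading the layer in smoothly.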
In LittleBigPlanet 3, the initial menu system for the game is called “The Pod.” The music for the Pod is arranged in vertical layers that are activated and deactivated according to where the player is in the menu hierarchy. All the layers can be played simultaneously, and they play in multiple combinations… however, each of the individual layers is also associated with a specific portion of the menu system, and is activated when the player enters that particular part of the menu.
Let’s take a quick tour through the layers of the Pod menu music. I’ve embedded some short musical excerpts of each layer. You’ll find the SoundCloud players for each layer embedded below – just click the Play buttons to listen to each excerpt. The first layer of the Pod menu music is associated with the Main Menu, and it features some floaty, science-fiction-inspired textures and effects:
The next layer is associated with a menu labeled “My Levels,” and the music for that layer is very different. Now, woodwinds are accompanied by a gentle harp, combining to create a homey and down-to-earth mood:
Moving on to the music layer for the “Play” menu, we find that the instrumentation now features an ethereal choir and shimmering bells, expressing a much more celestial atmosphere:
Now let’s listen to the “Adventure” menu layer, in which plucked strings and bells combine to deliver a prominent melody line:
Finally, in the music layer associated with the “Community” and “Popit” menus, we hear a quirky mix of synths and effects that hearken back to menu music from previous games in the LittleBigPlanet franchise:
As the player navigates the Pod menu system, these various music layers are activated to correspond with the player’s location within the menu hierarchy. This sort of dynamic music triggering lies at the very heart of the Vertical Layering interactive music mechanism.
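In code terms, the glue between a menu hierarchy and the music can be remarkably thin. Here’s a hypothetical sketch built on the toy mixer from earlier; the enum mirrors the menus described above, but the layer names are my own labels, not the shipped asset names:

```cpp
// Hypothetical glue between the Pod menu hierarchy and the toy
// VerticalLayerMixer sketched earlier. Layer names are invented labels.

enum class PodMenu { Main, MyLevels, Play, Adventure, Community };

void OnMenuChanged(VerticalLayerMixer& mixer, PodMenu previous, PodMenu current)
{
    auto layerFor = [](PodMenu m) {
        switch (m) {
            case PodMenu::Main:      return "sci_fi_textures";
            case PodMenu::MyLevels:  return "woodwinds_and_harp";
            case PodMenu::Play:      return "choir_and_bells";
            case PodMenu::Adventure: return "plucked_melody";
            case PodMenu::Community: return "quirky_synths"; // Popit shares this layer
        }
        return "sci_fi_textures"; // unreachable; keeps compilers happy
    };
    // Crossfade: the old menu's layer fades out as the new one fades in.
    mixer.SetActive(layerFor(previous), false);
    mixer.SetActive(layerFor(current), true);
}
```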
Every layer in a Vertical Layering composition can have a very distinct musical identity. When that layer is turned off, the entire mix changes in a noticeable way. The mix can be changed subtly…
… or it can be altered radically, with large scale activations or deactivations of layers. Even with these kinds of dramatic changes, the musical composition retains its identity. The same piece of music continues to play, and the player is conscious of continuing to hear the same musical composition, even though it has just altered in reaction to the circumstances of gameplay and the player’s progress.
In the Pod menu music system, the layers would change in reaction to the player’s menu navigation, which could be either slow and leisurely or brisk and purposeful. Layer activations and deactivations would occur with smooth crossfade transitions as the player moved from one menu to another. Now let’s take a look at a video showing some navigation through the Pod menu system, so we can hear how these musical layers behaved during actual gameplay:
As you can see, triggering unique musical layers for different portions of the menu system helps to define them. I hope you found this explanation of the Pod music interesting! If you attended GDC but missed my talk on the interactive music of LittleBigPlanet, you’ll be able to find the entire presentation posted as a video in the GDC Vault in just a few weeks. In the meantime, please feel free to add any comments or questions below!
I’d like to talk about a little personal milestone that just happened this week. My music video, “Little Big Planet 2 Soundtrack – Victoria’s Lab,” reached 200,000 views on YouTube. This is not an astounding view count – it isn’t viral by any means. However, it’s a lot more than I ever imagined when I first decided to create a music video for one of the songs I composed for the LittleBigPlanet 2 video game. I thought it might be interesting to talk briefly about how this video came about, and how the LittleBigPlanet game makes it possible to create a humorous music video like this one.
While there have been many other music videos made with the characters and creation tools of LittleBigPlanet 2, I think this one may be the first (and perhaps the only) music video made by a LittleBigPlanet composer. The track, “Victoria’s Lab,” was the first I’d composed for the LittleBigPlanet franchise, and I was tremendously excited about it. The track included an ambitious vocal arrangement for four singers. I sang all the parts in this fugato, a style of composition in which multiple independent melodies play simultaneously, echoing each other’s melodic content and then branching off into lots of variations. While this was essentially a serious vocal composition style, I performed it in a whimsical way, using syllables such as “la dee dah.” The whole thing was supported by an accompaniment that included string orchestra, circus organ, beat boxing, rock guitar, vocoder, and lots of other odd and eccentric instruments.
The real fun of a vocal composition like this one is watching it performed live. If you’ve ever seen this kind of vocal counterpoint performed live, you know how interesting it is to watch the melodies shifting from one vocalist to the next, while the others sing independent and related parts. I had this desire to create a visual experience for my Victoria’s Lab fugato… but how? I really didn’t think that anyone would want to watch me in splitscreen, overdubbing vocal parts into a microphone – that would be boring. The LittleBigPlanet aesthetic and sense of humor heavily influenced this composition, so wouldn’t it be more fun to see a group of Sackgirls singing together?
The developer of the LittleBigPlanet game, Media Molecule, did a wonderful job of creating both a fantastically entertaining gameplay experience and an imaginative, inspiring creation tool for making game levels. Moreover, they made it entirely possible to create short entertaining films with LittleBigPlanet, too. Characters called “Sackbots” have the ability to lip-sync as you speak into a microphone connected to the PS3. For my music video, I created and dressed up three Sackbot singers. Then I played back recordings of each of the vocal parts into the PS3’s microphone, one at a time, isolated from each other and from the rest of the composition. While each of the Sackbot singers recorded their individual parts for lip sync, I moved the Sackbot’s head and body using the PS3 controller, animating it to give a more realistic “performance.” I recorded each of their dramatic singing performances against a green screen backdrop, so that I could put them into any environment I liked. Sometimes I had them singing on a theatrical stage. Other times, they sang in square frames on screen, Brady-Bunch-style. Finally, I dressed up a Sackbot singer to look just like the character of Victoria Von Bathysphere from the LittleBigPlanet 2 game. Victoria got to sing the operatic, aria-like parts, which she performed with intensity and dramatic flair.
Since I couldn’t leave Sackboy himself out of the fun, I created scenes in which he vigorously headbanged and rocked out to the music. I had him dancing alongside large skeletons playing guitars. Finally, I let him run around the delightfully wacky environments created by Media Molecule for the Victoria’s Lab levels, where this music is actually heard in the LittleBigPlanet 2 video game. The levels created by Media Molecule are pure genius – a combination of joyous silliness and sublime artistry that comes together to form the perfectly delightful playground for Sackboy and all his friends.
I recorded all these performances and action sequences using the Hauppauge PVR system, which allows PS3 video to be fed into a computer and captured as video files. Then I edited the video on my computer using Final Cut Pro.
It was a bigger job than I thought it would be, but I had a great time creating the music video. I’m very happy that people have enjoyed my singing Sackgirls and headbanging Sackboy. 200,000 views may not be huge by YouTube standards, but it certainly makes me smile to think of that many people watching my Sackgirls sing. Making the music video was a great way for me to participate in the LittleBigPlanet philosophy of Play, Create, Share.
Over at the Designing Sound website, there’s an interesting article about loudness metering in video game middleware. While reading the article, I paused on a particular phrase: loudness war. The music industry is painfully familiar with this form of sonic warfare. It’s been an ongoing conflict for years, based on a pretty vicious cycle of reasoning that goes like this:
We want our music to be exciting.
Listeners get most excited when the music gets loud.
Therefore, we want our music mixes to feel loud.
Other people are mixing their music to be more exciting by being louder than ours.
To counteract this, we have to make our music as loud as theirs, or better yet, louder.
It’s an audio arms race: each side piles on the volume until everything overloads, then squashes its mixes with massive compression in order to avoid clipping. This leads to painfully flattened music that explodes from the speakers like a tsunami of sound, always pouring forth, never ebbing. It’s the ultimate apocalypse of the loudness war.
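One way to see that flattening in plain numbers is to compare a signal’s crest factor (the gap between its peaks and its average level) before and after heavy limiting. Here’s a quick illustrative sketch, with 12 dB of gain and a hard clipper standing in for a real mastering limiter:

```cpp
// Illustration of how hard limiting "flattens" audio: compare the crest
// factor (peak minus RMS, in dB) of a sine wave before and after being
// boosted about 12 dB and clipped hard at full scale.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

float CrestFactorDb(const std::vector<float>& x)
{
    float peak = 0.0f, sumSq = 0.0f;
    for (float s : x) {
        peak = std::max(peak, std::fabs(s));
        sumSq += s * s;
    }
    const float rms = std::sqrt(sumSq / x.size());
    return 20.0f * std::log10(peak / rms);
}

int main()
{
    const int N = 48000; // one second at 48 kHz
    std::vector<float> clean(N), squashed(N);
    for (int i = 0; i < N; ++i) {
        const float s = std::sin(2.0f * 3.14159265f * 440.0f * i / 48000.0f);
        clean[i] = s;
        // Crude "limiter": boost by 4x (about 12 dB), clip at full scale.
        squashed[i] = std::max(-1.0f, std::min(1.0f, s * 4.0f));
    }
    std::printf("clean sine:    %.1f dB crest factor\n", CrestFactorDb(clean));
    std::printf("squashed sine: %.1f dB crest factor\n", CrestFactorDb(squashed));
}
```

The clean sine comes out around 3 dB of crest factor, while the squashed version lands well under 1 dB. That shrinking gap between peak and average is precisely that always-pouring-forth, never-ebbing quality.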
But does it actually feel loud?
At the Electronic Entertainment Expo, another kind of loudness war is about to break out once again in a couple of weeks. The big game publishers mount some of the showiest booths each year, complete with enormous video screens, stacks of speakers and rumbling subwoofers installed under the floor. The result can feel pretty thunderous. The noise levels were so migraine-inducing that in 2006 the Entertainment Software Association began enforcing a cap on loudness levels, with fines levied against transgressors. Nevertheless, from my perspective as an annual attendee, the war seems to rage on. Entering one of the big E3 booths sometimes makes me feel like the “Maxell Man” from the classic commercial, in which the power of sound is portrayed as the driving force of a wind tunnel. It hits me with a solid whomp, and I can almost imagine it whipping my hair back. Whoosh.
Despite this, I find that after only a little while wandering the expo floor, none of the sounds feel particularly powerful or impressive anymore. Sure, it’s all still quite loud, but it’s also all the same kind of loud. Likewise with the sonic arms race in the music industry – when every song strives to push itself into listeners’ faces with relentless sonic intensity, it becomes numbing. Our senses deaden, and the music blends into a loud but featureless white noise.
How does this relate to our work as game composers, sound designers and audio engineers? The article at Designing Sound briefly discussed the lack of clear standards for levels of loudness in video games, and the loudness war that has resulted as game developers attempt to outdo each other in making their games sonically exciting. Like the big booths at E3, we’re all trying to shout each other down. As a game composer, I’m always aware of the place my music occupies in the sonic landscape of the game. If the game is loud, should my music be loud too? And will this turn into a mush of sonic white noise? How can I prevent this, and still create that level of excitement that is the holy grail of the loudness war?
My brilliant music producer Winnie Waldron has a saying when it comes to this subject. Since we tend to communicate in our own shorthand, I’ll need to translate it for you. When talking about loudness, she says, “Contrast, contrast, contrast! Yin and yang, yin and yang!!” What does this mean? We only know a thing if we also know its opposite. Therefore, we only recognize something as loud when we have experienced something soft in close proximity to it. This holds for all manner of musical effects, from tempo and pitch elevation to texture and rhythmic devices, but for loudness it’s particularly meaningful. In order to create an epically loud moment, we have to preface it with something hushed. Really effective popular music exploits this phenomenon by incorporating unexpected pauses, stutters and breakdowns that challenge expectations and create those hushed moments. In game audio, moments of stillness can be our subtle weapons in the loudness war.