Welcome! I’m game music composer Winifred Phillips, and just before the holidays I was ecstatic to learn that my music for the Jurassic World Primal Ops video game was nominated for a Society of Composers & Lyricists Award! In all the excitement following the announcement of the SCL Awards nominees, many budding game composers reached out to me for advice regarding their own career trajectories. I found myself referring many of them to articles I’ve written in this space over the years – articles covering the widely diverse topics that interest us as game composers.
Since 2014, this series of articles has explored the evolving state of our industry and the tools and techniques that can help us make great game music. Over time, these articles have become a fairly deep repository of information. After pointing so many newcomers to entries in this lengthy series, I’ve realized that the collection has become quite difficult to navigate – partly because of the many topics that have been explored over the years.
Discussions have included many of the creative challenges that make our profession unique. Through an examination of the structure of interactive music systems, numerous dynamic composition techniques have been investigated. Along the way, we’ve pondered how game music composition has been accomplished in the past, and where it might be going in the future. A profusion of resources has been collated in these articles – including the best methods to find gigs, and awesome networking opportunities that can benefit a game composer’s career. There have also been examinations of resources that can keep us inspired and creatively energized.
Together, these articles constitute a living document about game music composition. However, they definitely need an index at this point. With that in mind, I’m offering this ‘big index’ of articles I’ve shared over the years, organized by subject matter. We can navigate around this index using the following menu:
On April 6th I was honored to give a lecture at the Thomas Jefferson Building of the Library of Congress in Washington DC (pictured right). As a video game composer, I’d been invited to speak by the Music Division of the Library of Congress. I’d be delivering the concluding presentation during their premiere event celebrating popular video game music. My lecture would be the very first video game music composition lecture ever given at the Library of Congress. I was both honored and humbled to accept the invitation and have my lecture included in the 2018-2019 season of concerts and symposia from the Library of Congress.
In my presentation, I included many topics that I’ve written about in previous articles. My lecture topics included horizontal resequencing, vertical layering, and interactive MIDI-based composition. I explored the various roles that music has played in famous games from the earliest days of game design (like Frogger and Ballblazer). I also discussed how music has been implemented in some of the awesome games from the modern era (like one of my own projects, Assassin’s Creed Liberation).
My lecture drew a full house in the Whittall Pavilion at the Library of Congress. The audience gave me a warm welcome, along with lots of great questions at the conclusion of my lecture. Afterwards, the discussion continued during a book signing event that was kindly hosted by the Library of Congress shop. During the signing, I was pleased to sign copies of my book A Composer’s Guide to Game Music, and I got to talk personally with quite a few audience members. Such an engaging and insightful crowd! It was a pleasure getting to know these lovely people, and I really enjoyed the lively conversation – I had the best time!
Welcome back to this three article series that’s bringing together the ideas that were discussed in five different GDC 2017 audio talks about interactive music! These five speakers explored discoveries they’d made while creating interactivity in the music of their own game projects. We’re looking at these ideas side-by-side to broaden our viewpoint and gain a sense of the “bigger picture” when it comes to the leading-edge thinking for music interactivity in games. We’ve been looking at five interactive music systems discussed in these five GDC 2017 presentations:
In the first article, we examined the basic nature of these interactive systems. In the second article, we contemplated why those systems were used, with some of the inherent pros and cons of each system discussed in turn. So now, let’s get into the nitty gritty of tools and tips for working with such interactive music systems. If you haven’t read parts one and two of this series, please go do so now and then come back:
Welcome back to our three article series dedicated to collecting and exploring the ideas that were discussed in five different GDC 2017 audio talks about interactive music! These five speakers shared ideas they’d developed in the process of creating interactivity in the music of their own game projects. We’re looking at these ideas side-by-side to cultivate a sense of the “bigger picture” when it comes to the leading-edge thinking for music interactivity in games. In the first article, we looked at the basic nature of five interactive music systems discussed in these five GDC 2017 presentations:
Okay, so let’s now contemplate some simple but important questions: why were those systems used? What was attractive about each interactive music strategy, and what were the challenges inherent in using those systems?
Welcome to the third installment in our series on the fascinating possibilities created by virtual reality motion tracking, and how the immersive nature of VR may serve to inspire us as video game composers and afford us new and innovative tools for music creation. As modern composers, we work with a lot of technological tools, as I can attest from the studio equipment that I rely on daily (pictured left). Many of these tools communicate with each other by virtue of the Musical Instrument Digital Interface protocol, commonly known as MIDI – a technical standard that allows music devices and software to interact.
In order for a VR music application to control and manipulate external devices, the software must be able to communicate by way of the MIDI protocol – and that’s an exciting development in the field of music creation in VR!
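To make the protocol a little more concrete, here’s a minimal Python sketch (my own illustration, not drawn from any particular VR application) of how MIDI channel messages are assembled at the byte level: a Note On message is a status byte (0x90 plus the channel number) followed by a note number and a velocity, each in the range 0–127.

```python
# Minimal sketch of raw MIDI channel messages (MIDI 1.0 byte layout).
# Status byte 0x90 | channel = Note On; 0x80 | channel = Note Off.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw 3-byte MIDI Note On message."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Build a raw MIDI Note Off message (velocity 0 by convention)."""
    return bytes([0x80 | channel, note, 0])

# Middle C (note 60) on channel 1 at moderate velocity:
msg = note_on(0, 60, 64)
print(msg.hex())  # -> 903c40
```

In practice, a VR application would hand bytes like these to a MIDI driver or virtual MIDI port, and any connected synth, sampler, or DAW would respond just as it would to a hardware keyboard.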
This series of articles focuses on what VR means for music composers and performers. In previous installments, we’ve had some fun exploring new ways to play air guitar and air drums, and we’ve looked at top VR applications that provide standalone virtual instruments and music creation tools. Now we’ll be talking about the most potentially useful application of VR for video game music composers – the ability to control our existing music production tools from within a VR environment.
We’ll explore three applications that employ MIDI to connect music creation in VR to our existing music production tools. But first, let’s take a look at another, much older gesture-controlled instrument that is in some ways quite reminiscent of these motion-tracking music applications for VR:
Welcome to part two of our ongoing exploration of some interesting possibilities created by the motion tracking capabilities of VR, and how this might alter our creative process as video game composers.
In part one we discussed how motion tracking lets us be awesome air guitarists and drummers inside the virtual space. In this article, we’ll be taking a look at how the same technology will allow us to make interesting music using more serious tools that are incorporated directly inside the VR environment – musical instruments that exist entirely within the VR ‘machine.’
Our discussion to follow will concentrate on three software applications: Soundscape, Carillon, and Lyra. Later, in the third article of this ongoing series, we’ll take a look at applications that allow our VR user interfaces to harness the power of MIDI to control some of the top music devices and software that we use in our external production studios. But first, let’s look at the ways that VR apps can function as fully-featured musical instruments, all on their own!
Let’s start with something simple – a step sequencer with a sound bank and signal processing tools, built for the mobile virtual reality experience of the Samsung Gear VR.
I got a chance to demo the Samsung Gear VR during the Audio Engineering Society Convention in NYC last year, and while it doesn’t offer the best or most mind-blowing experience in VR (such as what we can experience from products like the famous Oculus Rift), it does achieve a satisfying level of immersion. Plus, it’s great fun! The Soundscape VR app was built for Samsung Gear VR by developer Sander Sneek of the Netherlands. It’s a simple app designed to let users create dance loops. It offers three instruments from a built-in electro sound library, a pentatonic step sequencer for building rhythm and tone patterns within the loops, and a collection of audio signal processing effects that let the user warp and mold the sounds as the loops progress, adding variety to the performance.
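To illustrate how a pentatonic step sequencer of this kind maps steps to pitches, here’s a short Python sketch. The scale choice, step count, and function names are my own assumptions for illustration – not Soundscape’s actual implementation.

```python
# Toy pentatonic step sequencer: each step holds a scale-degree index
# (0-4) or None for a rest; steps are mapped to MIDI note numbers.

PENTATONIC = [0, 2, 4, 7, 9]  # C major pentatonic, in semitones above the root

def step_to_note(step_value, root=60):
    """Map a scale-degree index to a MIDI note; None means a rest."""
    if step_value is None:
        return None
    return root + PENTATONIC[step_value % len(PENTATONIC)]

# One bar of 16 steps: a simple repeating pattern with rests
pattern = [0, None, 2, None, 4, None, 3, 1] * 2
notes = [step_to_note(s) for s in pattern]
```

Constraining the grid to a pentatonic scale is a clever design choice for a casual app: any combination of active steps tends to sound musical, so users can’t really play a “wrong” note.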
Since I’ve been working recently on music for a Virtual Reality project (more info in the coming months), I’ve been thinking a lot about VR technology and its effect on the creative process. Certainly, VR is going to be a great environment in which to be creative and to perform tasks with enhanced focus, according to this article from the VR site SingularityHub. I’ve written in this blog before about the role that music and sound will play in the Virtual Reality gaming experience. It’s clear that music will have an impact on the way in which we experience VR, not only during gaming experiences, but also when using the tools of VR to create and be productive. With that in mind, let’s consider whether the opposite may also be true – will VR impact the way in which we experience music, not only as listeners, but also as video game composers?
Simple VR technologies like the popular Google Cardboard headset can be a lot of fun – as I personally experienced recently (photo to the left). However, they offer only rudimentary visuals, omitting some of the most compelling aspects of the VR experience. When motion tracking (beyond simple head movement) is added to the mix, the potential of VR explodes. Over the next three articles, we’ll be exploring some interesting possibilities created by the motion tracking capabilities of VR, and how this might alter our creative process. In the first article, we’ll have some fun exploring new ways to play air guitars and air drums in the VR environment. In the second article, we’ll take a look at ways to control virtual instruments and sound modules that are folded into the VR software. And finally, in the third article we’ll explore the ways in which VR motion tracking is allowing us to immersively control our existing real-world instruments using MIDI. But first, let’s take a look at the early days of VR musical technology!
I was pleased to give a talk about composing music for games at the 2016 Game Developers Conference (pictured left). GDC took place this past March in San Francisco – it was an honor to be a part of this year’s audio track, which offered a wealth of awesome educational sessions for game audio practitioners. So much fun to see the other talks and learn about what’s new and exciting in the field of game audio! In this blog, I want to share some info that I thought was really interesting from two talks that pertained to the audio production side of game development: composer Laura Karpman’s talk “Composing Virtually, Sounding Real” and audio director Garry Taylor’s talk on “Audio Mastering for Interactive Entertainment.” Both sessions had some very good info for video game composers who may be looking to improve the quality of their recordings. Along the way, I’ll also be sharing a few of my own personal viewpoints on these music production topics, and I’ll include some examples from one of my own projects, the Ultimate Trailers album for West One Music, to illustrate ideas that we’ll be discussing. So let’s get started!
I recently read a great article by Bernard Rodrigue of Audiokinetic in Develop Magazine, heralding the return of MIDI to the field of video game music. It was a very well-written article, filled with hopeful optimism about the potential of MIDI to add new musical capabilities to interactive video game scores, particularly in light of the memory and CPU resources of modern games consoles.
Four years ago, Microsoft Sound Supervisor West Latta wrote for Shockwave-Sound.com that “we may see a sort of return to a hybrid approach to composing, using samples and some form of MIDI-like control data… the next Xbox or Playstation could, in fact, yield enough RAM and CPU power to load a robust (and highly compressed) orchestral sample library.”
So, it seems that the game audio sector has been anticipating a return to MIDI for a while now (I wrote at length about the history and possible future of MIDI in my book, A Composer’s Guide to Game Music). The question is – has the current generation of video game consoles evolved to the point that a quality orchestral sample library could be loaded and used by MIDI within a modern video game? So far, I haven’t come across an answer to this question, and it’s a very intriguing mystery.
Certainly, the availability of an orchestral sample library in a MIDI-based interactive video game score would depend on factors that are not all tied to the technical specs of the hardware. Would the development teams be willing to devote that amount of memory to a quality orchestral sample library? As games continue to participate in a visual arms race, development teams devote available hardware horsepower to pixels and polygons… so, would the music team be able to get a big enough slice of that pie to make a high-quality orchestral MIDI score possible?
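For a rough sense of the trade-off, here’s a back-of-the-envelope Python sketch. Every figure in it – instrument counts, sample lengths, compression ratio, and the RAM budget – is a hypothetical assumption for illustration, not data from any real console or library.

```python
# Back-of-the-envelope estimate of an in-game orchestral sample
# library's RAM footprint, using entirely hypothetical figures.

def library_size_mb(instruments=20, samples_per_instrument=120,
                    avg_seconds=3.0, sample_rate=48_000,
                    bytes_per_sample=2, compression_ratio=0.4):
    """Estimate the library footprint in megabytes for mono samples."""
    raw_bytes = (instruments * samples_per_instrument * avg_seconds
                 * sample_rate * bytes_per_sample)
    return raw_bytes * compression_ratio / (1024 ** 2)

budget_mb = 512  # a hypothetical slice of console RAM granted to music
needed = library_size_mb()
print(f"{needed:.0f} MB needed; fits budget: {needed <= budget_mb}")
```

Even with these made-up numbers, the point stands: the feasibility question is less about raw capacity than about how large a slice of shared memory the audio team can negotiate away from graphics.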
I’m keeping my eyes open for developments in this area. Certainly, the return of MIDI could be a game changer for composers of interactive music, but only if the musical standards remain high, both in terms of the music compositions and the quality of the instruments used within them. Let me know in the comments if you’ve heard any news about the great MIDI comeback!
When I talked about some basic techniques for achieving a more organic sound with virtual instruments, I didn’t mention any mixing considerations. Since this is a complex subject that goes far beyond the scope of a single blog, I’ll probably be returning to it several times… but let’s start with a general overview, and some thoughts about the orchestral recording environment. Mixing for a virtual orchestra can be a highly contentious subject, with controversy surrounding nearly every topic of conversation, from reverb to volume levels to panning. It’s good to remember, though, that there is a pretty broad range of recording environments and mixing approaches in live orchestral tracks, which means that there can’t (and shouldn’t) be just one “correct” approach when working with virtual orchestras.
Some live orchestral recordings take the studio approach, in that they are fairly dry and close-mic’d in a small recording environment that’s buffered to eliminate acoustic artifacts, leaving only the original sound. Other orchestral recording sessions are clearly conducted in a very large space such as a concert hall, which gives the sensation of both distance and complex reverberation, reflections and tonal coloration caused by the unique properties of the space. Both the studio and the concert hall environments for orchestral recordings are entirely legitimate and afford the composer a set of advantages and disadvantages. The concert hall environment provides a richness of tone and texture from the acoustic properties of the room, but instruments in this space can sound distant and small performance details may not come through clearly. The studio approach allows the instruments of the orchestra to be captured with greater sonic detail and intimacy, but the dryness of the space may have a detrimental effect on the ability of the orchestral sections to blend properly, requiring artificial reverb to be applied during the mixing process.
What does this mean for virtual orchestras? Well, before we think about the recording environment that we’d like to simulate, we have to evaluate our orchestral sample libraries in terms of the environments in which they were originally recorded. Are they wet or dry? Some libraries are reverberant to the point of sounding dripping wet. Others are dry as a bone. This can make it difficult to use these libraries in tandem, but I usually don’t let this deter me. We can apply reverb to the dry instrumental samples so that they match the acoustic properties of the wet ones. I find that a process of trial-and-error can yield satisfying results here. However, there’s no way to completely remove the reverb from an orchestral library that was recorded wet… so what if our hearts were set on that intimate studio sound? Well, there are ways to address this issue. For instance, we can assume that our orchestral recording was captured in a large space, but that many microphones were positioned in tight proximity to the important players so that the subtle nuances of their performances would come through. When we layer our dry instruments with our wet ones, we can send some of the dry signal out for reverb processing (to account for the larger space) and mix those echoes and reflections with the reverb tail found naturally in the wet recordings. This will allow the dry instrument groups to sit in the larger space, but still feel intimate.
Now, what do we do about the orchestral sections that still feel purely wet? They’ll likely sound quite distant and washy. We can counteract this by layering dry instrumental soloists into these sections, sending their signal out for reverb processing as we did before. This can work very well for section leaders such as the first violin. When I’m applying this technique, I’ll sometimes evaluate the number of players in wet orchestral sections, and if it would be realistically feasible, I will boost this number by adding a dry chamber section. For instance, I might add a dry chamber violin section consisting of four players to a very wet first violin orchestral section consisting of eleven players. This will give me a resulting first violin section of fifteen players, which is large but not unreasonable. I’ll add some reverb to the dry instruments so that they’ll give the impression that they exist in the same space as the others, but are more closely mic’d.
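The layering idea above can be sketched in simplified signal math. In this toy Python example (my own illustration – the one-tap “reverb” and the send level are crude stand-ins for a real reverb plugin and bus routing), a dry close-mic’d part keeps its direct signal while a portion of it is sent to reverb, so the result can sit alongside a section that was recorded wet.

```python
# Toy wet/dry layering: output = dry part + reverb send of dry + wet part.

def toy_reverb(signal, delay=4, decay=0.5):
    """Crude one-tap 'reverb': a delayed, attenuated echo (wet signal only)."""
    out = [0.0] * (len(signal) + delay)
    for i, s in enumerate(signal):
        out[i + delay] += s * decay
    return out

def blend(dry, wet, send=0.4):
    """Layer a dry section over a wet one, sending some dry signal to reverb."""
    rev = toy_reverb(dry)
    n = max(len(dry), len(wet), len(rev))
    pad = lambda x: x + [0.0] * (n - len(x))
    dry, wet, rev = pad(dry), pad(wet), pad(rev)
    return [d + send * r + w for d, r, w in zip(dry, rev, wet)]
```

The key detail the sketch captures is that the dry signal passes through untouched (preserving intimacy and performance detail) while only its reverb send shares the acoustic space of the wet recording – exactly the effect we want from the chamber-section trick described above.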
These are just a few ideas on how to reconcile wet and dry instrumental recordings. As always, experimentation and close listening will be our best guide in determining if these techniques are achieving the desired results. In the future, I’ll talk a bit more about other mixing concerns, such as panning, level balancing, and mastering techniques. Hope you enjoyed this blog entry, and please share your own methods and questions in the comments below!