LittleBigPlanet 3 and Beyond: Taking Your Score to Vertical Extremes

Yesterday I shared some info about my upcoming Audio Bootcamp presentation on Tuesday, March 3rd at the Game Developers Conference in San Francisco, and today I'd like to share some information about the second presentation I'll be giving during the main conference. On Friday, March 6th at 10am, I'll have the pleasure of giving an Audio Track presentation about the interactive music system of the LittleBigPlanet franchise. Here is the official description of my conference session from the GDC 2015 Schedule:

“LittleBigPlanet 3 and Beyond: Taking Your Score to Vertical Extremes” presents down-to-earth strategies for the design and utilization of a vertical layering music system. Composer Winifred Phillips’ credits include six LittleBigPlanet games (LittleBigPlanet 3, LittleBigPlanet 2, LittleBigPlanet Vita, LittleBigPlanet Cross Controller, LittleBigPlanet Karting, LittleBigPlanet Toy Story). Phillips will discuss her music from the LittleBigPlanet franchise — a series that features one of the most complex vertical layering systems in the field of game audio. Intense challenges often lead to inventive solutions. By virtue of the extreme example embodied by the LittleBigPlanet system, Phillips will share the simple approaches that solved some of the common problems associated with vertical construction. This discussion will be augmented by musical examples from a dozen interactive compositions that Phillips created for LittleBigPlanet games. Attendees will learn techniques to avoid problems in any vertical layering system, regardless of whether that system is simple or extreme.

Takeaway

Through detailed examples from the LittleBigPlanet franchise, Phillips will provide a step-by-step analysis of the process that resulted in a tightly-constructed, six-layer interactive music system. This discussion will provide attendees with practical knowledge that can be applied to their own projects.
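For readers unfamiliar with the technique, vertical layering runs several synchronized stems of a single composition at once and mixes them according to game state. Here is a minimal hypothetical sketch of a six-layer additive setup; the layer names and intensity mapping are my own invention, not Phillips' actual system:

```python
# Hypothetical sketch of a six-layer vertical music system.
# Layer names and the intensity mapping are invented for illustration;
# a real engine would drive this from game state and fade smoothly.

LAYERS = ["drums", "bass", "pads", "melody", "arps", "choir"]

def active_layers(intensity: float) -> list[str]:
    """Additively enable one more stem per step of gameplay intensity.

    All six stems play in sync; intensity (0.0-1.0) decides which
    of them are audible at any given moment.
    """
    count = max(1, min(len(LAYERS), 1 + int(intensity * (len(LAYERS) - 1))))
    return LAYERS[:count]

print(active_layers(0.0))  # calm gameplay: base layer only
print(active_layers(1.0))  # full intensity: all six layers audible
```

In practice the layers would crossfade over a second or two rather than snapping on and off, but the core idea is the same: one piece of music, many synchronized stems, and a mix that follows the gameplay.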

Intended Audience

This session is for anyone interested in game scoring, interactive music systems and game music implementation strategies. Simple approaches to vertical layering will be accessible to attendees at all levels, while more advanced attendees will appreciate the innovative solutions applied to the complex vertical music system of the LittleBigPlanet franchise.

So, if you’ll be attending GDC in San Francisco on March the 6th, I hope you’ll come to my session!

GDC Audio Bootcamp

The Game Developers Conference is nearly here!  It’ll be a fantastic week of learning and inspiration from March 2nd – March 6th.  On Tuesday March 3rd from 10am – 6pm, the GDC Audio Track will be hosting the ever-popular GDC Audio Bootcamp, and I’m honored to be an Audio Bootcamp speaker this year!

This will be the 14th year for the GDC Audio Bootcamp, and I'm pleased to join the nine other speakers who will present this year:

  • Michael Csurics, Voice Director/Writer, The Brightskull Entertainment Group
  • Damian Kastbauer, Technical Audio Lead, PopCap Games
  • Mark Kilborn, Audio Director, Raven Software
  • Richard Ludlow, Audio Director, Hexany Audio
  • Peter McConnell, Composer, Little Big Note Music
  • Daniel Olsén, Audio, Independent
  • Winifred Phillips, Composer, Generations Productions LLC
  • Brian Schmidt, Founder, Brian Schmidt Studios
  • Scott Selfon, Principal Software Engineering Lead, Microsoft
  • Jay Weinland, Head of Audio, Bungie Studios

We’ll all be talking about creative, technical and logistical concerns as they pertain to game sound.  My talk will be from 11:15am to 12:15pm, and I’ll be focusing on “Advanced Composition Techniques for Adaptive Systems.”

Here’s a description of my Audio Bootcamp talk:

Interactive music technologies have swept across the video game industry, changing the way that game music is composed, recorded, and implemented. Horizontal Resequencing and Vertical Layering have changed the way that recorded music is integrated into games, while MIDI, MOD and generative models have changed the landscape of music data in games. With all these changes, how do game composers, audio directors, sound designers and audio engineers address these unique challenges? This talk will present an overview of today's interactive music techniques, including numerous strategies for the deployment of successful interactive music structures in modern games. Included in the talk: Vertical Layering in additive and interchange systems, how resequencing methods benefit from the use of digital markers, and how traditionally linear music can be integrated into an interactive music system.
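The additive/interchange distinction mentioned in the description can be sketched in a few lines. This is a hypothetical illustration with invented layer names and game states, not code from the talk:

```python
# Hypothetical sketch contrasting two vertical layering styles.
# Additive: extra layers stack on top of an always-playing base.
# Interchange: one variant layer is swapped for another as state changes.

def additive_mix(base: list[str], extras: list[str], intensity: int) -> list[str]:
    """Stack `intensity` extra layers on top of the always-playing base."""
    return base + extras[:intensity]

def interchange_mix(base: list[str], variants: dict[str, str], state: str) -> list[str]:
    """Keep the base, but swap in the single variant matching the game state."""
    return base + [variants[state]]

base = ["drums", "bass"]
print(additive_mix(base, ["pads", "lead"], 1))
# ['drums', 'bass', 'pads']
print(interchange_mix(base, {"explore": "flute", "combat": "brass"}, "combat"))
# ['drums', 'bass', 'brass']
```

The practical difference is mix density: an additive system gets louder and thicker as layers pile up, while an interchange system keeps a roughly constant density and changes character instead.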

Right after my Bootcamp presentation, all the Audio Bootcamp presenters and attendees will head off to the ever-popular Lunchtime Surgeries.  No, the attendees won’t actually be able to crack open the minds of the presenters and see what’s going on in there, but as a metaphor, it does represent the core philosophy of this lively event.  The Lunchtime Surgeries offer attendees a chance to sit with the presenters at large roundtables and ask lots of questions.  It’s one of the most popular portions of the bootcamp, and I’ll be looking forward to it!

If you’ll be attending the GDC Audio Track, then I highly recommend the Audio Bootcamp on Tuesday, March 3rd.  Hope to see you there!

Video excerpt from my game music talk at the 137th Audio Engineering Society Convention

I was tremendously honored to speak at the Audio Engineering Society’s convention last month, and I thought I’d share a video excerpt from my speech, which was entitled “Effective Interactive Music Systems: The Nuts and Bolts of Dynamic Musical Content.”  Many thanks to Steve Martz and Bob Lee at the Audio Engineering Society for organizing an outstanding event!

More about the AES:

The Audio Engineering Society is the only professional society devoted exclusively to audio technology. Founded in the United States in 1948, the AES has grown to become an international organization that unites audio engineers, creative artists, scientists and students worldwide by promoting advances in audio and disseminating new knowledge and research. Currently, over 14,000 members are affiliated with more than 75 AES professional sections and more than 95 AES student sections around the world. Conventions, which include scientific presentations, student activities, workshops, and exhibitions, are held annually both in the US and Europe. Additional conferences and regional summits are held periodically throughout Latin America, Asia, Europe, and North America.

Talk Description:

Effective Interactive Music Systems: The Nuts and Bolts of Dynamic Musical Content
Interactive methodologies have profoundly impacted the way that music is recorded, mixed and integrated in video games. From horizontal resequencing and vertical layering techniques for the interactive implementation of music recordings, to MIDI and generative systems for the manipulation of music data, the structure of game music poses serious challenges both for the composer and for the game audio engineer. This talk will examine the procedures for designing interactive music models and implementing them effectively in video games. The talk will include comparisons between additive and interchange systems in vertical layering, the lessons that can be learned from conventional stem mixing, the use of markers for switching between segments, and how to disassemble a traditionally composed piece of music for use within an interactive system.
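As a rough, hypothetical sketch of the marker idea (the timings are invented, not from the talk): a resequencing system typically delays a requested segment change until the next beat-aligned marker, so the transition lands on a musical boundary instead of mid-phrase:

```python
# Hypothetical sketch of marker-based segment switching in
# horizontal resequencing. Marker times are invented for illustration.

import bisect

# Beat-aligned transition markers (in seconds) embedded in the
# currently playing segment.
MARKERS = [0.0, 2.0, 4.0, 6.0, 8.0]

def next_transition_point(playhead: float) -> float:
    """Return the first marker at or after the playhead position.

    When gameplay requests a new segment, the switch is scheduled for
    this point rather than happening immediately.
    """
    i = bisect.bisect_left(MARKERS, playhead)
    return MARKERS[i] if i < len(MARKERS) else MARKERS[-1]

print(next_transition_point(3.1))  # 4.0: wait 0.9s, then switch segments
```

The trade-off is responsiveness versus musicality: denser markers react to gameplay faster, while sparser markers (say, one per phrase) keep transitions more graceful.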

GameSoundCon Industry Survey Results

As the GameSoundCon conference draws closer, I thought I’d talk a little bit about the Game Audio Industry Survey that was designed by GameSoundCon Executive Producer Brian Schmidt.  The survey was prepared in response to the broader “Annual Game Developer Salary Survey” offered by industry site Gamasutra.  Since the Gamasutra survey suffered from skewed results for game audio compared to other game industry sectors (owing to lower participation from the game audio community), Schmidt set out to obtain more reliable results by adopting a different approach.

Instead of focusing on the yearly salaries/earnings of audio professionals, the survey concentrated on the money generated by the music/sound of individual projects. Each respondent could fill out the survey repeatedly, entering data for each game project that the respondent had completed during the previous year.  The final results of the survey are meant to reflect how game audio is treated within different types of projects, and the results are quite enlightening, and at times surprising.

The financial results include both small-budget indie games from tiny teams and huge-budget games from behemoth publishers, so there is a broad range in those results. Since this is the first year that the GameSoundCon Game Audio Industry Survey has been conducted, we don't yet have data from a previous year with which to compare these results, and it would be very interesting to see how the data shifts if the survey is conducted again in 2015.

Some very intriguing data comes from the section of the survey that provides a picture of who game composers are and how they work.  According to the survey, the majority of game composers are freelancers, and 70% of game music is performed by the freelance composer alone.  56% of composers are also acting as one-stop-shops for music and sound effects, likely providing a good audio solution for indie teams with little or no audio personnel of their own.

A surprising and valuable aspect of the survey is to be found in the audio middleware results, which show that the majority of games use either no audio middleware at all, or opt for custom audio tools designed by the game developer.  This information is quite new, and could be tremendously useful to composers working in the field.  While we should all make efforts to gain experience with audio middleware such as FMOD and Wwise, we might keep in mind that there may not be as many opportunities to practice those skills as had been previously anticipated.  Again, this data might be rendered even more meaningful by the results of the survey next year (if it is repeated), to see if commercial middleware is making inroads and becoming more popular over time.

Expanding upon this subject, the survey reveals that only 22% of composers are ever asked to do any kind of music integration (in which the composer assists the team in implementing music files into their game). It seems that for the time being, this task is still falling firmly within the domain of the programmers on most game development teams.

The survey was quite expansive and fascinating, and I’m very pleased that it included questions about both middleware and integration.  If GameSoundCon runs the survey again next year, I’d love to see the addition of some questions about what type of interactivity composers may be asked to introduce into their musical scores, how much of their music is composed in a traditionally linear fashion, and what the ratio of interactive/adaptive to linear music might be per project.  I wrote rather extensively on this subject in my book, and since I’ll also be giving my talk at GameSoundCon this year about composing music for adaptive systems, I’d be very interested in such survey results!

The GameSoundCon Game Audio Industry Survey is an invaluable resource, and is well worth reading in its entirety.  You’ll find it here.  I’ll be giving my talk on “Advanced Composition Techniques for Adaptive Systems” at GameSoundCon at the Millennium Biltmore Hotel in Los Angeles on Wednesday, October 8th.

Many thanks to Brian Schmidt / GameSoundCon for preparing this excellent survey!

A Composer’s Guide to Game Music – Horizontal Resequencing, Part 1

Here’s another installment of a four-part series of videos I produced as a supplement to my book, A Composer’s Guide to Game Music. This video focuses on the Horizontal Resequencing model employed in the Speed Racer video game, providing some visual illustration for this interactive music composition technique. The video demonstrates concepts that are explored in depth in my book, beginning on page 188.
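For readers new to the model, horizontal resequencing splits a score into interchangeable segments and chooses which one plays next as the game unfolds. A minimal hypothetical sketch (the segment names and transition map are invented, not taken from the Speed Racer score) might track which segments are allowed to follow one another:

```python
# Hypothetical sketch of horizontal resequencing: the score is split
# into segments, and gameplay picks which segment plays next.
# Segment names and the transition map are invented for illustration.

import random

TRANSITIONS = {
    "intro":   ["verse_a", "verse_b"],
    "verse_a": ["verse_b", "chorus"],
    "verse_b": ["verse_a", "chorus"],
    "chorus":  ["verse_a", "verse_b", "outro"],
    "outro":   [],  # terminal segment: nothing may follow it
}

def next_segment(current, rng):
    """Choose the next segment from the options allowed after `current`."""
    options = TRANSITIONS[current]
    return rng.choice(options) if options else None

# Walk one possible ordering of the score (seeded for repeatability).
rng = random.Random(0)
seg = "intro"
for _ in range(12):  # cap the walk so the demo always terminates
    print(seg)
    nxt = next_segment(seg, rng)
    if nxt is None:
        break
    seg = nxt
```

In a real implementation the choice would be driven by gameplay (position on the track, race standing, and so on) rather than a random generator, but the structure is the same: a graph of segments with musically valid transitions between them.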

A Composer’s Guide to Game Music – Vertical Layering, Part 2

Here’s part two of a four-part series of videos I produced as a supplement to my book, A Composer’s Guide to Game Music. This video demonstrates concepts that are explored in depth in my book, beginning on page 200.  Expanding on Part One’s discussion of the Vertical Layering employed in The Maw video game, this video provides some visual illustration for the interactive music composition techniques that were implemented in the video game LittleBigPlanet 2: Toy Story.