139th AES Convention for the Game Music Composer

I’m happy to share that I’ll be a speaker again this year at the Audio Engineering Society’s annual convention!  Last year, the convention took place at the Los Angeles Convention Center – a familiar stomping ground from my many visits to the famous Electronic Entertainment Expo over the years.  However, this year will take me somewhere entirely new: the Jacob Javits Center in New York City!

I imagine that most futuristic metropolitan buildings look best when the sky is purple.  Since it’s impossible to capture natural purple skies in the wild, I assume that someone helpfully photoshopped a purple firmament for this promo picture.  The convention center looks very impressive, and I’m looking forward to seeing it in person!

Attending last year’s AES in Los Angeles was a wonderful experience, and I was truly honored to have been chosen as a speaker for the event!  At last year’s AES, I gave an overview presentation about interactive music in video games – the talk was an expansion of the interactive music sections of my book, A Composer’s Guide to Game Music.  Here’s a video clip from that presentation, entitled “Effective Interactive Music Systems: The Nuts and Bolts of Dynamic Musical Content.”  The entire talk is available for download from Mobiltape.com.

At this year’s AES, I’ll be speaking more specifically about my role as a member of the music composition team for the LittleBigPlanet franchise.  It will be fun to share my experiences as part of that wonderful music team at Sony Computer Entertainment Europe, and I’m looking forward to exploring some of the interactive music techniques of the LittleBigPlanet franchise!

This is a photo from the LittleBigPlanet 3 display in the Sony booth at E3 2014.  My presentation at the Jacob Javits Center will include lots of my music from the LittleBigPlanet franchise, and Sackboy will be making many appearances!

I’m also looking forward to seeing what’s new and hot in audio gear on the AES exhibit floor.  Last year’s show floor was crowded with humongous mixing desks, along with enough glittering gear to make a full-grown audio engineer cry tears of joy.  I anticipate a similar spectacle this year.  In addition to the expo floor, the convention will include a comprehensive program of presentations, panels and workshops, and the popular Live Sound Expo will be returning this year to spread knowledge about audio solutions for live events.

On a more personal note – prior to attending my first AES, I read an article from The Onion (the world’s top news satire publication) which led me to believe that, as an audio engineer attending such a convention, I would be able to gather with my fellow audio professionals and enjoy an in-depth discussion of our ponytails (warning: adult language).  I can report that this did not happen last year… which was a shame, because I made sure I wore a ponytail for the occasion.  😉

I submit the following photo as proof:

Despite this minor disappointment, I had an awesome time at last year’s AES, and I’m very excited about this year’s event!  The convention will take place from Oct. 29th to Nov. 1st at the Jacob Javits Center in New York City.  Hope to see you there!

Winifred Phillips is an award-winning video game music composer whose most recent project is the triple-A first person shooter Homefront: The Revolution. Her credits include five of the most famous and popular franchises in video gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the Massachusetts Institute of Technology Press. As a VR game music expert, she writes frequently on the future of music in virtual reality video games. Follow her on Twitter @winphillips.

Game Music Middleware, Part 5: psai

This is a continuation of my blog series on the top audio middleware options for game music composers.  This time I’m focusing on the psai Interactive Music Engine for games, developed by Periscope Studio, an audio/music production house.  Initially created as a proprietary middleware solution for Periscope’s in-house musicians, the software is now available commercially to game composers.  In this blog I’ll take a quick look at psai and provide some tutorial resources that further explore the utility of this audio middleware.  If you’d like to read the first four blog entries in this series on middleware for the game composer, you can find them here:

Game Music Middleware, Part 1: Wwise

Game Music Middleware, Part 2: FMOD

Game Music Middleware, Part 3: Fabric

Game Music Middleware, Part 4: Elias

What is psai?

The name “psai” is an acronym for “Periscope Studio Audio Intelligence,” and its lowercase styling is intentional.  Like the Elias middleware (explored in a previous installment of this blog series), psai aims to provide an environment specifically tailored to the needs of game composers.  The developers at Periscope Studio claim that psai’s “ease of use is unrivaled,” primarily because the middleware was “designed by videogame composers, who found that the approaches of conventional game audio middleware to interactive music were too complicated and not flexible enough.”  The psai music engine was originally released for PC games, and a version for the popular Unity engine followed in January 2015.

psai graphical user interface

Both Elias and psai offer intuitive graphical user interfaces designed to ease the workflow of a game composer. However, unlike Elias, which focuses exclusively on a vertical layering approach to musical interactivity, the psai middleware is structured entirely around horizontal re-sequencing, with no support for vertical layering.  As I described in my book, A Composer’s Guide to Game Music, “the fundamental idea behind horizontal re-sequencing is that when composed carefully and according to certain rules, the sequence of a musical composition can be rearranged” (Chapter 11, page 188).
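
For contrast, here’s a minimal sketch of the vertical layering model that psai leaves out: synchronized stems that fade in and out with gameplay intensity.  This is my own illustration of the concept rather than Elias’s (or anyone’s) actual API, and every name below is hypothetical.

```cpp
#include <algorithm>
#include <vector>

// One continuously looping stem whose volume fades in or out
// depending on the current gameplay intensity (0.0 to 1.0).
struct Stem {
    float threshold;  // intensity at which this layer becomes audible
    float gain;       // current playback volume (0.0 to 1.0)
};

// Vertical layering: all stems play in sync, and interactivity comes from
// fading layers in and out rather than changing the music's running order.
void updateLayerMix(std::vector<Stem>& stems, float intensity, float fadeStep) {
    for (Stem& s : stems) {
        const float target = (intensity >= s.threshold) ? 1.0f : 0.0f;
        // Nudge each layer's gain toward its target a little per update,
        // so layers fade smoothly instead of popping in and out.
        if (s.gain < target)
            s.gain = std::min(target, s.gain + fadeStep);
        else
            s.gain = std::max(target, s.gain - fadeStep);
    }
}
```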

Music for the psai middleware is composed in what Periscope describes as a “snippets” format, in which short chunks of music are arranged into groups that can then be triggered semi-randomly by the middleware.  The overall musical composition is called a “theme,” and the snippets represent short sections of that theme.  The snippets are assigned numbers that best represent degrees of emotional intensity (from most intense to most relaxed), and these intensity numbers help determine which of the snippets will be triggered at any given time.  Other property assignments include whether a snippet is designated as an introductory or ending segment, or whether the snippet is bundled into a “middle” group with a particular intensity designation.  Periscope cautions, “The more Middle Segments you provide, the more diversified your Theme will be. The more Middle Segments you provide for a Theme, the less repetition will occur. For a highly dynamic soundtrack make sure to provide a proper number of Segments across different levels of intensity.”
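
To make the idea concrete, here’s a rough sketch of how a snippet-and-intensity system along these lines might pick the next segment to play.  This is only my illustration of the behavior Periscope describes – it is not psai’s actual API, and all names are hypothetical.

```cpp
#include <cstdlib>
#include <vector>

// A short chunk of pre-composed music within a theme.
struct Snippet {
    int  id;
    int  intensity;  // emotional intensity level this segment represents
    bool isIntro;    // plays only when the theme starts
    bool isEnding;   // plays only when the theme stops
};

// Semi-random horizontal re-sequencing: gather every "middle" segment
// written for the requested intensity level, then pick one at random
// while avoiding an immediate repeat of the segment just played.
int pickNextSnippet(const std::vector<Snippet>& theme, int intensity, int lastId) {
    std::vector<int> candidates;
    for (const Snippet& s : theme)
        if (!s.isIntro && !s.isEnding && s.intensity == intensity && s.id != lastId)
            candidates.push_back(s.id);
    if (candidates.empty())
        return lastId;  // nothing else at this intensity; repetition is unavoidable
    return candidates[std::rand() % candidates.size()];
}
```

Notice how the candidate pool shrinks when few middle segments exist at a given intensity – exactly the repetition problem Periscope’s advice about providing “a proper number of Segments” is meant to avoid.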

Here’s an introductory tutorial video produced by Periscope for the psai Interactive Music Engine for videogames:

Because psai only supports horizontal re-sequencing, it’s not as flexible as better-known tools such as Wwise or FMOD, which can support projects that alternate between horizontal and vertical interactivity models.  However, psai’s ease of use may prove alluring for composers who have already planned to implement a horizontal re-sequencing structure for musical interactivity.  The utility of the psai middleware also seems to depend on snippets that are quite short, as demonstrated by the tutorial video above.  This structure could hamper a composer’s ability to develop melodic content (as is sometimes the case in a horizontal re-sequencing model).  It would be helpful if Periscope could demonstrate psai using longer snippets, which might give us a better sense of how musical ideas could be developed within the confines of their dynamic music system.  One can imagine an awesome potential for creativity with this system, if the structure can be adapted to allow for more development of musical ideas over time.

The psai middleware has been used successfully in a handful of game projects, including Black Mirror III, Lost Chronicles of Zerzura, Legends of Pegasus, Mount & Blade II – Bannerlord, and The Devil’s Men.  Here’s some gameplay video that demonstrates the music system of Legends of Pegasus:

And here is some gameplay video that demonstrates the music system of Mount & Blade II – Bannerlord:


A Composer’s Guide to Game Music wins the Nonfiction Book Award!

I’m excited to share some awesome news!  My book, A Composer’s Guide to Game Music, has been selected as a Gold winner of the Nonfiction Book Awards!

The Nonfiction Authors Association presents Bronze, Silver and Gold Awards in its Nonfiction Book Awards competition to honor the best book-length publications in an array of nonfiction genres.  A Composer’s Guide to Game Music was recognized with a Gold award (the top honor presented by the awards competition) in the “Arts, Music, and Photography” category.

Here’s how A Composer’s Guide to Game Music was described by Stephanie Chandler, the founder of the Nonfiction Authors Association:

Winifred Phillips presents music composition for a specific genre and audience in an easy-to-understand way, whether for seasoned composers or self-taught music enthusiasts looking to create a beautifully composed work for the video game market. Phillips goes above and beyond, guiding her reader through not only the composition process, but everything else tied to producing music for video games, including but not limited to working with teams and how to understand key audience demographics.

My most sincere appreciation goes out to the judging panel of the Nonfiction Book Awards for this honor!

This is the fourth award presented to A Composer’s Guide to Game Music (The MIT Press).  To date, the book has also won a National Indie Excellence Book Award, a Global Music Award for an exceptional book in the field of music, and an Annual Game Music Award from the popular site Game Music Online in the category of “Best Publication.”

A Composer’s Guide to Game Music won a National Indie Excellence Book Award in the genre of Performing Arts (Film, Theater, Dance & Music).

The Global Music Awards presented a Gold Medal Award of Excellence as a GMA Book Award to A Composer’s Guide to Game Music, which was judged as exceptional in the field of music.

 


The staff of accomplished music journalists of Game Music Online presented a “Best Publication” award to A Composer’s Guide to Game Music, acknowledging its “accessible yet deep insight into the process of making game music.”


Interview about Game Music on The Note Show!

I’m excited to share that I’ve been interviewed about my career as a game music composer and my book, A Composer’s Guide to Game Music, for the newest episode of The Note Show!

The Note Show is a terrific podcast that focuses on interviews with professionals in creative fields.  I’m very proud to have been included! Famous guests on The Note Show have included Hugo and Nebula award-winning sci-fi author David Brin, actress Kristina Anapau of the HBO series True Blood, video game designer Al Lowe (Leisure Suit Larry), actress Lisa Jakub (Mrs. Doubtfire, Independence Day), and Steven Long Mitchell and Craig Van Sickle, creators of the NBC series The Pretender.

This is my second time being interviewed on The Note Show, and I’m so glad to have been invited back!

In this interview, I talk about my work on the LittleBigPlanet and Assassin’s Creed franchises, my latest project (Total War Battles: Kingdom), how composing music for a mobile game differs from composing for consoles or PC, and how my life has changed with the publication of my book, A Composer’s Guide to Game Music.

In the podcast, we also talk about the National Indie Excellence Book Award that my book recently won, as well as the importance of optimism for an aspiring game composer.

You can listen to the entire interview here:

Here’s some official info from the creators of The Note Show:

The Creative Professional Podcast – Music & Arts Interviews

The Note Show is a creative journey where host Joshua Note returns to chat life and art with creative people across the world. We interview musicians, artists, comic book creators, novelists, directors, actors and anyone creative and bring you new people and experiences every week!  The Note Show is a Podcast for and featuring Creative Professionals from all walks of life. As long as it’s creative, it’s here on The Note Show.

The show’s host, Joshua Note, is a terrific interviewer who is also the author of a children’s book due for release in 2015.  In addition, Joshua studied classical composition and orchestration at Leeds College of Music and Leeds University, and in 2012 he produced an animated series for television and worked on several other projects for television and cinema.

Joshua Note, host of The Note Show

In his role as the host of The Note Show, Joshua asks intelligent questions about what it means to be a creative person in modern times, and his interviews are always fascinating!  My thanks to Joshua and the staff of The Note Show – I had a great time!

A Composer’s Guide to Game Music wins National Indie Excellence Book Award

I have some good news to share this week!  My book, A Composer’s Guide to Game Music, has been selected as a winner of this year’s National Indie Excellence Book Award!

Now in its ninth year, the National Indie Excellence Book Awards program recognizes outstanding achievement in books from independent publishers, including scholarly and university presses.  A Composer’s Guide to Game Music won the National Indie Excellence Book Award this year in the genre of Performing Arts (Film, Theater, Dance & Music).  Many thanks to the judging panel of the National Indie Excellence Book Awards for this honor!

This is the third award presented to A Composer’s Guide to Game Music (The MIT Press).  To date, the book has also won a Global Music Award for an exceptional book in the field of music, and an Annual Game Music Award from Game Music Online in the category of “Best Publication.”

The Global Music Awards presented a Gold Medal Award of Excellence as a GMA Book Award to A Composer’s Guide to Game Music, which was judged as exceptional in the field of music.

 


The staff of accomplished music journalists of Game Music Online presented a “Best Publication” award to A Composer’s Guide to Game Music, acknowledging its “accessible yet deep insight into the process of making game music.”

 

The Virtual Reality Game Music Composer

Project Morpheus headset.

Ready or not, virtual reality is coming!  Three virtual reality headsets are on their way to market and expected to hit retail in either late 2015 or sometime in 2016: the Oculus Rift, Sony’s Project Morpheus, and the HTC Vive.

VR is expected to make a big splash in the gaming industry, and many studios are already well underway with development of games that support the new VR experience.  Clearly, VR will have a profound impact on the visual side of game development, and sound design and voice performances will certainly be shaped by the demands of such an immersive experience… but what about music?  How does music fit into VR?

At GDC 2015, a presentation entitled “Environmental Audio and Processing for VR” laid out the technology of audio design and implementation for Sony’s Project Morpheus system.  While the talk concentrated mainly on sound design concerns, speaker Nicholas Ward-Foxton (audio programmer for Sony Computer Entertainment) touched upon voice-over and music issues as well.  Let’s explore his excellent discussion of audio implementation for a virtual space, and ponder how music fits into this brave new virtual world.

Nicholas Ward-Foxton, during his GDC 2015 talk.

But first, let’s get a brief overview of audio in VR:

3D Positional Audio

All three VR systems feature some form of positional audio, meant to achieve a full 3D audio effect.  When the principles of 3D audio are applied, sounds always seem to originate from the virtual world in a realistic way, according to the location of the sound-creating object, the force and loudness of the sound being emitted, the acoustic character of the space in which the sound occurs, and the influences of obstructing, reflecting and absorbing objects in the surrounding environment.  The goal is to create a soundscape that seems perfectly fused with the visual reality presented to the player.  Everything the player hears seems to issue from the virtual world with acoustic qualities that consistently confirm an atmosphere of perfect realism.
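
As a rough point of reference, the core of positional audio can be reduced to distance attenuation plus a pan derived from the source’s angle relative to the listener.  The sketch below is my own simplification – real VR audio engines add HRTF filtering, occlusion, and room modeling on top of this:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// A heavily reduced spatialization step: attenuate by distance, then derive
// a left/right balance from the source's angle relative to the listener's
// facing direction (yaw, in radians).
void spatialize(Vec3 listener, float listenerYaw, Vec3 source,
                float& leftGain, float& rightGain) {
    const float pi = 3.14159265f;
    float dx = source.x - listener.x;
    float dz = source.z - listener.z;
    float dist = std::sqrt(dx * dx + dz * dz);

    float attenuation = 1.0f / (1.0f + dist);  // simple inverse falloff

    float azimuth = std::atan2(dx, dz) - listenerYaw;
    float pan = std::sin(azimuth);             // -1 = hard left, +1 = hard right

    // Constant-power panning keeps perceived loudness steady across the arc.
    float theta = (pan + 1.0f) * 0.25f * pi;   // 0 .. pi/2
    leftGain  = attenuation * std::cos(theta);
    rightGain = attenuation * std::sin(theta);
}
```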

All three VR systems address the technical issues behind achieving this effect with built-in headphones that deliver spatial audio consistent with the virtual world.  Oculus licensed the VisiSonics RealSpace 3D Audio plugin to calculate acoustic spatial cues, and subsequently built its own 3D audio plugin based on the RealSpace technology, allowing the new Oculus Audio SDK to generate the system’s impressive three-dimensional sound.  According to Sony, Project Morpheus creates its 3D sound by virtue of binaural recording techniques (in which two microphones are positioned to mimic natural ear spacing), implemented into the virtual environment with a proprietary audio technology developed by Sony.  The HTC Vive has only recently added built-in headphones to its design, but the developers plan to offer full 3D audio as part of the experience.

To get a greater appreciation of the power of 3D audio, let’s listen to the famous “Virtual Barber Shop” audio illusion, created by QSound Labs to demonstrate binaural audio.

Head Tracking and Head-Related Transfer Function

According to Nicholas Ward-Foxton’s GDC talk, to make three-dimensional audio more convincing in a virtual space, the VR systems need to keep track of the player’s head movements and adjust the audio positioning accordingly.  With this kind of head tracking, sounds swing around the player when turning or looking about.  This effect helps to offset a concern regarding the differences in head size and ear placement between individuals.  In short, people have differently sized noggins, and their perception of audio (including the 3D positioning of sounds) will differ as a result.  This dependence on the unique anatomical details of the individual listener is described by the Head-Related Transfer Function (HRTF).  There’s an excellent article explaining the Head-Related Transfer Function on the “How Stuff Works” site.

Head-Related Transfer Function can complicate things when trying to create a convincing three-dimensional soundscape.  When listening to identical binaural audio content, one person may not interpret aural signals the same way another would, and might estimate that sounds are positioned differently.  Fortunately, head tracking comes to the rescue here.  As Ward-Foxton explained during his talk, when we move our heads about and then listen to the way that the sounds shift in relation to our movements, our brains are able to adjust to any differences in the way that sounds are reaching us, and our estimation of the spatial origination of individual sounds becomes much more reliable.  So the personal agency of the gaming experience is a critical element in completing the immersive aural world.
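
In practice, head tracking means the audio engine re-spatializes every world-anchored source against the freshest head pose, every frame.  Here’s a minimal sketch reusing the spatialize() function from the earlier example; getHeadYaw() is a hypothetical stand-in for whatever pose query a given VR SDK actually provides:

```cpp
#include <vector>

float getHeadYaw();  // hypothetical stand-in for the VR SDK's pose query

// Re-spatialize each world-anchored source against the newest head pose,
// so sounds hold still in the world while the listener's head turns.
void audioFrame(const std::vector<Vec3>& sources, Vec3 headPos) {
    float yaw = getHeadYaw();
    for (const Vec3& src : sources) {
        float leftGain, rightGain;
        spatialize(headPos, yaw, src, leftGain, rightGain);
        // ...apply the gains (or an HRTF pair chosen by azimuth) to this voice...
    }
}
```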

Music, Narration, and the Voice of God

Now, here’s where we start talking about problems relating directly to music in a VR game.  Nicholas Ward-Foxton’s talk touched briefly on the issues facing music in VR by exploring the two classifications that music may fall into. When we’re playing a typical video game, we usually encounter both diegetic and non-diegetic audio content.  Diegetic audio consists of sound elements that are happening in the fictional world of the game, such as environment sounds, sound effects, and music being emitted by in-game sources such as radios, public address systems, NPC musicians, etc.  On the other hand, non-diegetic audio consists of sound elements that we understand to be outside the world of the story and its characters, such as a voice-over narration, or the game’s musical score.  We know that the game characters can’t hear these things, but it doesn’t bother us that we can hear them.  That’s just a part of the narrative.
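
In engine terms, this split is often just a routing decision: diegetic sources run through the 3D spatializer, while non-diegetic score and narration bypass it and mix straight to the headphones.  Another hypothetical sketch, building on the earlier spatialize() example:

```cpp
enum class SourceType { Diegetic, NonDiegetic };

struct Voice {
    SourceType type;
    Vec3 worldPos;  // only meaningful for diegetic sources
};

// Diegetic voices are spatialized against the head pose; non-diegetic ones
// (musical score, narrator) skip the 3D pipeline and play flat in both ears.
void routeVoice(const Voice& v, Vec3 headPos, float headYaw,
                float& leftGain, float& rightGain) {
    if (v.type == SourceType::Diegetic)
        spatialize(headPos, headYaw, v.worldPos, leftGain, rightGain);
    else
        leftGain = rightGain = 1.0f;  // 2D mix: the sound lives "outside" the world
}
```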

VR changes all that.  When we hear a disembodied, floating voice from within a virtual environment, we sometimes feel, according to Ward-Foxton, as though we are hearing the voice of God.  Likewise, when we hear music in a VR game, we may sometimes perceive it as though it were God’s underscore.  I wrote about the problems of music breaking immersion as it related to mixing game music in surround sound in Chapter 13 of my book, A Composer’s Guide to Game Music, but the problem becomes even more pronounced in VR.  When an entire game is urging us to suspend our disbelief fully and become completely immersed, the sudden intrusion of the voice of the Almighty supported by the beautiful strains of the holy symphony orchestra has the potential to be pretty disruptive.

The harpist of the Almighty, hovering somewhere in the VR world…

So, what can we do about it?  For non-diegetic narration, Ward-Foxton suggested that the voice would have to be contextualized within the in-game narrative in order for the “voice of God” effect to be averted.  In other words, the narration needs to come from some explainable in-game source, such as a radio, a telephone, or some other logical sound conveyance that exists in the virtual world.  That solution, however, doesn’t work for music, so it’s time to start thinking outside the box.

Voice in our heads

During the Q&A portion of Ward-Foxton’s talk, an audience member asked a very interesting question.  When the player is assuming the role of a specific character in the game, and that character speaks, how can the audio system make the resulting spoken voice sound the way it would to the ears of the speaker?  After all, whenever any of us speak aloud, we don’t hear our voices the way others do.  Instead, we hear our own voice through the resonant medium of our bodies, rising from our larynx and reverberating throughout our own unique formant, or acoustical vocal tract.  That’s why most of us perceive our voices as being deeper and richer than they sound when we hear them in a recording.

Ward-Foxton suggested that processing and pitch alteration might create the effect of a lower, deeper voice, helping to make the sound seem more internal and resonant (the way it would sound to the actual speaker).  However, he also mentioned another approach to this issue earlier in his talk, and I think this particular approach might be an interesting solution for the “music of God” problem as well.

Proximity Effect

“I wanted to talk about proximity,” said Ward-Foxton, “because it’s a really powerful effect in VR, especially audio-wise.”  Referencing the Virtual Barber Shop audio demo from QSound Labs, Ward-Foxton talked about the power of sounds that seem to be happening “right in your personal space.”  In order to give sounds that intensely intimate feeling when they become very close, Ward-Foxton’s team would apply dynamic compression and bass boost to the sounds, in order to simulate the Proximity Effect.
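
Here’s one way the bass-boost half of such a treatment might be sketched: a low band, split off with a one-pole filter, whose boost grows as the source gets closer.  This is my own loose interpretation of the idea (the compression stage is omitted), not Sony’s actual implementation:

```cpp
#include <algorithm>
#include <cmath>

// Distance-scaled low-shelf boost, loosely mimicking a microphone's
// proximity effect: the closer the source, the richer the low end.
struct ProximityFilter {
    float lowBand = 0.0f;  // one-pole lowpass state

    float process(float sample, float distance, float sampleRate) {
        // Split off the low end with a one-pole lowpass around 200 Hz.
        const float cutoff = 200.0f;
        float a = 1.0f - std::exp(-2.0f * 3.14159265f * cutoff / sampleRate);
        lowBand += a * (sample - lowBand);

        // Sources within one meter get a boost, up to roughly +6 dB at zero
        // distance: add up to one extra copy of the low band to the signal.
        float closeness = std::max(0.0f, 1.0f - distance);
        return sample + closeness * lowBand;
    }
};
```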

The Proximity Effect is a phenomenon related to the physical construction of microphones, making them prone to add extra bass and richness when the sound source draws very close to the recording apparatus.  This concept is demonstrated and explained in much more depth in this video produced by Dr. Alexander J. Turner for the blog Nerds Central:

So, if simulating the Proximity Effect can make a voice sound like it’s coming from within, as Ward-Foxton suggests, can applying some of the principles of the Proximity Effect make the music sound like it’s coming from within, too?

Music in our heads

This was the thought that crossed my mind during this part of Ward-Foxton’s talk on “Environmental Audio and Processing for VR.”  In traditional music recording, instruments are assigned a position on the stereo spectrum, and the breadth from left to right can feel quite wide.  Meanwhile, the instruments (especially in orchestral recordings) are often recorded in an acoustic space that would be described as “live,” or reverberant to some degree.  This natural reverberance is widely regarded as desirable for an acoustic or orchestral recording, since it creates a sensation of natural space and allows the sounds of the instruments to blend with the assistance of the sonic reflections from the recording environment.  However, it also creates a sensation of distance between the listener and the musicians.  The music doesn’t seem to be invading our personal space.  It’s set back from us, and the musicians are also spread out around us in a large arc shape.

So, in VR, these musicians would be invisibly hovering in the distance, their sounds emitting from defined positions in the stereo spectrum.  Moreover, the invisible musicians would fly around as we turn our heads, maintaining their positions in relation to our ears, even as the sound design elements of the in-game environment remain consistently true to their places of origin in the VR world.  Essentially, we’re listening to the Almighty’s holy symphony orchestra.  So, how can we fix this?

One possible approach might be to record our music with a much more intimate feel.  Instead of choosing reverberant spaces, we might record in perfectly neutral spaces and then add very subtle amounts of room reflection to assist in a proper blend without disrupting the sensation of intimacy.  Likewise, we might somewhat limit the stereo positioning of our instruments, moving them a bit more towards the center.  Finally, a bit of prudently applied compression and EQ might add the extra warmth and intimacy needed in order to make the music feel close and personal.  Now, the music isn’t “out there” in the game world.  Now, the music is in our heads.
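
The stereo-narrowing step, at least, is easy to sketch with standard mid/side processing – a minimal illustration of the general technique, not a recipe from the talk:

```cpp
#include <utility>

// Mid/side width control: width = 1.0 leaves the stereo image untouched,
// values below 1.0 pull the instruments toward the center, and 0.0 is mono.
// Narrowing the image is one way to make a score feel "in our heads"
// rather than hovering out in the virtual world.
std::pair<float, float> narrowStereo(float left, float right, float width) {
    float mid  = 0.5f * (left + right);  // what the two channels share
    float side = 0.5f * (left - right);  // what makes them differ
    side *= width;
    return { mid + side, mid - side };   // new left, new right
}
```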

Music in VR

It will be interesting to see the audio experimentation that is sure to take place in the first wave of VR games.  So far, we’ve only been privy to tech demos showing the power of the VR systems, but the music in these tech demos has given us a brief peek at what music in VR might be like in the future.  To date, it’s been fairly sparse and subtle… possibly a response to the “music of the Almighty” problem.  It is interesting to see how this music interacts with the gameplay experience.  Ward-Foxton mentioned two particular tech demos during his talk.  Here’s the first, called “Street Luge.”

The simple music of this demo, while quite sparse, does include some deep, bassy tones and some dry, close-recorded percussion.  The stereo breadth also appears to be a bit narrow, though this may not have been intentional.

The second tech demo mentioned during Ward-Foxton’s talk was “The Deep.”

The music of this tech demo is limited to a few atmospheric synth tones and a couple of jump-scare stingers, underscored by a deep low pulse.  Again, the music doesn’t seem to have a particularly wide stereo spectrum, but this may not have been a deliberate choice.

I hope you enjoyed this exploration of some of the concepts included in Nicholas Ward-Foxton’s talk at GDC 2015, along with my own speculation about possible approaches to problems related to non-diegetic music in virtual reality.  Please let me know what you think in the comments!

GDC 2015 Book Signing – A Composer’s Guide to Game Music

It was wonderful to sign copies of A COMPOSER’S GUIDE TO GAME MUSIC last week at the book signing event organized by The MIT Press!  The book signing took place on March 6th at the official GDC Bookstore in the Moscone Center (South Hall) during this year’s Game Developers Conference in San Francisco!  It was a lot of fun!  It was terrific to meet such a great group of composers and sound designers, and I loved hearing about your creative endeavors in the world of game audio!  I’m also very humbled and pleased that you’re using my book to help you with your projects!  Here’s a gallery with pictures of some of the folks who were at the book signing last week!

Book Signing Event at the GDC Bookstore

I’m very pleased that The MIT Press, publishers of my book A COMPOSER’S GUIDE TO GAME MUSIC, have arranged for me to sign copies of my book at the official GDC Bookstore during this year’s Game Developers Conference!

This year, A COMPOSER’S GUIDE TO GAME MUSIC has won the Global Music Award for an exceptional book in the field of music, and an Annual Game Music Award for Best Publication in the field of game music.  I’m very pleased that my book will be featured at the GDC bookstore this year, and I’m looking forward to the signing event on March 6th!

BreakPoint Books is the official Game Developers Conference bookstore.  You’ll find them on the street level in South Hall of the Moscone Center.

If you buy my book at any time during the conference, you can bring it back during the book signing on Friday so that I can sign it for you!  Plus, I’d love to meet you!

Remember, the GDC Bookstore is located in the outer lobby of Moscone South Hall, so you don’t need a GDC pass to shop there.  If you’re in the San Francisco area and would like to have a copy of my book signed, please feel free to stop by!

Game Music Interview on Top Score Podcast

I’m very pleased to report that I was invited to be interviewed on the celebrated Top Score Podcast, and the interview was just released today!  You can listen to the entire interview here:

 

I was interviewed by Emily Reese, a formidable presence in the game music industry.  As a host at Classical Minnesota Public Radio, Reese has produced and hosted this special podcast dedicated to the field of game music since April of 2011.  The podcast specializes in interviews with game music composers, and has produced over 129 episodes so far.  In this capacity, Reese is a champion of game music and the people who make it.  She has predicted that game music will one day join the playlists of classical radio stations, right alongside the music of Vaughan Williams, Bernstein or Mozart. “There are some scores where absolutely that will happen,” Reese says. “Some of these composers should be remembered in 100, 200 or 300 years. They’re that good.”

Emily Reese interviewed me about my music for LittleBigPlanet 3 and Assassin’s Creed Liberation.  She also asked me questions about my book, A Composer’s Guide to Game Music.   All my thanks to Emily Reese, editor Pierce Huxtable and the entire team at the Top Score Podcast.  I’m honored to have been interviewed for this terrific program!

How I got my big break as a video game music composer

I had a wonderful time last week, speaking before a lively and enthusiastic audience at the Society of Composers & Lyricists seminar, “Inside the World of Game Music.”  Organized by Greg Pliska (board member of the SCL NY), the event was moderated by steering committee member Elizabeth Rose and attended by a diverse audience of composers and music professionals.  Also, steering committee member Tom Salta joined the discussion remotely from his studio via Skype.

Towards the beginning of the evening, I was asked how I got my first big break in the game industry.  While I’d related my “big break” experience in my book, A Composer’s Guide to Game Music, it was fun sharing those memories with such a great audience, and I’ve included a video clip from that portion of the seminar.

After the event, we all headed over to O’Flanagan’s Irish Pub for great networking and good times at the official NYC SCL / Game Audio Network Guild (G.A.N.G.) Hang.  I especially enjoyed sharing some stories and getting to know some great people there!  Thanks to everyone who attended the SCL NYC seminar!