Game Music Middleware, Part 5: psai


This is a continuation of my blog series on the top audio middleware options for game music composers, this time focusing on the psai Interactive Music Engine for games, developed by Periscope Studio, an audio/music production house. Initially developed as a proprietary middleware solution for use by Periscope’s in-house musicians, the software is now being made available commercially for use by game composers.  In this blog I’ll take a quick look at psai and provide some tutorial resources that will further explore the utility of this audio middleware.  If you’d like to read the first four blog entries in this series on middleware for the game composer, you can find them here:

Game Music Middleware, Part 1: Wwise

Game Music Middleware, Part 2: FMOD

Game Music Middleware, Part 3: Fabric

Game Music Middleware, Part 4: Elias

What is psai?

The name “psai” is an acronym for “Periscope Studio Audio Intelligence,” and its lowercase appearance is intentional.  Like the Elias middleware (explored in a previous installment of this blog series), psai aims to provide a specialized environment tailored to the needs of game composers.  The developers at Periscope Studio claim that psai’s “ease of use is unrivaled,” primarily because the middleware was “designed by videogame composers, who found that the approaches of conventional game audio middleware to interactive music were too complicated and not flexible enough.”  The psai music engine was originally released for PC games, and a version for the popular Unity engine followed in January 2015.

psai graphical user interface

Both Elias and psai offer intuitive graphical user interfaces designed to ease the workflow of a game composer. However, unlike Elias, which focuses exclusively on a vertical layering approach to musical interactivity, the psai middleware is structured entirely around horizontal re-sequencing, with no support for vertical layering.  As I described in my book, A Composer’s Guide to Game Music, “the fundamental idea behind horizontal re-sequencing is that when composed carefully and according to certain rules, the sequence of a musical composition can be rearranged.” (Chapter 11, page 188).
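
To make the distinction concrete, here is a minimal sketch of how the two models express a change in musical intensity. All of the names below are hypothetical; neither Elias nor psai actually exposes code like this:

```cpp
// A minimal sketch contrasting the two interactivity models.
// All names are hypothetical; neither Elias nor psai exposes this code.
#include <iostream>
#include <string>
#include <vector>

// Vertical layering: every layer runs in sync; intensity is expressed
// by turning layer volumes up or down while the timeline stays put.
void setVerticalIntensity(std::vector<float>& layerVolumes, float intensity) {
    for (std::size_t i = 0; i < layerVolumes.size(); ++i) {
        // Layer i becomes audible once intensity passes i / layerCount
        // (a real engine would crossfade rather than hard-switch).
        float threshold = static_cast<float>(i) / layerVolumes.size();
        layerVolumes[i] = (intensity > threshold) ? 1.0f : 0.0f;
    }
}

// Horizontal re-sequencing: the music is cut into segments, and
// intensity is expressed by choosing WHICH segment plays next.
std::string chooseNextSegment(const std::vector<std::string>& calmSegments,
                              const std::vector<std::string>& tenseSegments,
                              float intensity, std::size_t segmentsPlayed) {
    const auto& pool = (intensity > 0.5f) ? tenseSegments : calmSegments;
    return pool[segmentsPlayed % pool.size()];
}

int main() {
    std::vector<float> volumes(4, 0.0f);
    setVerticalIntensity(volumes, 0.6f);  // layers 0-2 audible, layer 3 silent

    std::vector<std::string> calm  = {"calm_A.ogg", "calm_B.ogg"};
    std::vector<std::string> tense = {"tense_A.ogg", "tense_B.ogg"};
    std::cout << chooseNextSegment(calm, tense, 0.8f, 0) << "\n";  // tense_A.ogg
}
```

In the vertical model the arrangement changes while the timeline stays put; in the horizontal model the timeline itself is reshuffled.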

Music for the psai middleware is composed in what Periscope describes as a “snippets” format, in which short chunks of music are arranged into groups that can then be triggered semi-randomly by the middleware.  The overall musical composition is called a “theme,” and the snippets represent short sections of that theme.  The snippets are assigned numbers that best represent degrees of emotional intensity (from most intense to most relaxed), and these intensity numbers help determine which of the snippets will be triggered at any given time.  Other property assignments include whether a snippet is designated as an introductory or ending segment, or whether the snippet is bundled into a “middle” group with a particular intensity designation.  Periscope cautions, “The more Middle Segments you provide, the more diversified your Theme will be. The more Middle Segments you provide for a Theme, the less repetition will occur. For a highly dynamic soundtrack make sure to provide a proper number of Segments across different levels of intensity.”
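
Here is a rough sketch of how that theme/snippet structure might be modeled in code. To be clear, this is not psai’s actual API; the types and the selection rule below are my own illustrative guesses based on Periscope’s description:

```cpp
// A hypothetical model of psai's Theme/Segment concept. This is NOT
// psai's actual API; the types and selection rule are illustrative
// guesses based on Periscope's published description.
#include <cstdlib>
#include <limits>
#include <string>
#include <vector>

struct Snippet {
    std::string file;   // a short chunk of the overall composition
    int intensity;      // e.g. 1 (most relaxed) to 100 (most intense)
};

struct Theme {
    std::vector<Snippet> intros;   // played once when the theme begins
    std::vector<Snippet> middles;  // the looping pool, chosen by intensity
    std::vector<Snippet> endings;  // played when the theme is stopped
};

// Pick a middle segment whose intensity is closest to the requested
// level, breaking ties semi-randomly to reduce repetition.
// Assumes theme.middles is non-empty.
const Snippet& pickMiddle(const Theme& theme, int requestedIntensity) {
    int bestDist = std::numeric_limits<int>::max();
    std::vector<const Snippet*> candidates;
    for (const Snippet& s : theme.middles) {
        int dist = std::abs(s.intensity - requestedIntensity);
        if (dist < bestDist) { bestDist = dist; candidates.clear(); }
        if (dist == bestDist) candidates.push_back(&s);
    }
    return *candidates[std::rand() % candidates.size()];
}
```

A selection scheme along these lines is presumably why Periscope recommends supplying plenty of middle segments across many intensity levels: the larger the pool near each intensity value, the less often any one snippet repeats.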

Here’s an introductory tutorial video produced by Periscope for the psai Interactive Music Engine for videogames:

Because psai only supports horizontal re-sequencing, it’s not as flexible as better-known tools such as Wwise or FMOD, which can support projects that alternate between horizontal and vertical interactivity models.  However, psai’s ease of use may prove alluring for composers who have already planned a horizontal re-sequencing structure for their musical interactivity.  The utility of the psai middleware also seems to depend on snippets that are quite short, as demonstrated by the above tutorial video produced by Periscope Studio.  Such short building blocks could hamper a composer’s ability to develop melodic content (a risk inherent in any horizontal re-sequencing model).  It would be helpful if Periscope could demonstrate psai using longer snippets, which might give us a better sense of how musical ideas can be developed within the confines of their dynamic music system.  One can imagine awesome creative potential in this system if its structure can be adapted to allow musical ideas more room to develop over time.

The psai middleware has been used successfully in a handful of game projects, including Black Mirror III, Lost Chronicles of Zerzura, Legends of Pegasus, Mount & Blade II – Bannerlord, and The Devil’s Men.  Here’s some gameplay video that demonstrates the music system of Legends of Pegasus:

And here is some gameplay video that demonstrates the music system of Mount & Blade II – Bannerlord:


Winifred Phillips is an award-winning video game music composer whose most recent project is the triple-A first-person shooter Homefront: The Revolution. Her credits include five of the most famous and popular franchises in video gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the Massachusetts Institute of Technology Press. As a VR game music expert, she writes frequently on the future of music in virtual reality video games. Follow her on Twitter @winphillips.

Game Music Middleware, Part 2: FMOD


Since third-party audio middleware is slowly becoming more prevalent in game development, I’m devoting two blog entries to some tutorials by game audio pros who have produced videos demonstrating their process of working with the software.  I posted the first blog entry in February — you can read it here.

This second blog is devoted to FMOD, and the tutorials were produced by two game composers who have generously shared their experiences.  The first video focuses on the creation of adaptive music for a demo competition hosted by the Game Audio Network Guild and taking place during the Game Developers Conference 2014 in San Francisco.  The tutorial was produced by composer Anastasia Devana, whose game credits include the recently released puzzle game Synergy and the upcoming roleplaying game Anima – Gate of Memories.

Adaptive music in Angry Bots using FMOD Studio and Unity

The next tutorials come to us from composer Matthew Pablo, who produced a series of videos on the implementation of game music via the FMOD middleware.  Matthew’s work as a game composer includes N-Dimensions, Arizona Rose and the Pharaoh’s Riddles, Cloud Spin, Micromon, and many more.  Here is the first video — the rest can be found in Matthew’s YouTube playlist.

FMOD Studio: Overview & Introduction


GameSoundCon Industry Survey Results


As the GameSoundCon conference draws closer, I thought I’d talk a little bit about the Game Audio Industry Survey that was designed by GameSoundCon Executive Producer Brian Schmidt.  The survey was prepared in response to the broader “Annual Game Developer Salary Survey” offered by industry site Gamasutra.  Since the Gamasutra survey suffered from skewed results for game audio compared to other game industry sectors (owing to lower participation from the game audio community), Schmidt set out to obtain more reliable results by adopting a different approach.

Instead of focusing on the yearly salaries/earnings of audio professionals, the survey concentrated on the money generated by the music/sound of individual projects. Each respondent could fill out the survey repeatedly, entering data for each game project that the respondent had completed during the previous year.  The final results of the survey are meant to reflect how game audio is treated within different types of projects, and the results are quite enlightening, and at times surprising.

The financial results include both small-budget indie games from tiny teams and huge-budget games from behemoth publishers, so there is a broad range in those results.  Since this is the first year that the GameSoundCon Game Audio Industry Survey has been conducted, we don’t yet have data from a previous year with which to compare these results, and it might be very exciting to see how the data shifts if the survey is conducted again in 2015.

Some very intriguing data comes from the section of the survey that provides a picture of who game composers are and how they work.  According to the survey, the majority of game composers are freelancers, and 70% of game music is performed by the freelance composer alone.  56% of composers are also acting as one-stop-shops for music and sound effects, likely providing a good audio solution for indie teams with little or no audio personnel of their own.

A surprising and valuable finding emerges from the audio middleware results, which show that the majority of games use either no audio middleware at all, or opt for custom audio tools designed by the game developer.  This information is quite new, and could be tremendously useful to composers working in the field.  While we should all make efforts to gain experience with audio middleware such as FMOD and Wwise, we might keep in mind that there may not be as many opportunities to practice those skills as previously anticipated.  Again, this data might be rendered even more meaningful by the results of next year’s survey (if it is repeated), which would show whether commercial middleware is making inroads and becoming more popular over time.

Expanding upon this subject, the survey reveals that only 22% of composers are ever asked to do any kind of music integration (in which the composer assists the team in implementing music files into their game). It seems that for the time being, this task is still falling firmly within the domain of the programmers on most game development teams.

The survey was quite expansive and fascinating, and I’m very pleased that it included questions about both middleware and integration.  If GameSoundCon runs the survey again next year, I’d love to see the addition of some questions about what type of interactivity composers may be asked to introduce into their musical scores, how much of their music is composed in a traditionally linear fashion, and what the ratio of interactive/adaptive to linear music might be per project.  I wrote rather extensively on this subject in my book, and since I’ll also be giving my talk at GameSoundCon this year about composing music for adaptive systems, I’d be very interested in such survey results!

The GameSoundCon Game Audio Industry Survey is an invaluable resource, and is well worth reading in its entirety.  You’ll find it here.  I’ll be giving my talk on “Advanced Composition Techniques for Adaptive Systems” at GameSoundCon at the Millennium Biltmore Hotel in Los Angeles on Wednesday, October 8th.

Many thanks to Brian Schmidt / GameSoundCon for preparing this excellent survey!

Music in the Manual: FMOD Studio Vs. Wwise


A few days ago, I downloaded and installed the latest version of a software package entitled FMOD Studio and was pleasantly surprised to discover that an oversight had been corrected. It’s not unusual for software updates to correct problems or provide additional functionality, but this update was especially satisfying for me. The makers of FMOD Studio had added the “Music” section to the software manual.

A brief explanation: FMOD Studio is a software application designed by Firelight Technologies to enable game audio professionals to incorporate sound into video games. The application focuses solely on audio, and is used in conjunction with game software. In essence, FMOD Studio is folded into the larger construct of a game’s operational code, giving the overall game the ability to do more sophisticated things with the audio side of its presentation.
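
To give a sense of what that folding-in looks like, here is a bare-bones sketch of the handshake between game code and FMOD Studio. It draws on the FMOD Studio C++ API, but treat it as an outline rather than production code: error checking is omitted, the bank file and event path are placeholders, and exact call names vary between versions of the software.

```cpp
// A bare-bones sketch of a game driving FMOD Studio, based on the
// FMOD Studio C++ API (error checking omitted; the bank file and the
// event path are placeholders, and call names vary between versions).
#include <fmod_studio.hpp>

int main() {
    FMOD::Studio::System* system = nullptr;
    FMOD::Studio::System::create(&system);
    system->initialize(512, FMOD_STUDIO_INIT_NORMAL, FMOD_INIT_NORMAL, nullptr);

    // Banks are authored in the FMOD Studio tool and loaded at runtime.
    FMOD::Studio::Bank* bank = nullptr;
    system->loadBankFile("Master.bank", FMOD_STUDIO_LOAD_BANK_NORMAL, &bank);

    // Look up a music event authored in FMOD Studio and start it.
    FMOD::Studio::EventDescription* description = nullptr;
    system->getEvent("event:/Music/MainTheme", &description);
    FMOD::Studio::EventInstance* music = nullptr;
    description->createInstance(&music);
    music->start();

    bool gameIsRunning = true;
    while (gameIsRunning) {
        // ... game logic here; game state can drive authored parameters,
        // e.g. music->setParameterByName("Intensity", currentIntensity);
        system->update();        // pump the audio system once per frame
        gameIsRunning = false;   // placeholder so this sketch terminates
    }

    system->release();
}
```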

When FMOD Studio was initially released in August of 2012, the manual did not include information about the music capabilities of the software. Admittedly, the majority of FMOD Studio users are sound designers whose interests tend to focus on the tools for triggering sound effects and creating environmental atmospheres. That being said, many composers also use the portions of the FMOD Studio application that are specifically designed to enable the assignment of interactive behaviors to music tracks. It was a bit puzzling that the manual didn’t describe those music tools.

One of the biggest competitors to FMOD Studio is the Wwise software from Audiokinetic. Wwise offers much of the same functionality as FMOD Studio, and one of the things I really like about working with it is its documentation. Audiokinetic put a lot of thought and energy into the Wwise Fundamentals Approach document and the expansive tutorial handbook, Project Adventure. Both of these documents discuss the music features of the Wwise software, offering step-by-step guidance for the creation of interactive music systems within the Wwise application. This is why the omission of any discussion of the music tools from the FMOD manual was so perplexing.

It’s true that many of the music features of the FMOD Studio software are also useful in sound design applications, and some are similar in function to tools described in the sound design portions of the manual. Firelight Technologies may have assumed that those portions of the manual would be sufficient for all users, including composers. However, composers are specialists, and their priorities do not match those of their sound design colleagues; when using the FMOD Studio tools, their needs differ sharply from those of the rest of the audio development community. Wwise understood this from the start, but FMOD seemed to be following a philosophy that hearkened back to the early days of the game industry.

In those days, the audio side of a game was often created and implemented by a single person. This jack-of-all-trades would create all the sound effects, voice-overs and music. Nowadays, the audio field is populated by scores of specialists. It makes sense for FMOD Studio to acknowledge specialists such as composers in their software documentation, and I’m very glad to see that they’ve just done so. If you’d like to learn more about FMOD Studio, you can see a general overview of the application in this YouTube video: