Once again, the Game Developers Conference is almost upon us! GDC 2018 promises to be an awesome event, chock full of great opportunities for us to learn and grow as video game music composers. I always look forward to the comprehensive sessions on offer in the popular GDC audio track, and for the past few years I’ve been honored to be selected as a GDC speaker. Last year I presented a talk exploring how I built suspense and tension through music I composed for such games as God of War and Homefront: The Revolution. This year, I’m tremendously excited that I’ll be presenting the talk “Music in Virtual Reality.” The subject matter is very close to my heart! Throughout 2016 and 2017, I composed music for many virtual reality projects, some of which have hit retail over the past year, and some of which will be released very soon. I’ve learned a lot about the process of composing music for a VR experience, and I’ve given a lot of thought to what makes music for VR unique. During my GDC talk in March, I’ll be taking my audience through my experiences composing music for four very different VR games – the Bebylon: Battle Royale arena combat game from Kite & Lightning, the Dragon Front strategy game from High Voltage Software, the Fail Factory comedy game from Armature Studio, and the Scraper: First Strike shooter/RPG from Labrodex Inc. I’ll talk about some of the top problems that came up, the solutions that were tried, and the lessons that were learned. Virtual reality is a brave new world for game music composers, and there will be a lot of ground for me to cover in my presentation!
In preparing my talk for GDC, I kept my focus squarely on composition techniques for VR music creation, while making sure to supply an overview of the technologies that would help place those techniques in context. With these considerations in mind, I had to prioritize the information I intended to offer, and some interesting topics simply wouldn’t fit within the time constraints of my GDC presentation. So I thought it would be worthwhile to include some of these extra materials in a couple of articles preceding my talk in March. In this article, I’ll explore some theoretical ideas from experts in the field of VR, and I’ll include some of my own musings about creative directions we might pursue with VR music composition. In the next article, I’ll talk about some practical considerations relating to the technology of VR music.
Most visual artists in the game industry are familiar with a concept known as the “Uncanny Valley,” but it isn’t a problem that typically occupies the attention of sound designers and game music composers. However, with the imminent arrival of virtual reality, that situation may drastically change. Audio folks may have to begin wrestling with the problem right alongside their visual arts counterparts. I’ll explore that issue during the course of this blog, but first let’s start with a basic definition: what is the Uncanny Valley?
Here’s the graphic that is typically shown to illustrate the Uncanny Valley concept. The idea is this: human physical attributes can be endearing. We like human qualities when we see them attached to inhuman things like robots. It makes them cute and relatable. However, as they start getting more and more human in appearance, the cuteness starts going away, and the skin-crawling creepiness begins. The ick-factor reaches maximum in an amorphous no-man’s land right before absolute realism would theoretically be attained. In this realm of horrors known as the “Uncanny Valley,” we see that the appearance of the human-like creature is not close enough to be real, but close enough to be really disturbing. Don’t take my word for it, though. Here’s a great video from the Extra Credits video series that explores the meaning of the Uncanny Valley in more detail:
So, now we’ve explored what the Uncanny Valley means to visual artists, but how does this phenomenon impact the realm of audio?
Spatial Audio – Reconstructing Reality or Creating Illusion?
The idea of an audio equivalent for the Uncanny Valley was suggested by Francis Rumsey during a presentation he gave in May 2014 at the Audio Engineering Society Chicago Section Meeting, which took place at Shure Incorporated in Niles, Illinois. Francis Rumsey holds a PhD in Audio Engineering from the University of Surrey and is currently the chair of the Technical Council of the Audio Engineering Society. His talk was entitled “Spatial Audio – Reconstructing Reality or Creating Illusion?”
Francis Rumsey, chair of the AES Technical Council
In his excellent 90-minute presentation (available for viewing in its entirety by AES members), Francis Rumsey explores the history of spatial audio in detail, examining the long-term effort to achieve perfect simulations of natural acoustic spaces. He examines the divergent philosophies of top audio engineers, who approach the problem from a creative/artistic point of view, and acousticians, who want to solve the dilemma mathematically through a perfect wave field synthesis technique. Along the way, he asks whether spatial audio is really meant to recreate the best version of reality, or instead to conjure up an entertaining artistic illusion. This leads him to the main thesis of his talk:
Sound Design in VR: Almost Perfect Isn’t Perfect Enough
Rumsey suggests that as spatial audio approaches the top-most levels of realism, it begins to stimulate a more critical part of the brain. Why does it do this? Because human listeners react very strongly to a quality we call “naturalness.” We have a great depth of experience in the way environmental sound behaves in the world. We know how it reflects and reverberates, how objects may obstruct the sound or change its perceived timbre. As a simulated aural environment approaches perfect spatial realism and timbral fidelity, our brains begin to compare the simulation to our own remembered experiences of real audio environments, and we start to react negatively to subtle defects in an otherwise perfect simulation. “It sounds almost real,” we think, “but something about it is strange. It’s just wrong; it doesn’t add up.”
Take as an example this Oculus VR video demonstrating GenAudio’s AstoundSound 3D RTI positional audio plugin. While the audio positioning is impressive, the demo does not incorporate any obstruction or occlusion effects (as the plugin makers readily admit). That omission makes the demo useful for examining the effects of subtle imperfections in an otherwise convincing 3D aural environment. The imperfections become especially pronounced when the gamer walks into the Tuscan house, yet the sound of the outdoor fountain continues without any of the muffling obstruction effects one would expect to hear in those circumstances.
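For the more technically inclined, it may help to see how simple the missing effect could be. Game audio engines commonly fake obstruction by rolling off a sound’s high frequencies, since a wall absorbs highs far more than lows. Below is a minimal, engine-agnostic sketch in Python of that idea, using a hand-rolled one-pole low-pass filter; the function name, cutoff value, and test tone are illustrative assumptions of mine, not taken from AstoundSound or any other plugin:

```python
import math

def occlude(samples, sample_rate, cutoff_hz):
    """Muffle a sound with a one-pole low-pass filter to approximate occlusion.

    When a wall stands between the listener and the source, high frequencies
    are absorbed more than lows; a low-pass filter is a common first
    approximation of that muffling.
    """
    # Filter coefficient derived from the desired cutoff frequency.
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, state = [], 0.0
    for x in samples:
        state += alpha * (x - state)  # smooth toward the input: highs get lost
        out.append(state)
    return out

# One second of a bright 4 kHz tone, "heard" through a wall (~800 Hz cutoff).
sr = 44100
tone = [math.sin(2 * math.pi * 4000 * n / sr) for n in range(sr)]
muffled = occlude(tone, sr, 800.0)
```

In the fountain example above, an engine would lower the cutoff as the player steps indoors, so the fountain would sound convincingly muffled through the wall instead of continuing unchanged.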
Voice in VR: The Uncanny Valley of Spatial Voice
During the presentation, Rumsey shared some of the research from Glenn Dickins, the Technical Architect of the Convergence Team at Dolby Laboratories. Dickins had applied the theory of the Uncanny Valley to vocal recordings. The sound of the human voice in a spatial environment is exceedingly familiar to us as human beings, much in the same way that human appearance and movement are both ingrained in our consciousness. Because of this familiarity, vocal recordings in a spatial environment such as 3D positional audio can be particularly vulnerable to the Uncanny Valley effect. Very small and subtle degradation in the audio output of a spatially localized voice recording may trigger a sense of deep-rooted unease.
Glenn Dickins of Dolby Laboratories
As we move into three-dimensional audio environments for virtual reality games, the sorts of audio compression typically used in video game development may become problematic, particularly in relation to voice recordings. While a typical gamer might not recognize that a vocal recording had been compressed, the gamer might nevertheless feel that there was something “not quite right” in the sound of the characters’ voices. Compression subtly changes the vocal sound in ways that are usually unnoticeable, but may become disruptive in a VR aural environment, where imperfections have the potential to nudge the audio into the Uncanny Valley.
Music in VR: Some Good News
While I’ve talked in this blog before about the importance of defining the role that music should play in the three-dimensional aural environment of a virtual reality game, Francis Rumsey offers an entirely different viewpoint in his talk. He thinks that when it comes to music, listeners don’t really care about spatial audio. That might be good news for game composers, because it may mean that music plays no role in the Uncanny Valley effect.
Describing a study that was conducted to determine how both naive and experienced listeners perceived spatial audio, Rumsey showed that when it came to listening to music, the spatial positioning wasn’t considered tremendously important. Sound quality was held to be absolutely crucial, but that judgment was neither heightened nor lessened by spatial considerations. So does this mean that when it comes to music, listeners have an enhanced suspension of disbelief? Are they willing to accept music into their VR world, even if it isn’t realistically positioned within the 3D space? If so, then non-diegetic music (i.e. music that isn’t occurring within the fictional world of the game) may not need to be spatially positioned as carefully as either the voice or sound design elements of the aural environment. This could prove useful to audio teams, who can turn to music as a reassuring agent in the soundscape, binding the aural environment together and promoting emotional investment and immersion. However, music’s role in virtual reality may not conform to the way in which listeners react to spatially positioned music in other situations. At any rate, the issue certainly needs further study and experimentation to clarify the role that non-diegetic music should play in a VR game.
For other types of music in VR, the situation may be much simpler. Music doesn’t always have to occupy the traditional “underscore” role that it typically serves during gameplay. In a “music visualizer” VR experience, spatial positioning may become entirely unnecessary, because the music is serving the purpose of pure foreground entertainment (much the same way that music entertains listeners on its own). Here’s a preview of a musically-reactive virtual world in the upcoming “music visualizer” game Harmonix Music VR, created by the developer of the popular game series Rock Band and Dance Central:
Rumsey concluded his talk with the observation that near accurate may be worse than not particularly accurate… in other words, if it’s supposed to sound real, then it had better sound perfectly real. Otherwise, it might be better to opt for a stylized audio environment that exaggerates and heightens the world rather than faithfully reproducing it. I hope you enjoyed this blog, and please let me know what you think in the comments below!
Winifred Phillips is an award-winning video game music composer whose most recent project is the triple-A first person shooter Homefront: The Revolution. Her credits include five of the most famous and popular franchises in video gaming: Assassin’s Creed, LittleBigPlanet, Total War, God of War, and The Sims. She is the author of the award-winning bestseller A COMPOSER’S GUIDE TO GAME MUSIC, published by the Massachusetts Institute of Technology Press. As a VR game music expert, she writes frequently on the future of music in virtual reality video games. Follow her on Twitter @winphillips.