19th-20th May, Microsoft Reactor, San Francisco CA
Emotions, music and VR are central to what we do at Melodrive. That’s why we’ve decided to organise #EmoJam, a weekend dedicated to hacking emotion, VR, games and music. We have envisaged a hackathon that addresses perhaps the most salient problem for interactive storytelling, AI, and VR: emotion. Emotion is a notorious puzzle for just about any field that deals with the mind/brain. It is clear that we have emotions to help us survive in the world. Without them, we would certainly be different, though perhaps no less intelligent, machines. On the whole, we are not usually interested just in what intelligence is and how to model it; we’re specifically interested in a particular brand of intelligence, namely human intelligence. And emotion plays a large part in that.
Not more than a Herculean stone’s throw away from the wonderful and iconic St Thomas Church, where Bach composed tons of sleek cantatas, lies the stately Hochschule für Musik und Theater, the regal setting of the video game musicology conference, Ludo2018. This is one of the field’s foremost annual meets, and took place last weekend (13-15 April) in Leipzig. The Ludo conference was founded by the first (and in my opinion, the best) research group in ludomusicology, the Ludomusicology Research Group, whose Venerable Elders, Michiel Kamp, Tim Summers, Melanie Fritsch, and Mark Sweeney, drop scholars in wonderful locations in the UK and Europe to study what anyone in their right mind would love – music in video games, its history and its impact. Leipzig is a place of great historical, scientific, and artistic significance. Let’s see: Bach, Mendelssohn, Wagner, Goethe, Leibniz, etc., all have strong associations with this place (and ‘etc.’ means a lot in this context).
As a music theorist, and a new member of the Melodrive team, I’m fascinated by how music in video games and VR has evolved and continues to evolve today. This is important to me because it concerns how music has changed in terms of its organisation and sophistication over the years. It is clear, even to a casual gamer like myself, that the aesthetic significance and evolution of video game music are critical concerns for the history of music, and demand serious attention. Particularly so since, from an academic perspective at least, the genre is a little more niche than it deserves to be.
There is something special about NieR: Automata. Developed by Platinum Games and released in 2017, NieR: Automata is a sequel to the cult classic game NieR (Cavia, 2010). Set thousands of years in the future, NieR: Automata is an action role-playing game in which the player takes control of the androids 2B, 9S and A2. Their aim is to rid the earth of alien machines and pave the way for the last humans, who have settled on the moon, to return to earth. Keiichi Okabe, the composer for NieR, NieR: Automata and the Drakengard series, uses adaptive music, including themes rescored from previous games, to deepen the emotional connection between player and character.
The famous entrepreneur Ray Kurzweil predicted that by 2029 brains will merge with machines, making people smarter than ever. Even if most of the time we don’t realise it, machines and artificial intelligence (AI) are already extending our capabilities. Think of the last time you visited a website in a language you can’t speak. I would guess you understood its content anyway, thanks to the decent translation provided by Google. What about the last time you asked an AI assistant (Siri, Alexa, Cortana etc.) to find information for you?
In this blog post series, I outline how AI can augment human composers. In particular, I’ll touch on the techniques and the opportunities that AI opens up to games composers for adaptive music. (If you don’t know what adaptive music is, have a look at this post I wrote a few months ago for a brief introduction.) This first post sets the stage, discussing some of the limitations composers face when working with adaptive music.
We’re always thinking of great examples of game soundtracks here at Melodrive HQ. We decided to come up with our own personal list of the best-of-the-best when it comes to adaptive music in games.
If you’re not sure what we mean when we say ‘adaptive music’, you should check out one of our previous posts, where we talked about the idea in some detail. TL;DR, adaptive music is dynamic and ever-changing. It reacts to the player and the game to intensify the immersion and emotion in the game, and (hopefully) improve the player’s experience.
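One simple way to picture “reacts to the player and the game” is as a mapping from game state to musical parameters. Here’s a minimal sketch of the layered (vertical remixing) approach, where game state drives how loud each pre-composed stem plays. This is an illustration, not how any particular engine works; all the names (`mix_levels`, the stem names, the danger formula) are made up for the example:

```python
# Minimal sketch of layered adaptive music: game state in, per-stem
# volumes out. All names and thresholds here are hypothetical.

def mix_levels(health: float, enemies_nearby: int) -> dict:
    """Map game state (health in [0, 1]) to stem volumes in [0, 1]."""
    # A crude 'danger' score: more enemies and less health = more danger.
    danger = min(1.0, enemies_nearby / 5) * (1.0 - health)
    return {
        "ambient_pad": 1.0 - danger,                   # calm layer fades out under threat
        "percussion": danger,                          # drums fade in as danger rises
        "combat_lead": 1.0 if danger > 0.7 else 0.0,   # hard switch at high danger
    }

# Safe exploration: only the ambient layer plays.
print(mix_levels(health=1.0, enemies_nearby=0))
# Surrounded at low health: full percussion plus the combat lead.
print(mix_levels(health=0.0, enemies_nearby=10))
```

A real implementation would also crossfade on musical boundaries (beats or bars) rather than snapping volumes instantly, but the core idea is the same: music as a function of game state.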
By the way, this list is just in chronological order and by no means ranks the games.
Ever wanted to do more with the music or SFX in your game? Maybe you want to go beyond triggering audio clips with basic effects towards infinite variations of explosions or gunfire? Maybe your player characters are robots and you want to vocode the player’s microphone input? Perhaps you want complete playable instruments within your game, or unique melodies composed for each user-generated character à la Spore?
If so, then Pure Data (Pd for short) may be just what you need. Sure, you can do a lot of these things with FMOD or Wwise, but Pd makes the process simple and elegant, and best of all: it’s free. If this sounds like your cup of tea, then read on!
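To give a taste of what that looks like, here’s a rough sketch of a classic procedural explosion voice in the box-diagram shorthand Pd users draw (this is not a runnable patch file, just the wiring): filtered noise shaped by a short envelope. The send name `explode` and the 800 Hz cutoff are arbitrary choices for the example.

```
[noise~]          [r explode]       <- bang sent from your game code per explosion
 |                 |
[lop~ 800]        [1 2, 0 300 2(    <- message to vline~: rise to 1 in 2 ms,
 |                 |                   then fall back to 0 over 300 ms
 |                [vline~]          <- the amplitude envelope, as a signal
 |     ___________/
[*~ ]                               <- noise shaped by the envelope
 |
[dac~]                              <- out to the sound card
```

Because everything here is a parameter, variation is cheap: route a [random] object into the filter cutoff or the decay time and every explosion sounds slightly different, with no extra audio assets.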