We’ve conducted a number of studies to understand what people think about music in games and interactive media.
Our main goals were to gather opinion on current music solutions, to understand attitudes towards interactive and adaptive music, to learn what people might want from these ideas, and to see how they think Melodrive could fit in with those aspirations. We wanted to get at underlying questions such as: Are current music solutions too limited and repetitive? Do people want to be involved and create music themselves? Does interactive music enhance interactive experiences and gaming? We put together a survey with questions focusing on these topics.
We posted the survey to popular forums and social media, such as VR/gaming subreddits and Facebook groups. We paid particular attention to gamers and those interested in VR, such as players of Roblox (a game creation platform) and enthusiasts of social VR platforms like High Fidelity and VRChat.
We gathered a lot of interesting data, much of which bodes well for the prospects of interactive and adaptive music. We compiled responses from 179 participants across four demographics: gamers and VR/AR users, Roblox players, and the High Fidelity and VRChat communities.
Let’s look at the results a bit closer!
At Melodrive we are constantly trying to push the limits of the sonic and musical experience in modern games, and particularly in the next generation of VR immersion. Game audio has come a long way since the early frontier days of tapes and cartridges, and in this post we take some time to look back at the history of machine-assisted, machine-generated and procedural music, highlighting its many challenges and innovations through some key examples.
Not more than a Herculean stone’s throw away from the wonderful and iconic St Thomas Church, where Bach composed tons of sleek cantatas, lies the stately Hochschule für Musik und Theater, the regal setting of the video game musicology conference Ludo2018. This is one of the field’s foremost annual meets, and took place last weekend (13-15 April) in Leipzig. The Ludo conference was founded by the first (and in my opinion, the best) research group in ludomusicology, the Ludomusicology Research Group, whose Venerable Elders, Michiel Kamp, Tim Summers, Melanie Fritsch and Mark Sweeney, drop scholars in wonderful locations in the UK and Europe to study what anyone in their right mind would love: music in video games, its history and its impact. Leipzig is a place of great historical, scientific and artistic significance. Let’s see: Bach, Mendelssohn, Wagner, Goethe, Leibniz, etc., all have strong associations with this place (and ‘etc.’ means a lot in this context).
As a music theorist, and a new member of the Melodrive team, I’m fascinated by how music in video games and VR has evolved and continues to evolve today. This matters to me because it concerns how music has changed in its organisation and sophistication over the years. It is clear, even to a casual gamer like myself, that the aesthetic significance and evolution of video game music are critical concerns for the history of music, and demand serious attention. Particularly so since, from an academic perspective at least, the genre is rather more niche than it deserves to be.
VR is a highly immersive medium. No doubt about that. VR players don’t just see the world through a screen, as is the case in normal video games; they are literally inside that world and can interact with its objects and environments in an intuitive way. Most of my friends who tried VR for the first time were shocked by the experience. They told me that they completely lost track of time and that they felt as if they had moved to another reality. Simply put, they were deeply immersed.
There are several factors that contribute to immersion in an experience. One that is very important, and at the same time is often overlooked, is music.
Composers have always believed that music can increase players’ immersion in digital content, be it videos, video games or VR. For interactive content, composers like Guy Whitmore, who are at the forefront of music making in non-linear settings, know that adaptive music can make a big difference to immersion (check out this post for an explanation of what we mean by adaptive music). The reasoning is quite simple. With adaptive music, no matter how the user behaves, the music is always in sync with the emotions portrayed in the visuals and the storyline. Here’s an example. My jolly village gets attacked by dark knights. The music, being adaptive, dynamically shifts from happy to dramatic. The double bass kicks in and the chords get more aggressive. In other words, the audio elements of the experience reinforce the story told through the visuals. Composers suggest that this reinforced feedback between different elements of an interactive experience increases immersion. This is intuitive and sounds like a plausible hypothesis, but… it’s still a hypothesis. No one had yet tested it out in the real world — until now!
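The village example can be sketched in code as a simple mapping from game events to target musical states. This is a hypothetical illustration, not Melodrive’s actual system: the event names, moods and parameters below are all invented for the sake of the example.

```python
# Hypothetical sketch of adaptive music: game events select a target
# musical state, which an audio engine would then crossfade toward.
from dataclasses import dataclass

@dataclass
class MusicState:
    mood: str
    tempo_bpm: int
    stems: tuple  # active instrument layers

# Invented mapping from game events to target music states.
MOOD_TABLE = {
    "village_peaceful": MusicState("happy", 96, ("flute", "strings")),
    "village_attacked": MusicState("dramatic", 132, ("double_bass", "brass", "percussion")),
}

def on_game_event(event: str, current: MusicState) -> MusicState:
    """Return the music state to transition toward; unknown events keep the current state."""
    return MOOD_TABLE.get(event, current)

state = MusicState("happy", 96, ("flute", "strings"))
state = on_game_event("village_attacked", state)
print(state.mood, state.tempo_bpm)  # prints: dramatic 132
```

In a real engine the transition would be a musically-aware crossfade or re-orchestration rather than an instant switch, but the core idea, keeping the score in sync with game state, is the same.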
There is something special about NieR: Automata. Developed by Platinum Games and released in 2017, NieR: Automata is a sequel to the cult classic game NieR (Cavia, 2010). Set thousands of years in the future, NieR: Automata is an action role-playing game in which the player takes control of the androids 2B, 9S and A2. Their aim is to rid the earth of alien machines and pave the way for the last humans, who have settled on the moon, to return to earth. Keiichi Okabe, the composer for NieR, NieR: Automata and the Drakengard series, uses adaptive music, including material rescored from previous games, to deepen the emotional connection between player and character.
The famous entrepreneur Ray Kurzweil predicted that by 2029 brains will merge with machines, making people smarter than ever. Even if we don’t realise it most of the time, machines and artificial intelligence (AI) are already extending our capabilities. Think of the last time you visited a website in a language you can’t speak. I would guess you understood its content anyway, thanks to the decent translation provided by Google. What about the last time you asked an AI assistant (Siri, Alexa, Cortana, etc.) to find information for you?
In this blog post series, I outline how AI can augment human composers. In particular, I’ll touch on the techniques and opportunities that AI opens up to games composers for adaptive music. (If you don’t know what adaptive music is, have a look at this post I wrote a few months ago for a brief introduction.) This first post sets the stage by discussing some of the limitations composers face when working with adaptive music.
We’re always thinking of great examples of game soundtracks here at Melodrive HQ. We decided to come up with our own personal list of the best-of-the-best when it comes to adaptive music in games.
If you’re not sure what we mean when we say ‘adaptive music’, you should check out one of our previous posts, where we talked about the idea in some detail. TL;DR: adaptive music is dynamic and ever-changing. It reacts to the player and the game to intensify immersion and emotion, and (hopefully) improve the player’s experience.
By the way, this list is just in chronological order and by no means ranks the games.
Without further ado, here’s the list!
Last month, I had the honour of interviewing game composer Guy Whitmore. We shared ideas on video game music, with a specific focus on the use of adaptive techniques in video games, and he offered some great insights on the future of music making in games. Guy has worked in the video game industry for more than 20 years, specialising in adaptive music. You could say he’s an adaptive music evangelist and educator! (For a quick introduction to adaptive music, check out this post I wrote some time ago.) Guy has worked as an audio director and composer for big companies like Electronic Arts and Microsoft, as well as freelance, and he’s behind notable game scores such as Die Hard: Nakatomi Plaza, Shivers and Shogo. Read on for our conversation.
We may not be aware of it, but most of the music we’re used to listening to is linear. Linear music has a beginning, a development and an ending that sound exactly the same every time we listen to it. The soundtrack of a movie, the songs of Bob Dylan and a Mozart symphony are all examples of linear music. Linear music works great for concert music or as musical background for fixed media. Every time we watch Star Wars, for instance, the sequence of events occurring on screen is always the same. No matter how much we would like it to be different, (40-year-old spoiler alert) Obi-Wan Kenobi is going to be killed by Darth Vader! The fixed structure of a movie is great for the composer writing the soundtrack, because he or she can create musical cues that are specifically tailored to the on-screen images on a moment-by-moment basis. When the cannons of the Millennium Falcon hit the ships of the Empire, for example, the explosions can be underscored by the music, and the overall excitement of the moment can be captured and enhanced by the soundtrack.