We all experience music almost every day in ways that give rise to various emotions, whether through commercials, films, games or a personal connection with a favorite album. It's common knowledge that major keys tend to sound happier than minor keys, and that different scales and performance elements can make music sound sadder or happier, depending on their characteristics.
But what is it about the sound of a low legato cello that makes it sadder than a high, jumpy melody on a marimba? In this post, we take a look at how sounds can evoke emotions, starting with their building blocks.
We’ve conducted a number of studies to understand what people think about music in games and interactive media.
Our main goal was to gather opinions on current music solutions, understand attitudes towards interactive and adaptive music, learn what people might want from these ideas, and see how they imagine Melodrive fitting in with those aspirations. We wanted to get at underlying questions such as: Are current music solutions too limited and repetitive? Do people want to be involved and create music themselves? Does interactive music enhance interactive experiences and gaming? We put together a survey with questions focussing on these topics.
We posted the survey to popular forums and social media, such as VR/gaming subreddits and Facebook groups. We paid particular attention to gamers and those interested in VR, such as players of Roblox (a game creation platform) and enthusiasts of social VR platforms like High Fidelity and VRChat.
We gathered a lot of interesting data, much of which bodes well for the prospects of interactive and adaptive music. We compiled data from 179 respondents across four demographics: gamers and VR/AR users, Roblox players, and the High Fidelity and VRChat communities.
Let’s look at the results a bit closer!
#EmoJam VR Hack Winners!
We’ve just wrapped up a weekend of VR hacking with #EmoJam in San Francisco! We had a fantastic speaker line-up, and really interesting conversations all weekend long. We would like to thank our main sponsor, ARVR Academy, for supporting us, and Microsoft Reactor in SF for hosting us!
19th-20th May, Microsoft Reactor, San Francisco CA
Emotions, music and VR are central to what we do at Melodrive. That’s why we’ve decided to organise #EmoJam, a weekend dedicated to hacking emotion, VR, games and music. We have envisaged a hackathon that addresses perhaps the most salient problem for interactive storytelling, AI, and VR: emotion. Emotion is a notorious puzzle for just about any field that deals with the mind/brain. It is clear that we have emotions to help us survive in the world. Without them, we would certainly be different, though perhaps still intelligent, machines. On the whole, we are not usually interested just in what intelligence is and how to model it; we’re specifically interested in a particular brand of intelligence, namely human intelligence. And emotion plays a large part in that.
When it comes to virtual reality (VR) experiences, “immersion” and “immersion multipliers” are highly valued by players and developers alike, but what do they really mean? What exactly is immersion, and why is it seen as the holy grail for VR? In this post we try to get to the bottom of what makes someone feel immersed, and provide some quick strategies you can use to make your VR experiences a lot more immersive, just by thinking about audio.
VR is a highly immersive medium. No doubt about that. VR players don’t just see the world through a screen, as is the case in normal video games; they are literally inside that world and can interact with its objects and environments in an intuitive way. Most of my friends who tried VR for the first time were stunned by the experience. They told me that they completely lost track of time and felt as if they had moved to another reality. Simply put, they were deeply immersed.
There are several factors that contribute to immersion in an experience. One that is very important, and at the same time is often overlooked, is music.
Composers have long believed that music can deepen the immersion of players experiencing digital content, be it videos, video games or VR. For interactive content, composers like Guy Whitmore, who are at the forefront of music making in non-linear settings, know that adaptive music can make a big difference for immersion (check this post for an explanation of what we mean when we say adaptive music). The reasoning is quite simple. With adaptive music, no matter how the user behaves, the music is always in sync with the emotions portrayed in the visuals and in the storyline. Here’s an example. My jolly village gets attacked by dark knights. The music, being adaptive, dynamically shifts from happy to dramatic. The double bass kicks in and the chords get more aggressive. In other words, the audio elements of the experience reinforce the story told through the visuals. Composers suggest that this reinforced feedback between different elements of an interactive experience increases immersion. This is intuitive and sounds like a plausible hypothesis, but it’s still a hypothesis. No one had tested it in the real world, until now!
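To make the idea concrete, here is a minimal sketch of an adaptive music controller like the one in the village example above. Everything here is a hypothetical illustration for this post, not Melodrive's actual system: the mood names, parameters and the `under_attack` flag are all invented for the example.

```python
# Hypothetical sketch: map game state to a musical mood, so the score
# shifts from "happy" to "dramatic" when the village comes under attack.

class AdaptiveMusicController:
    """Picks musical parameters based on the current game state."""

    # Invented mood presets for illustration only.
    MOODS = {
        "happy": {
            "tempo_bpm": 110,
            "dynamics": "soft",
            "instruments": ["flute", "harp"],
        },
        "dramatic": {
            "tempo_bpm": 150,
            "dynamics": "loud",
            "instruments": ["double bass", "brass"],
        },
    }

    def __init__(self):
        self.current_mood = "happy"

    def update(self, game_state):
        # The game loop would call this each frame or on state changes.
        mood = "dramatic" if game_state.get("under_attack") else "happy"
        self.current_mood = mood
        return self.MOODS[mood]


controller = AdaptiveMusicController()
peaceful = controller.update({"under_attack": False})
attacked = controller.update({"under_attack": True})
```

In a real engine this mapping is far richer (smooth transitions, layered stems, tempo curves), but the core loop is the same: game state in, musical parameters out, every time the state changes.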
We had the opportunity to sit down with Brie Code at Silo Coffee in Friedrichshain, Berlin. Brie is a speaker, writer, and the CEO of a new game studio, Tru Luv Media. Before founding Tru Luv Media, Brie was an AI programmer–she built the AI for Company of Heroes (along with a colleague), and she was lead programmer for Child of Light and three Assassin’s Creed titles at Ubisoft in Montreal, Canada.
We highlighted some of Brie’s work in investigating reward systems in our last blog post, Approaching Feminism as a Male Data Scientist. She found that in addition to the traditional fight-or-flight response system, there was an overlooked reward system that stressful situations can evoke, called tend-and-befriend.
We’re always thinking of great examples of game soundtracks here at Melodrive HQ. We decided to come up with our own personal list of the best-of-the-best when it comes to adaptive music in games.
If you’re not sure what we mean when we say ‘adaptive music’, you should check out one of our previous posts, where we talked about the idea in some detail. TL;DR: adaptive music is dynamic and ever-changing. It reacts to the player and the game to intensify the immersion and emotion in the game, and (hopefully) improve the player’s experience.
By the way, this list is just in chronological order and by no means ranks the games.
Without further ado, here’s the list!
Last month, I had the honour of interviewing game composer Guy Whitmore. We shared ideas on video game music, with a specific focus on the use of adaptive techniques, and he offered some great insights on the future of music making in video games. Guy has worked in the video games industry for more than 20 years, specialising in adaptive music. You could say he’s an adaptive music evangelist and educator! For a quick introduction to adaptive music, check this post I wrote some time ago. Guy has worked as an audio director and composer for big companies like Electronic Arts and Microsoft, as well as a freelancer. He’s the author of notable game scores like Die Hard: Nakatomi Plaza, Shivers and Shogo. What follows is our great chat.
We stopped in sunny LA at Quincy Jones’ office to meet up with the incredible Jacob Collier and discuss musical bluffs, rhythmic cadences and mind mappings.
Before diving into our Q&A, Jacob and I had a wonderful talk about the future of technology in music. We found our visions to be surprisingly aligned. Jacob is a man consumed by the mapping of emotion to different musical components. He said that he has always experienced and explored harmonies in a very emotional way, feeling out the different chords based almost purely on his personal perception. At Melodrive, one of our main tenets is to bridge the gap between computers and musical emotion.