#EmoJam VR Hack Winners!
We’ve just wrapped up a weekend of VR hacking with #EmoJam in San Francisco! We had a fantastic speaker line-up, and really interesting conversations all weekend long. We would like to thank our main sponsor, ARVR Academy, for supporting us, and Microsoft Reactor in SF for hosting us!
19th-20th May, Microsoft Reactor, San Francisco, CA
Emotions, music and VR are central to what we do at Melodrive. That’s why we decided to organise #EmoJam, a weekend dedicated to hacking emotion, VR, games and music. We envisaged a hackathon that addresses perhaps the most salient problem for interactive storytelling, AI and VR: emotion. Emotion is a notorious puzzle for just about any field that deals with the mind/brain. It is clear that we have emotions to help us survive in the world. Without them, we would certainly be different, though perhaps no less intelligent, machines. On the whole, we are not usually interested just in what intelligence is and how to model it; we’re specifically interested in a particular brand of intelligence, namely human intelligence. And emotion plays a large part in that.
When it comes to virtual reality (VR) experiences, “immersion” and “immersion multipliers” are highly valued by players and developers alike, but what do they really mean? What exactly is immersion, and why is it seen as the holy grail for VR? In this post we try to get to the bottom of what makes someone feel immersed, and provide some quick strategies you can use to make your VR experiences a lot more immersive – just by thinking about audio.
VR is a highly immersive medium. No doubt about that. VR players don’t just see the world through a screen, as is the case in normal video games; they are literally inside that world and can interact with its objects and environments in an intuitive way. Most of my friends who tried VR for the first time were shocked by the experience. They told me that they completely lost track of time and felt as if they had moved to another reality. Simply put, they were deeply immersed.
There are several factors that contribute to immersion in an experience. One that is very important, and at the same time is often overlooked, is music.
Composers have long believed that music has the ability to increase the level of immersion of players experiencing digital content, be it videos, video games or VR. For interactive content, composers like Guy Whitmore, who are at the forefront of music making in non-linear settings, know that adaptive music can make a big difference for immersion (check this post for an explanation of what we mean when we say adaptive music). The reasoning is quite simple. With adaptive music, no matter how the user behaves, the music is always in sync with the emotions portrayed in the visuals and in the storyline. Here’s an example. My jolly village gets attacked by dark knights. The music, being adaptive, dynamically shifts from happy to dramatic. The double bass kicks in and the chords get more aggressive. In other words, the audio elements of the experience reinforce the story told through the visuals. Composers suggest that this reinforced feedback between different elements of an interactive experience increases immersion. This is intuitive and sounds like a plausible hypothesis, but… it’s still a hypothesis. No one has yet tested it in the real world – until now!
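The happy-to-dramatic shift described above can be sketched as a simple state-driven music controller: the game reports a mood, and the score crossfades instrument layers toward a mix for that mood. This is a minimal illustration only – the mood names, layer names, mix values and fade logic are all hypothetical, not Melodrive’s actual system or any real audio engine’s API.

```python
# Minimal sketch of adaptive music: the score follows the game's emotional
# state by fading instrument layers in and out. All names and values here
# are illustrative, not a real engine's API.

MOOD_MIXES = {
    # mood -> target volume (0.0-1.0) for each instrument layer
    "happy":    {"flute": 1.0, "strings": 0.6, "double_bass": 0.0, "low_brass": 0.0},
    "dramatic": {"flute": 0.0, "strings": 0.8, "double_bass": 1.0, "low_brass": 0.9},
}

class AdaptiveScore:
    def __init__(self):
        self.volumes = dict(MOOD_MIXES["happy"])  # start in the jolly village
        self.target = dict(self.volumes)

    def set_mood(self, mood):
        """Called by game logic, e.g. when the dark knights attack."""
        self.target = dict(MOOD_MIXES[mood])

    def update(self, dt, fade_speed=0.5):
        """Per-frame: move each layer's volume toward its target (a crossfade)."""
        for layer, goal in self.target.items():
            cur = self.volumes[layer]
            step = fade_speed * dt
            if abs(goal - cur) <= step:
                self.volumes[layer] = goal  # close enough: snap to target
            else:
                self.volumes[layer] = cur + step * (1 if goal > cur else -1)

score = AdaptiveScore()
score.set_mood("dramatic")  # the village is attacked
for _ in range(10):         # simulate a few frames of game time
    score.update(dt=0.5)
print(score.volumes["double_bass"])  # → 1.0 (the double bass has kicked in)
```

The point of the sketch is the feedback loop the post describes: the visuals and the music share one source of truth (the mood state), so the score can never drift out of sync with what the player sees.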
We had the opportunity to sit down with Brie Code at Silo Coffee in Friedrichshain, Berlin. Brie is a speaker, writer, and the CEO of a new game studio, Tru Luv Media. Before founding Tru Luv Media, Brie was an AI programmer–she built the AI for Company of Heroes (along with a colleague), and she was lead programmer for Child of Light and three Assassin’s Creed titles at Ubisoft in Montreal, Canada.
We highlighted some of Brie’s work in investigating reward systems in our last blog post, Approaching Feminism as a Male Data Scientist. She found that in addition to the traditional fight-or-flight response system, there was an overlooked reward system that stressful situations can evoke, called tend-and-befriend.
We’re always thinking of great examples of game soundtracks here at Melodrive HQ. We decided to come up with our own personal list of the best-of-the-best when it comes to adaptive music in games.
If you’re not sure what we mean when we say ‘adaptive music’, you should check out one of our previous posts, where we talked about the idea in some detail. TL;DR: adaptive music is dynamic and ever-changing. It reacts to the player and the game to intensify the immersion and emotion in the game, and (hopefully) improve the player’s experience.
By the way, this list is just in chronological order and by no means ranks the games.
Without further ado, here’s the list!
Last month, I had the honour of interviewing game composer Guy Whitmore. We shared ideas on video game music, with a specific focus on the use of adaptive techniques, and he offered some great insights on the future of music making in video games. Guy has been in the video games industry for more than 20 years and has specialised in adaptive music – you could say he’s an adaptive music evangelist and educator! For a quick introduction to adaptive music, check this post I wrote some time ago. Guy has worked as an audio director and composer for big companies like Electronic Arts and Microsoft, as well as a freelancer. He’s the author of notable game scores like Die Hard: Nakatomi Plaza, Shivers and Shogo. Read on for the content of our great chat.
We stopped in sunny LA at Quincy Jones’ office to meet up with the incredible Jacob Collier and discuss musical bluffs, rhythmic cadences and mind mappings.
Before diving into our Q&A, Jacob and I had a wonderful talk about the future of technology in music. We found our visions to be surprisingly aligned. Jacob is a man consumed by the mapping of emotion to different musical components. He said that he has always experienced and explored harmonies in a very emotional way–feeling out the different chords based almost purely on his personal perception. At Melodrive, one of our main tenets is to bridge the gap between computers and musical emotion.
We may not be aware of it, but most of the music we’re used to listening to is linear. Linear music has a beginning, a development and an ending that sound exactly the same every time we listen to it. The soundtrack of a movie, the songs of Bob Dylan and a Mozart symphony are all examples of linear music. Linear music works great for concert music or as a musical background for fixed media. Every time we watch Star Wars, for instance, the sequence of events occurring on screen is always the same. No matter how much we would like it to be different, (40-year-old spoiler alert) Obi-Wan Kenobi is going to be killed by Darth Vader! The fixed structure of a movie is great for the composer who has to write the soundtrack, because he or she can create musical cues that are specifically tailored to the on-screen images on a moment-by-moment basis. When the cannons of the Millennium Falcon hit the ships of the Empire, for example, the explosions can be underlined by the music, and the overall excitement of the moment can be captured and enhanced by the soundtrack.