When it comes to virtual reality (VR) experiences, “immersion” and “immersion multipliers” are highly valued by players and developers alike, but what do they really mean? What exactly is immersion, and why is it seen as the holy grail for VR? In this post we try to get to the bottom of what makes someone feel immersed, and offer some quick strategies you can use to make your VR experiences far more immersive, just by thinking about audio.
As a music theorist, and a new member of the Melodrive team, I’m fascinated by how music in video games and VR has evolved and continues to evolve today. This matters to me because it concerns how music has changed in its organisation and sophistication over the years. It is clear, even to a casual gamer like myself, that the aesthetic significance and evolution of video game music are critical concerns for the history of music, and demand serious attention, particularly since, from an academic perspective at least, the genre remains more niche than it deserves to be.
VR is a highly immersive medium. No doubt about that. VR players don’t just see the world through a screen, as is the case in normal video games; they are literally inside that world and can interact with its objects and environments in an intuitive way. Most of my friends who tried VR for the first time were shocked by the experience. They told me that they completely lost track of time and that they felt as if they had moved to another reality. Simply put, they were deeply immersed.
There are several factors that contribute to immersion in an experience. One that is very important, and at the same time is often overlooked, is music.
Composers have always thought that music has the ability to increase the level of immersion of players experiencing digital content, be it videos, video games or VR. For interactive content, composers like Guy Whitmore, who are at the forefront of music making in non-linear settings, know that adaptive music can make a big difference for immersion (check this post for an explanation of what we mean when we say adaptive music). The reasoning is quite simple. With adaptive music, no matter how the user behaves, the music is always in sync with the emotions portrayed in the visuals and in the storyline. Here’s an example. My jolly village gets attacked by dark knights. The music, being adaptive, dynamically shifts from happy to dramatic: the double bass kicks in and the chords get more aggressive. In other words, the audio elements of the experience reinforce the story told through the visuals. Composers suggest that this reinforced feedback between different elements of an interactive experience increases immersion. This is intuitive and sounds like a plausible hypothesis, but… it’s still a hypothesis. Until now, no one had tested it in the real world!
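To make the idea concrete, here is a minimal sketch of the adaptive-music logic described above: the game reports a change of state, and the music layer glides its parameters (tempo, mode, intensity) toward the new mood instead of cutting abruptly. All names here are illustrative assumptions, not taken from Melodrive’s engine or any real audio middleware.

```python
# Hypothetical moods and their target musical parameters.
MOODS = {
    "peaceful": {"tempo": 96,  "mode": "major", "intensity": 0.2},
    "dramatic": {"tempo": 144, "mode": "minor", "intensity": 0.9},
}

class AdaptiveScore:
    """Toy adaptive-music layer that retargets parameters on game events."""

    def __init__(self, state="peaceful"):
        self.state = state
        self.params = dict(MOODS[state])

    def on_event(self, new_state, steps=4):
        """Interpolate toward the new mood over a few beats and
        return the intermediate parameter sets."""
        target = MOODS[new_state]
        path = []
        for i in range(1, steps + 1):
            t = i / steps
            path.append({
                "tempo": round(self.params["tempo"]
                               + t * (target["tempo"] - self.params["tempo"])),
                "intensity": round(self.params["intensity"]
                                   + t * (target["intensity"] - self.params["intensity"]), 2),
                # Switch mode halfway through the transition.
                "mode": target["mode"] if t > 0.5 else self.params["mode"],
            })
        self.params = dict(target)
        self.state = new_state
        return path

score = AdaptiveScore()
# Dark knights attack the jolly village: happy -> dramatic over 4 beats.
for step in score.on_event("dramatic"):
    print(step)
```

A real engine would of course crossfade stems or re-orchestrate on musical boundaries rather than print numbers, but the core loop is the same: game state in, musical parameters out, with smooth transitions so the score never jumps.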
We had the opportunity to sit down with Brie Code at Silo Coffee in Friedrichshain, Berlin. Brie is a speaker, writer, and the CEO of a new game studio, Tru Luv Media. Before founding Tru Luv Media, Brie was an AI programmer: she built the AI for Company of Heroes (along with a colleague), and she was lead programmer for Child of Light and three Assassin’s Creed titles at Ubisoft in Montreal, Canada.
We highlighted some of Brie’s work in investigating reward systems in our last blog post, Approaching Feminism as a Male Data Scientist. She found that in addition to the traditional fight-or-flight response system, there was an overlooked reward system that stressful situations can evoke, called tend-and-befriend.
Painting Equality, used with permission from the artist, Osnat Tzadok.
The internet has provided a new platform for an obscene amount of information. Anyone with a computer and a connection can now be heard in the international community. Through the accessibility of information, citizens have become journalists, comedians, celebrities, laughing stocks, community leaders and even scientists, just by means of access to this information tsunami. One particular aspect of this is the ability of marginalized groups to directly confront those who are more privileged, or who even perpetrate that marginalization.
There is something special about NieR: Automata. Developed by Platinum Games and released in 2017, NieR: Automata is a sequel to the cult classic game NieR (Cavia, 2010). Set thousands of years in the future, NieR: Automata is an action role-playing game where the player takes control of the androids 2B, 9S and A2. Their aim is to rid the Earth of alien machines and pave the way for the last humans, who have settled on the moon, to return. Keiichi Okabe, the composer for NieR, NieR: Automata and the Drakengard series, uses adaptive music, including themes rescored from previous games, to deepen the emotional connection between player and character.
The famous entrepreneur Ray Kurzweil predicted that by 2029 brains will merge with machines, making people smarter than ever. Even if most of the time we don’t realise it, machines and artificial intelligence (AI) are already extending our capabilities. Think of the last time you visited a website in a language you don’t speak. I would guess you understood its content anyway, thanks to the decent translation provided by Google. What about the last time you asked an AI assistant (Siri, Alexa, Cortana etc.) to find information for you?
In this blog post series, I outline how AI can augment human composers. In particular, I’ll touch on the techniques and the opportunities that AI opens up for game composers working with adaptive music. (If you don’t know what adaptive music is, have a look at this post I wrote a few months ago for a brief introduction.) This first post sets the stage by discussing some of the limitations composers face when working with adaptive music.
We’re always thinking of great examples of game soundtracks here at Melodrive HQ. We decided to come up with our own personal list of the best-of-the-best when it comes to adaptive music in games.
If you’re not sure what we mean when we say ‘adaptive music’, you should check out one of our previous posts, where we talked about the idea in some detail. TL;DR, adaptive music is dynamic and ever-changing. It reacts to the player and the game to intensify the immersion and emotion in the game, and (hopefully) improves their experience.
By the way, this list is in chronological order and by no means ranks the games.
Without further ado, here’s the list!
Last month, I had the honour of interviewing game composer Guy Whitmore. We shared ideas on video game music, with a specific focus on the use of adaptive techniques, and he offered some great insights on the future of music making in video games. Guy has been in the video games industry for more than 20 years, specialising in adaptive music; you could say he’s an adaptive music evangelist and educator! (For a quick introduction to adaptive music, check this post I wrote some time ago.) Guy has worked as an audio director and composer for big companies like Electronic Arts and Microsoft, as well as a freelancer, and he’s the author of notable game scores like Die Hard: Nakatomi Plaza, Shivers and Shogo. Read on for our chat.
We stopped in sunny LA at Quincy Jones’ office to meet up with the incredible Jacob Collier and discuss musical bluffs, rhythmic cadences and mind mappings.
Before diving into our Q&A, Jacob and I had a wonderful talk about the future of technology in music. We found our visions to be surprisingly aligned. Jacob is a man consumed by the mapping of emotion to different musical components. He said that he has always experienced and explored harmonies in a very emotional way, feeling out the different chords based almost purely on his personal perception. At Melodrive, one of our main tenets is to bridge the gap between computers and musical emotion.