In the first part of this blog series, I talked about the streamer subculture and the significant figures who have risen to prominence within it. I looked at the common genres of music used, and discussed some key issues that have become connected to this niche activity. This post will look more deeply into one of those problems: the use of automatic filtering, which is a game-changer for many users. I want to explore the creative, economic, and philosophical implications of this technology in more detail, and suggest possible solutions.
A background on background music in video game videos
A gaming subculture, video game streaming, has emerged that involves a fluid interaction between games, people, and music. It has been made possible by advancements in computing power and streaming technology, the development of games that can be played over the internet, and readily accessible music. If you’re from an older generation, you might think it’s loopy to watch other people play video games for fun. People of Generation Y (roughly born 1980–2000) – my generation – still mainly play conventional computer games in conventional social settings. Some are engaged in localised gaming communities, usually amongst a group of friends. Others get involved in amateur and professional societies and competitions, such as the Super Smash Bros-playing groups popular throughout the UK. More often than not, we early Generation Y-ers are less willing to broadcast our interactions with others. Only a select few feel confident enough in this artform to build an online subculture around it.
Video game music comes in a variety of forms, widened by the wealth of mixed media that make up games themselves — gameplay, music, storytelling, game theory, art design, VR, etc. — which amount to an exuberant tapestry. As the video game industry increases its breadth and appeal, game music (and the games themselves) must become highly eclectic. In this post, we examine the use of music in indie and game jam games, looking at the niches and subcultures that exist therein.
We all experience music almost every day, and it gives rise to various emotions, whether through commercials, films, games, or our personal connection with a favourite album. It is common informal knowledge that major keys make happier melodies than minor keys, and that different scales and performative elements can make music sound happier or sadder.
But what is it about the sound of a low legato cello that makes it sadder than a high, jumpy melody on a marimba? In this post, we take a look at how sounds can evoke emotions, starting with their building blocks.
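One of those building blocks is surprisingly small: the "happy" major and "sad" minor chords mentioned above differ by just one semitone, the interval between the root and the third. A toy illustration in MIDI note numbers (middle C = 60):

```python
# The major/minor difference comes down to a single semitone: the third
# of the chord. Triads written as MIDI note numbers (C4 = 60).

C_MAJOR_TRIAD = [60, 64, 67]  # C, E, G  -- major third (4 semitones) above the root
C_MINOR_TRIAD = [60, 63, 67]  # C, Eb, G -- minor third (3 semitones) above the root

def third_interval(triad):
    """Semitones between the root and the third of a triad."""
    return triad[1] - triad[0]

print(third_interval(C_MAJOR_TRIAD))  # 4 -- tends to sound brighter
print(third_interval(C_MINOR_TRIAD))  # 3 -- tends to sound darker
```

That one-semitone shift is, of course, only part of the story; register, timbre, and articulation (the low legato cello versus the jumpy marimba) do a lot of the emotional work too.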
We’ve conducted a number of studies to understand what people think about music in games and interactive media.
Our main goal was to gather opinions on current music solutions, to understand attitudes towards interactive and adaptive music, to find out what people might want from these ideas, and to see how Melodrive could fit in with those aspirations. We wanted to get at underlying questions such as: Are current music solutions too limited and repetitive? Do people want to be involved and create music themselves? Does interactive music enhance interactive experiences and gaming? We put together a survey focussing on these topics.
We submitted the survey to popular forums and social media, such as VR/gaming subreddits and Facebook groups. We gave attention to gamers and those interested in VR, such as players of Roblox (a game creation platform), and enthusiasts of social VR platforms, such as High Fidelity and VRChat.
We gathered lots of interesting data, much of which bodes well for the prospects of interactive and adaptive music. We compiled data from 179 respondents across four demographics: gamers and VR/AR users, Roblox gamers, and the High Fidelity and VRChat communities.
Let’s look at the results a bit more closely!
#EmoJam VR Hack Winners!
We’ve just wrapped up a weekend of VR hacking with #EmoJam in San Francisco! We had a fantastic speaker line-up, and really interesting conversations all weekend long. We would like to thank our main sponsor, ARVR Academy, for supporting us, and Microsoft Reactor in SF for hosting us!
19th-20th May, Microsoft Reactor, San Francisco CA
Emotions, music and VR are central to what we do at Melodrive. That’s why we’ve decided to organise #EmoJam, a weekend dedicated to hacking emotion, VR, games and music. We have envisaged a hackathon that addresses perhaps the most salient problem for interactive storytelling, AI, and VR: emotion. Emotion is a notorious puzzle for just about any field that deals with the mind/brain. It is clear that we have emotions to help us survive in the world. Without them, we would certainly be different, though perhaps nonetheless intelligent, machines. On the whole, we are not usually interested just in what intelligence is and how to model it; we are specifically interested in a particular brand of intelligence, namely human intelligence. And emotion plays a large part in that.
When it comes to virtual reality (VR) experiences, “immersion” and “immersion multipliers” are highly valued by players and developers alike, but what do they really mean? What exactly is immersion, and why is it seen as the holy grail for VR? In this post we try to get to the bottom of what makes someone feel immersed, and provide some quick strategies you can use to make your VR experiences a lot more immersive – just by thinking about audio.
VR is a highly immersive medium. No doubt about that. VR players don’t just see the world through a screen, as is the case in normal video games; they are literally inside that world and can interact with its objects and environments in an intuitive way. Most of my friends who tried VR for the first time were shocked by the experience. They told me that they completely lost track of time and felt as if they had moved to another reality. Simply put, they were deeply immersed.
There are several factors that contribute to immersion in an experience. One that is very important, and at the same time is often overlooked, is music.
Composers have always thought that music has the ability to increase the level of immersion of players experiencing digital content, be it videos, video games or VR. For interactive content, composers like Guy Whitmore, who are at the forefront of music making in non-linear settings, know that adaptive music can make a big difference for immersion (check this post for an explanation of what we mean when we say adaptive music). The reasoning is quite simple. With adaptive music, no matter how the user behaves, the music is always in sync with the emotions portrayed in the visuals and in the storyline. Here’s an example. My jolly village gets attacked by dark knights. The music, being adaptive, dynamically shifts from happy to dramatic. The double bass kicks in and the chords get more aggressive. In other words, the audio elements of the experience reinforce the story told through the visuals. Composers suggest that this reinforced feedback between different elements of an interactive experience increases immersion. This is intuitive and sounds like a plausible hypothesis, but… it’s still a hypothesis. No one had yet tested it out in the real world — until now!
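The village-under-attack example above can be sketched as a simple mapping from game state to musical parameters. The names below (`MusicState`, `adapt_music`, the `threat_level` variable) are hypothetical illustrations for this post, not Melodrive's actual API; real adaptive music systems are far richer, but the core idea is the same:

```python
# Minimal sketch of adaptive music logic: keep the score in sync with
# the emotion on screen by deriving musical parameters from game state.
# All names here are hypothetical, invented for illustration.

from dataclasses import dataclass

@dataclass
class MusicState:
    mode: str         # "major" (happy) or "minor" (dramatic)
    tempo_bpm: int
    intensity: float  # 0.0 (calm) to 1.0 (aggressive)

def adapt_music(threat_level: float) -> MusicState:
    """Pick musical parameters from a single game variable.

    threat_level: 0.0 = peaceful jolly village, 1.0 = dark knights attacking.
    """
    if threat_level < 0.3:
        # Peaceful village: bright, relaxed music.
        return MusicState(mode="major", tempo_bpm=96, intensity=threat_level)
    if threat_level < 0.7:
        # Danger approaching: shift to minor, pick up the pace.
        return MusicState(mode="minor", tempo_bpm=120, intensity=threat_level)
    # Full attack: dramatic minor mode, fast tempo, the double bass kicks in.
    return MusicState(mode="minor", tempo_bpm=150, intensity=threat_level)

# The village is peaceful, then the knights arrive:
print(adapt_music(0.1))
print(adapt_music(0.9))
```

Because the parameters are recomputed continuously from game state rather than baked into a fixed track, the music stays in sync no matter what the player does.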
We had the opportunity to sit down with Brie Code at Silo Coffee in Friedrichshain, Berlin. Brie is a speaker, writer, and the CEO of a new game studio, Tru Luv Media. Before founding Tru Luv Media, Brie was an AI programmer: she built the AI for Company of Heroes (along with a colleague), and she was lead programmer for Child of Light and three Assassin’s Creed titles at Ubisoft in Montreal, Canada.
We highlighted some of Brie’s work in investigating reward systems in our last blog post, Approaching Feminism as a Male Data Scientist. She found that in addition to the traditional fight-or-flight response system, there was an overlooked reward system that stressful situations can evoke, called tend-and-befriend.