As a music theorist and a new member of the Melodrive team, I'm fascinated by how music in video games and VR has evolved and continues to evolve today. This matters to me because it reflects how musical organisation and sophistication have changed over the years. It is clear, even to a casual gamer like myself, that the aesthetic significance and evolution of video game music are important concerns for the history of music and deserve serious attention, particularly since, from an academic perspective at least, the genre remains more niche than it deserves to be.
The futurist and entrepreneur Ray Kurzweil has predicted that by 2029 human brains will merge with machines, making people smarter than ever. Even if we don't realise it most of the time, machines and artificial intelligence (AI) are already extending our capabilities. Think of the last time you visited a website in a language you can't speak: I would guess you understood its content anyway, thanks to a decent machine translation provided by Google. Or what about the last time you asked an AI assistant (Siri, Alexa, Cortana, etc.) to find information for you?
In this blog post series, I outline how AI can augment human composers. In particular, I'll touch on the techniques and opportunities that AI opens up to game composers working with adaptive music. (If you don't know what adaptive music is, have a look at this post I wrote a few months ago for a brief introduction.) This first post sets the stage by discussing some of the limitations composers face when working with adaptive music.