There is something special about NieR: Automata. Developed by PlatinumGames and released in 2017, NieR: Automata is a sequel to the cult classic NieR (Cavia, 2010). Set thousands of years in the future, it is an action role-playing game in which the player takes control of the androids 2B, 9S and A2. Their aim is to rid the Earth of alien machines and pave the way for the last humans, who have settled on the moon, to return home. Keiichi Okabe, the composer for NieR, NieR: Automata and the Drakengard series, uses adaptive music, including rescored themes from previous games, to deepen the emotional connection between the player and the characters.
The famous inventor and futurist Ray Kurzweil predicted that by 2029 human brains will merge with machines, making people smarter than ever. Even if we don't realise it most of the time, machines and artificial intelligence (AI) are already extending our capabilities. Think of the last time you visited a website in a language you can't speak. I would guess you understood its content anyway, thanks to the decent translation provided by Google. What about the last time you asked an AI assistant (Siri, Alexa, Cortana, etc.) to find information for you?
In this blog post series, I outline how AI can augment human composers. In particular, I'll touch on the techniques and opportunities that AI opens up to game composers working with adaptive music. (If you don't know what adaptive music is, have a look at this post I wrote a few months ago for a brief introduction.) This first post sets the stage by discussing some of the limitations composers face when working with adaptive music.