Interview with Grammy-winning composer Jacob Collier

We stopped in sunny LA at Quincy Jones’ office to meet up with the incredible Jacob Collier and discuss musical bluffs, rhythmic cadences and mind mappings.

Before diving into our Q&A, Jacob and I had a wonderful talk about the future of technology in music. We found our visions to be surprisingly aligned. Jacob is a man consumed by the mapping of emotion to different musical components. He said that he has always experienced and explored harmonies in a very emotional way–feeling out the different chords based almost purely on his personal perception. At Melodrive, one of our main tenets is to bridge the gap between computers and musical emotion.

Jacob had a lot of questions about how to approach musical problems with AI, and I happily obliged (it’s not everyone that wants to hear about this stuff!). One really insightful question he asked concerned the problem of musical bluffs. Often when musicians are communicating, they will reference some established technique–a harmonic cadence, a melodic pattern, or even a rhythm–that will create an expectation in the other participant. Then, they will deliberately thwart the completion of that pattern, so as to break that expectation.

How would computers be able to recognize a bluff on top of a bluff on top of a bluff, like a musician would?

Unfortunately I had no easy answer! What I did say is that, at Melodrive, we are often concerned not just with the musical material itself, but with the patterns within that material that establish a language through which the music can communicate. It is a huge challenge to create a machine that can not only establish patterns but also allude to them. Jacob hit the nail on the head with that one.
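
As a toy illustration of what it means to establish a pattern and then thwart it, here is a minimal Python sketch (entirely my own invention, and far simpler than anything Melodrive actually builds) in which a bigram model learns which continuation a melody has made expected, and flags the moment that expectation is denied:

    from collections import defaultdict

    class ExpectationModel:
        """Toy bigram model of melodic expectation (illustrative only)."""

        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def observe(self, prev_note, next_note):
            # Strengthen the expectation that prev_note leads to next_note.
            self.counts[prev_note][next_note] += 1

        def expected(self, prev_note):
            # Return the most strongly expected continuation, if any.
            followers = self.counts[prev_note]
            return max(followers, key=followers.get) if followers else None

    def find_bluffs(melody, threshold=2):
        """Flag positions where an established continuation is withheld."""
        model = ExpectationModel()
        bluffs = []
        for i in range(len(melody) - 1):
            prev, nxt = melody[i], melody[i + 1]
            guess = model.expected(prev)
            # A "bluff": the pattern is established (seen >= threshold
            # times) but the music deliberately goes somewhere else.
            if guess and model.counts[prev][guess] >= threshold and nxt != guess:
                bluffs.append((i + 1, guess, nxt))
            model.observe(prev, nxt)
        return bluffs

    # The repeated C-E motif sets up an expectation that the final G breaks.
    print(find_bluffs(["C", "E", "C", "E", "C", "E", "C", "G"]))  # [(7, 'E', 'G')]

A bluff on top of a bluff would require the model to track expectations about its own expectations, which is exactly where it gets hard.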

Jacob also has a vision for creating a Virtual World where people can explore musical emotion in a physical space. At Melodrive, we expect that our product will be able to do exactly that.

Q & A

You collaborated with MIT and Ben Bloomberg to create your own unique performance rig. What was your motivation with a project like that?

The whole idea behind the circular performance rig was that I wanted to re-create the musical space that I have at home–I wanted to be able to tour with my room. At home, my room is my instrument, so this rig was my way of bringing my room on the road with me.

Jacob was inspired by Beardyman’s vocal looping rig, the Beardytron. Beardyman uses live looping effects to create different instrumental sounds using only his voice and a suite of audio effects. Check out a demonstration below.

The advantage that Jacob has is that he can play multiple instruments. In Jacob’s rig, he switches seamlessly from his vocal harmoniser to keyboards, percussive instruments, upright or electric bass, drums, guitar and melodica, singing every step of the way.

Specifically, what were the things you wanted in your live rig that you didn’t think were possible with existing technologies?

The harmoniser didn’t exist. The best harmonisers out there only allow for 4 simultaneous parts. The one we built can handle 12 parts.

Do you play chords that contain 12 simultaneous notes?

There are times when I do play 12 notes at once, if I’m going for that effect. What was more important was that when I’m transitioning between two 8-part chords, there is often some overlap in which notes are held down, so having 12 voices allows me to transition smoothly between those chords.
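
To see why twelve voices help even for eight-part writing, here is a toy voice-allocation sketch in Python (my own illustration, not the actual harmoniser’s code): during a legato transition, held notes from the outgoing chord overlap with the incoming one, briefly pushing the voice count past eight.

    class VoicePool:
        """Toy polyphonic voice allocator (not the real harmoniser)."""

        def __init__(self, max_voices=12):
            self.active = {}                      # note -> voice index
            self.free = list(range(max_voices))   # available voice slots

        def note_on(self, note):
            if not self.free:
                raise RuntimeError("pool exhausted: voice stealing needed")
            self.active[note] = self.free.pop()

        def note_off(self, note):
            self.free.append(self.active.pop(note))

    pool = VoicePool(max_voices=12)
    chord_a = [48, 52, 55, 59, 62, 65, 69, 72]   # first 8-part chord
    chord_b = [50, 53, 57, 60, 64, 67, 71, 74]   # second 8-part chord

    for n in chord_a:
        pool.note_on(n)
    # Legato transition: the first notes of chord_b sound while all of
    # chord_a is still held, so 12 voices are briefly active at once.
    for n in chord_b[:4]:
        pool.note_on(n)
    print(len(pool.active))   # 12
    for n in chord_a:
        pool.note_off(n)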

I wrote a review of your performance earlier, where I tried to get under the hood with your live-looping tech, based on my own observations and on descriptions of the system from both you and Ben Bloomberg. Would you be willing to talk about some of the mechanics of your live-looping system?

Yes, absolutely. Our goal with the live performance rig was to let the musician perform as much of the music as possible. Performances are all about interacting with people, and we wanted to keep that as the focus. Our system is a sort of MIDI-on-rails solution: it has specific timing constraints, but it can also accommodate improvised parts, where the same section will loop depending on whether a certain trigger is hit. What it amounts to is one giant Ableton [Live] set. Before each show, I’ll often go in and change certain arrangements, add parts and make it new. During the show I’m constantly counting measures and following the set.
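
To make the MIDI-on-rails idea concrete, here is a minimal sketch of how such a controller might behave (the section names and trigger logic are my own invention, not Jacob’s actual Ableton set): fixed sections play through once, while open sections loop until a trigger fires.

    import itertools

    # Fixed sections play once; "open" sections loop until triggered.
    SET_LIST = [
        {"name": "intro",     "bars": 8,  "open": False},
        {"name": "vamp/solo", "bars": 4,  "open": True},
        {"name": "outro",     "bars": 16, "open": False},
    ]

    def run_set(trigger_fired):
        """trigger_fired(section, pass_no) -> True to leave an open section."""
        for section in SET_LIST:
            for pass_no in itertools.count(1):
                print(f"playing {section['name']} ({section['bars']} bars), pass {pass_no}")
                if not section["open"] or trigger_fired(section, pass_no):
                    break

    # Example: leave the vamp after three passes, as if a footswitch were hit.
    run_set(lambda section, pass_no: pass_no >= 3)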

It does seem that it’s a lot of (time-sensitive) work to keep up with all of the ongoing loops and to control the flow. Do you find that to be a challenge?

For me, the counting and memory part is natural, since you’re trained to do that as a musician. The biggest challenge was to be able to express emotion within that framework, so that I can keep the audience engaged. For example, I had to learn how to not only switch to a new instrument, but also to emphasise that new sound with my performance, using gestures and body language to accent that moment.

What are your favourite musical tools for composition and recording?

Logic is my jam. I got it for my 11th birthday, and by now I’ve re-mapped all the keyboard shortcuts. I even have conversations with the Logic team because I find small bugs from time to time.

Have you ever experimented with coding? Algorithmic Composition? Are you interested in doing so?

I have in other contexts, for example, in [my collaboration with] Ben [Bloomberg] and Will Young, who did the visual elements of my performance rig. As far as coding personally, I’ve been working on a pet project that involves some coding. I’ve always been intrigued by words that evoke really vivid sensational experiences in your mind. These are mainly words that describe physical substances, like

  • bile
  • dreadlocks
  • egg yolk

I wanted to explore those sensations, so I created a program that randomly creates compound words using those very vivid words. My goal was to see how the sensation changed when several of these words were combined. I call them MindTextures. That program accounts for the majority of my coding experience.
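
The core of a MindTextures-style generator is easy to sketch. Here is my guess at the idea in a few lines of Python; the extra words and the pairing logic are mine, not Jacob’s.

    import random

    # Jacob's three examples plus a few guesses of my own.
    VIVID_WORDS = ["bile", "dreadlocks", "egg yolk", "gravel", "velvet", "tar"]

    def mind_texture(n_words=2):
        """Randomly combine vivid words into a compound 'MindTexture'."""
        parts = random.sample(VIVID_WORDS, n_words)
        return "-".join(p.replace(" ", "") for p in parts)

    random.seed(7)
    for _ in range(3):
        print(mind_texture())   # e.g. "tar-eggyolk", "velvet-gravel"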

In your performance with MIT for Sonic Bloom Mountain, you were actually sending music notation in real time. Can you describe the technological setup you had there?

What we realised very quickly when working together at MIT is that you need to start with something simple that you can actually do. Many ideas were thrown around, but when you implement them, they often turn out differently than you expected, so it’s best to keep it simple, get something working, and then test and modify.
For the system that we used in Sonic Bloom Mountain, there was almost a second of latency. For polyphonic parts, we had an auto-arranger that would send the different notes in a chord to different instruments. It had several modes: some would gather all the notes played over a certain time period and send them as a single chord, while others would respect the order in which the notes were played. I was also able to send text-based cues like “flutter” or “swell”. My hope there was to communicate emotional nuances and expressions.
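
From that description, the chord-gathering mode and the auto-arranger might look roughly like this sketch (my reconstruction from the interview, not the actual MIT code): notes arriving within a time window are grouped into one chord, whose notes are then distributed round-robin across instruments.

    def gather_chords(events, window=1.0):
        """Group (time, note) events into one chord per time window."""
        chords, current, window_start = [], [], None
        for t, note in sorted(events):
            if window_start is None or t - window_start >= window:
                if current:
                    chords.append(current)
                current, window_start = [], t
            current.append(note)
        if current:
            chords.append(current)
        return chords

    def auto_arrange(chord, instruments):
        """Send each note of a chord to a different instrument."""
        return [(instruments[i % len(instruments)], note)
                for i, note in enumerate(chord)]

    events = [(0.0, 60), (0.2, 64), (0.4, 67), (1.5, 62), (1.7, 65)]
    for chord in gather_chords(events, window=1.0):
        print(auto_arrange(chord, ["flute", "violin", "cello"]))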

Do you have any plans to learn harmonica? It seems like you use melodica for solo breaks in a similar way that Stevie Wonder uses harmonica.

The problem with the harmonica is that you’re always in the shadow of Stevie Wonder. I also never learned because you can do more with the melodica, since it’s fully chromatic. But, there is a moment of harmonica in one of my songs:

I played with an amazing blues musician at the Montreux Jazz Festival, and he insisted on gifting me one of his harmonicas, despite me telling him that I couldn’t play one. Since I didn’t play harmonica, I had to be creative with how to use it. When I made Fascinating Rhythm, I decided to use it, so there’s a tiny part of harmonica in the rhythmic texture of one of the parts. Often when people give me musical gifts, I tend to hide them in things.

In our field, there’s a lot of exploration about what it means for a machine to be creative. How do you feel about that as a musician and composer?

I think there’s a lot to explore there. Computers are really good at doing certain things, like searching vast spaces and finding paths. Humans are much better at emotional communication, performance and expression. One of the only reasons for humans to exist is to be emotional. I think we should let the machine do what it does best, and let humans do what they do best, and if you’re making a machine that enhances humanity, then you’re doing the right thing. 

How to use Pure Data in Unity

Ever wanted to do more with the music or SFX in your game? Maybe you want to go beyond triggering audio clips with basic effects, towards infinite variations of explosions or gunfire? Maybe your player characters are robots and you want to vocode the player’s microphone input? Perhaps you want complete playable instruments within your game, or unique melodies composed for each user-generated character à la Spore?

If so, then using Pure Data (Pd for short) may be just what you need. Sure, you can do a lot of these things using FMOD and Wwise, but Pd makes the process so simple and elegant, and best of all: it’s free. If this sounds like your cup of tea, then read on!

Continue reading

How to Choose the Right Music Genre for the Soundtrack of a Video Game

It might seem like a simple process, but picking the right music genre for a game soundtrack is a challenging task. The musical styles are almost infinite: free jazz, fusion, epic rock, late romantic, Gregorian, gypsy folk, to list just a few of the options available. Should you use a traditional classical orchestral style for your new RPG, or should you try an unexpected solution like trance music? As we know, music can make or break a game, and the genre plays a major part in the process. In this article, I’ll give you some guidelines, inspired by the great book A Composer’s Guide to Game Music by Winifred Phillips, on how to pick a music genre for your game that will (hopefully) resonate with your players. Before delving into this, let’s take a short detour on game genres, which, as we’ll see, are deeply intertwined with music genres.

Continue reading

How to Plan the Soundtrack of a Video Game Effectively: A Guide to Music Conceptualisation

The first step in creating a good score for a game is to conceptualise the music. Music conceptualisation can be compared to sketching the blueprint for a building. Before you get into the details of how to decorate the rooms, you need to decide how many floors there are, the size of each floor and the number of rooms. Similarly, music conceptualisation is necessary to set the stylistic, creative and functional goals of the music before the composer starts working on the actual notes. Consider conceptualisation as a high-level music planning activity.

Continue reading

5 Methods To Get Music for your Video Game: Pros and Cons

In this post I’m going to analyse the pros and cons of some of the strategies available to get music for video games, and help you decide which might be the best for you.

We all know that music is particularly important for video games. A soundtrack sets the overall tone of the game and it shapes the experience of a player without them really noticing it. Music can also provide clues about the situation the player is in and make a game stand out from the crowd. Think of the Skyrim main theme for example… you probably just need the initial three drum hits to recognise the game. That’s great branding!

Continue reading

Technology in Music Performance, a Review: Jacob Collier

This week, I had the opportunity to see a unique musical act in Mannheim, Germany: the one-man show put together by up-and-coming jazz phenom Jacob Collier. Jacob Collier, for those that don’t know, rose to fame through his multi-part YouTube videos, in which he not only composes original and creative arrangements of classic songs, but also plays every instrument in the mix. His style is characterised by extensive use of vocal layering and a cappella arrangements, versatile and extensive re-harmonisations, polyrhythms and metric modulations, all rooted in a jazz/funk style inspired by the hit composers of previous decades (Stevie Wonder, Herbie Hancock, Michael Jackson, the Beatles, etc.).

Continue reading

What Makes a Great Melody? 7 Lessons Learned from UNDERTALE

When we listen to a melody, we’re usually able to tell whether it sounds good or bad after just a few notes. The underlying cognitive processing employed to arrive at such an assessment – although extremely sophisticated – is carried out on an almost unconscious level. It all feels so natural that we don’t realise how many simultaneous elements are contributing to the overall experience. Indeed, a melody is a complex musical construct that involves many musical domains at once. There is pitch content involved, obviously. But a melody also comprises note durations, rhythmic and metrical elements, accents, structural relationships between the different subsets of the melody, articulations and dynamics.

Continue reading

What is Adaptive Music?

We may not be aware of it, but most of the music we’re used to listening to is linear. Linear music has a beginning, a development and an ending that sound exactly the same every time we listen to it. The soundtrack of a movie, the songs of Bob Dylan and a Mozart symphony are all examples of linear music. Linear music works great for concert music or as a musical background for fixed media. Every time we watch Star Wars, for instance, the sequence of events occurring on screen is always the same. No matter how much we would like it to be different (40-year-old spoiler alert), Obi-Wan Kenobi is going to be killed by Darth Vader! The fixed structure of a movie is great for the composer who has to write the soundtrack, because he or she can create musical cues that are specifically tailored to the on-screen images on a moment-by-moment basis. When the cannons of the Millennium Falcon hit the ships of the Empire, for example, the explosions can be underlined by the music and the overall excitement of the moment can be captured and enhanced by the soundtrack.
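
To make the contrast with adaptive music concrete, here is a toy sketch (my illustration; the state names and musical segments are invented): a linear cue always unfolds the same way, while an adaptive cue picks its next segment from the current game state.

    import random

    LINEAR_CUE = ["intro", "build", "climax", "ending"]

    ADAPTIVE_CUE = {
        "exploring": ["calm_pad", "sparse_melody"],
        "combat":    ["drums", "brass_stabs"],
        "victory":   ["fanfare"],
    }

    def play_linear():
        for segment in LINEAR_CUE:      # identical on every playthrough
            print("playing", segment)

    def play_adaptive(game_states):
        for state in game_states:       # the music follows the player
            print(state, "->", random.choice(ADAPTIVE_CUE[state]))

    play_linear()
    play_adaptive(["exploring", "combat", "combat", "victory"])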

Continue reading

© 2017 Melodrive blog