We stopped in sunny LA at Quincy Jones’ office to meet up with the incredible Jacob Collier and discuss musical bluffs, rhythmic cadences and mind mappings.

Before diving into our Q&A, Jacob and I had a wonderful talk about the future of technology in music. We found our visions to be surprisingly aligned. Jacob is a man consumed by the mapping of emotion to different musical components. He said that he has always experienced and explored harmonies in a very emotional way–feeling out different chords based almost purely on his personal perception. At Melodrive, one of our main tenets is to bridge the gap between computers and musical emotion.

Jacob had a lot of questions about how to approach musical problems with AI, and I happily obliged (it’s not everyone that wants to hear about this stuff!). One really insightful question he asked was about the problem of musical bluffs. Often when musicians are communicating, they will reference some established technique–a harmonic cadence, a melodic pattern, or even a rhythm–that creates an expectation in the other participant. Then, they will deliberately thwart the completion of that pattern, so as to break that expectation.

How would computers be able to recognize a bluff on top of a bluff on top of a bluff, like a musician would?

Unfortunately I had no easy answer! What I did say is that, at Melodrive, we are often concerned not just with the musical material, but with patterns within that material that establish a language in which the music can communicate. It is a huge challenge to create a machine that can not only establish patterns but also allude to them. Jacob hit the nail on the head with that one.
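To make the expectation idea concrete, here is a toy sketch: learn which note usually follows each note in an established pattern, then flag a "bluff" wherever the continuation breaks that expectation. This is my own illustration, not Melodrive's method, and it only catches the first level of bluff–a bluff on top of a bluff would need a far richer model.

```python
from collections import defaultdict

def learn_expectations(sequence):
    """Map each note to the set of continuations seen after it."""
    follows = defaultdict(set)
    for a, b in zip(sequence, sequence[1:]):
        follows[a].add(b)
    return follows

def find_bluffs(training, performance):
    """Return indices in `performance` where an expected continuation
    was thwarted."""
    follows = learn_expectations(training)
    bluffs = []
    for i, (a, b) in enumerate(zip(performance, performance[1:])):
        if a in follows and b not in follows[a]:
            bluffs.append(i + 1)  # position of the unexpected note
    return bluffs

pattern = ["C", "E", "G", "C", "E", "G"]   # established pattern
played  = ["C", "E", "G", "C", "E", "F#"]  # expectation broken at the end
print(find_bluffs(pattern, played))        # → [5]
```

The hard part Jacob was pointing at is exactly what this sketch lacks: knowing when a deviation is a deliberate allusion rather than noise.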

Jacob also has a vision for creating a Virtual World where people can explore musical emotion in a physical space. At Melodrive, we expect that our product will be able to do exactly that.

Q & A

You collaborated with MIT and Ben Bloomberg to create your own unique performance rig. What was your motivation with a project like that?

The whole idea behind the circular performance rig was that I wanted to re-create the musical space that I have at home–I wanted to be able to tour with my room. At home, my room is my instrument, so this rig was my way of bringing my room on the road with me.

Jacob was inspired by Beardyman’s vocal looping rig, the Beardytron. Beardyman uses live looping effects to create different instrumental sounds using only his voice and a suite of audio effects. Check out a demonstration below.

The advantage that Jacob has is that he can play multiple instruments. In Jacob’s rig, he switches seamlessly from his vocal harmoniser, to keyboards, to percussive instruments, upright or electric bass, drums, guitar, and melodica, singing every step of the way.

Specifically, what were the things you wanted in your live rig that you didn’t think were possible with existing technologies?

The harmoniser didn’t exist. The best harmonisers out there only allow for 4 simultaneous parts; the one we built supports 12.

Do you play chords that contain 12 simultaneous notes?

There are times when I do play 12 notes at once, if I’m going for that effect. What was more important was that if I’m transitioning between two 8-part chords, there is often some overlap in which notes are held down, so having 12 voices allows me to transition between those chords.
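The voice-count arithmetic here is easy to illustrate. Below is a small sketch (timings invented for illustration) that counts the peak number of simultaneous voices when notes of the second chord are pressed while some of the first are still held:

```python
def max_concurrent(notes):
    """notes: list of (on_time, off_time) pairs; return peak voice count."""
    events = [(on, 1) for on, _ in notes] + [(off, -1) for _, off in notes]
    active = peak = 0
    for _, delta in sorted(events):
        active += delta
        peak = max(peak, active)
    return peak

# Eight-note chord A: four notes release early, four carry over the join.
chord_a = [(0.0, 0.9)] * 4 + [(0.0, 1.1)] * 4
# Eight-note chord B pressed just before chord A fully releases.
chord_b = [(1.0, 2.0)] * 8
print(max_concurrent(chord_a + chord_b))  # → 12
```

With this much legato overlap, an 8-voice (let alone 4-voice) harmoniser would have to steal voices mid-transition, which is why the 12-voice headroom matters.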

I wrote a review of your performance earlier, where I tried to kind of get under the hood with your live-looping tech, based on observations and on descriptions of the system from both you and Ben Bloomberg. Would you be willing to talk about some of the mechanics of your live-looping system?

Yes absolutely. Our goal with the live performance rig was to let the musician perform as much of the music as possible. Performances are all about interacting with people, and we wanted to keep that as the focus. Our system is sort of a MIDI-on-rails solution: it has specific timing constraints, but it also allows for improvised parts, where the same section will loop depending on whether a certain trigger is or isn’t hit. What it amounts to is one giant Ableton [Live] set. Before each show, I’ll often go in and change certain arrangements, add parts and make it new. During the show I’m constantly counting measures and following the set.
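The "MIDI on rails" structure can be sketched roughly as scripted sections played in order, with certain sections looping until a cue arrives. This is my own simplified model, not the actual rig: in the real set the loop presumably repeats until a live trigger fires, while here a precomputed loop count stands in for those cues.

```python
from dataclasses import dataclass

@dataclass
class Section:
    name: str
    bars: int
    loop_until_trigger: bool = False  # improvised sections repeat until cued

def run_set(sections, loop_counts):
    """Walk through the set. `loop_counts` maps a looping section's name
    to how many passes happen before the (simulated) trigger is hit.
    Returns the played timeline as (section name, bars) pairs."""
    timeline = []
    for s in sections:
        plays = loop_counts.get(s.name, 1) if s.loop_until_trigger else 1
        for _ in range(plays):
            timeline.append((s.name, s.bars))
    return timeline

set_list = [
    Section("intro", 8),
    Section("solo", 4, loop_until_trigger=True),  # open-ended solo section
    Section("outro", 8),
]
print(run_set(set_list, {"solo": 3}))
```

The performer's job of "constantly counting measures" corresponds to knowing, at every moment, where in this timeline the set currently is.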

It does seem that it’s a lot of (time-sensitive) work to keep up with all of the ongoing loops and to control the flow. Do you find that to be a challenge?

For me, the counting and memory part is natural, since you’re trained to do that as a musician. The biggest challenge was to be able to express emotion within that framework, so that I can keep the audience engaged. For example, I had to learn how to not only switch to a new instrument, but also to emphasise that new sound with my performance, using gestures and body language to accent that moment.

What are your favourite musical tools for composition and recording?

Logic is my jam. I got it for my 11th birthday, and by now I’ve re-mapped all the keyboard shortcuts. I even have conversations with the Logic team because I find small bugs from time to time.

Have you ever experimented with coding? Algorithmic Composition? Are you interested in doing so?

I have with regard to other things, for example, in [my collaboration with] Ben [Bloomberg] and Will Young, who did the visual elements of my performance rig. As far as personally coding, I’ve been working on a pet project that involves some coding. I’ve always been intrigued by words that evoke really vivid sensational experiences in your mind. These are mainly words that describe physical substances, like

  • bile
  • dreadlocks
  • egg yolk

I wanted to explore those sensations, so I created a program that randomly creates compound words using those very vivid words. My goal was to see how the sensation changed when combining several of these words. I call them MindTextures. That program is the majority of my coding experience.
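A program like the one Jacob describes could be as small as this. Only "bile", "dreadlocks" and "egg yolk" come from his list; the other words, and the hyphenated pairing scheme, are my guesses at how such a generator might work.

```python
import random

# Vivid, texture-heavy words; the last three are assumed additions.
VIVID_WORDS = ["bile", "dreadlocks", "egg yolk", "treacle", "gravel", "velvet"]

def mind_texture(rng=random):
    """Pair two distinct vivid words at random into one compound."""
    first, second = rng.sample(VIVID_WORDS, 2)
    return f"{first}-{second}"

rng = random.Random(0)  # seeded for repeatable output
for _ in range(3):
    print(mind_texture(rng))
```

Each run pairs substances whose combined sensation is unpredictable, which seems to be the point of the exercise.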

In your performance with MIT for Sonic Bloom Mountain, you were actually sending music notation in realtime. Can you describe the technological setup you had there?

What we realised very quickly when working together at MIT is that you need to start with something simple that you can actually build. Many ideas were thrown around, but when you actually implement them, they often turn out differently than you expected, so it’s best to keep it simple, get something working, and then test and modify.
For the system that we used in Sonic Bloom Mountain, there was almost a second of latency. For polyphonic parts, we had an auto-arranger that would send different notes in a chord to different instruments. We also had different settings for polyphonic parts, some of which would gather all the notes played over a certain time period and send them as a single chord, and others that would respect the order in which notes were played. I was also able to send text-based cues like “flutter” or “swell”. My hope there was to communicate emotional nuances and expressions.
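The two polyphonic modes described can be sketched as follows. This is my own illustration of the behaviour, not the MIT code: note events are (time in seconds, MIDI pitch) pairs, and the gathering window length is a guess.

```python
def gather_chords(events, window=0.5):
    """Chord mode: merge every note played within `window` seconds of the
    first note of a group into a single chord."""
    chords = []
    group, group_start = [], None
    for t, pitch in sorted(events):
        if group and t - group_start > window:
            chords.append(sorted(group))
            group, group_start = [], None
        if not group:
            group_start = t
        group.append(pitch)
    if group:
        chords.append(sorted(group))
    return chords

def ordered_dispatch(events, n_instruments=4):
    """Ordered mode: round-robin each note, in played order, to a
    different instrument (returned as (instrument, pitch) pairs)."""
    return [(i % n_instruments, pitch)
            for i, (_, pitch) in enumerate(sorted(events))]

events = [(0.0, 60), (0.1, 64), (0.2, 67), (1.0, 62), (1.1, 65)]
print(gather_chords(events))     # two chords, window boundary at ~0.5 s
print(ordered_dispatch(events))  # notes fanned out in played order
```

Either mode would then feed an auto-arranger that routes chord tones to different instruments; text cues like “flutter” or “swell” would travel alongside as plain messages.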

Do you have any plans to learn harmonica? It seems like you use melodica for solo breaks in a similar way that Stevie Wonder uses harmonica.

The problem with the harmonica is that you’re always in the shadow of Stevie Wonder. I also never learned because you can do more with the melodica, since it’s fully chromatic. But, there is a moment of harmonica in one of my songs:

I played with an amazing blues musician at the Montreux Jazz Festival, and he insisted on gifting me one of his harmonicas, despite me telling him that I couldn’t play one. Since I didn’t play harmonica, I had to be creative with how to use it. When I made Fascinating Rhythm, I decided to use it, so there’s a tiny part of harmonica in the rhythmic texture of one of the parts. Often when people give me musical gifts, I tend to hide them in things.

In our field, there’s a lot of exploration about what it means for a machine to be creative. How do you feel about that as a musician and composer?

I think there’s a lot to explore there. Computers are really good at doing certain things, like searching vast spaces and finding paths. Humans are much better at emotional communication, performance and expression. One of the only reasons for humans to exist is to be emotional. I think we should let the machine do what it does best, and let humans do what they do best, and if you’re making a machine that enhances humanity, then you’re doing the right thing.