EuroPython 2018

Change music in two epochs

Speaker(s): Marcel Raas

This talk is about applying deep learning to music. We will look at the raw music data and discover the following:

  • How to detect instruments from a piece of music
  • How to detect what is being played by which instrument
  • How to isolate instruments in multi-instrument (polyphonic) music

Instead of applying deep learning to existing music, we will generate our own music using simple musical rules. The benefit of this is that we are in control of the complexity and we know exactly what is being played. We start out simple and then add more instruments, different timbres, and so on. As we go up in complexity, we will see how to adapt our models to deal with it. This gives interesting insights into which structures in deep nets work well.
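
A rough illustration of what such rule-based generation could look like: the sketch below improvises a melody by drawing notes from a pentatonic scale with random rhythm. The scale, note representation, and function names are assumptions for illustration, not the actual rules from the talk or the repository.

```python
import numpy as np

A4 = 440.0                        # reference pitch in Hz
PENTATONIC = [0, 3, 5, 7, 10]     # minor pentatonic intervals in semitones

def semitone_to_freq(semitone, base=A4):
    """Convert a semitone offset from the base pitch to a frequency in Hz."""
    return base * 2 ** (semitone / 12)

def improvise(n_notes=16, seed=None):
    """Return a list of (frequency, duration) pairs following simple rules."""
    rng = np.random.default_rng(seed)
    notes = []
    for _ in range(n_notes):
        octave = rng.integers(-1, 2)             # stay close to the base octave
        step = rng.choice(PENTATONIC)            # stay inside the scale
        duration = rng.choice([0.25, 0.5, 1.0])  # simple note lengths, in seconds
        notes.append((semitone_to_freq(step + 12 * octave), duration))
    return notes

melody = improvise(seed=42)
```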

I will show:

  • How to build a simple synthesizer using NumPy (a minimal sketch follows this list)
  • How to create an unlimited data set of improvisations that sound musical
  • How to use this data set to detect instruments using deep learning (a rough classifier sketch appears after this list)
  • How to filter out one instrument when multiple synthesizers are playing at once
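
As promised in the first bullet, here is a minimal sketch of a NumPy-based synthesizer; the harmonic mix and the decay envelope are illustrative assumptions, and the real synthesizer in the repository linked below may work differently.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def synthesize(freq, duration, harmonics=(1.0, 0.5, 0.25)):
    """Render one note as a sum of harmonics with a simple decay envelope."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    wave = sum(a * np.sin(2 * np.pi * freq * (k + 1) * t)
               for k, a in enumerate(harmonics))
    envelope = np.exp(-3.0 * t / duration)  # fade out to avoid clicks
    return (wave * envelope).astype(np.float32)

def render(melody):
    """Concatenate (frequency, duration) notes into one audio signal."""
    return np.concatenate([synthesize(f, d) for f, d in melody])
```

Feeding the improvised melody from the earlier sketch into render gives a signal that can be written to a WAV file or used directly as training data.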

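To give a flavour of the instrument-detection step from the list above, here is a rough sketch of a small convolutional classifier operating on mel-spectrogram patches. The framework (Keras), input shape, and architecture are assumptions made for illustration; they are not taken from the talk or the repository.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_MELS, N_FRAMES, N_INSTRUMENTS = 64, 128, 4  # assumed spectrogram size and number of classes

# Small CNN mapping a mel-spectrogram patch to per-instrument probabilities.
model = keras.Sequential([
    layers.Input(shape=(N_MELS, N_FRAMES, 1)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.GlobalAveragePooling2D(),
    layers.Dense(N_INSTRUMENTS, activation="sigmoid"),  # multi-label: several instruments can play at once
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training would use spectrograms of the generated music as inputs and the
# known instrument labels as targets, e.g. model.fit(X_train, y_train, epochs=10).
```
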
For more information, see the GitHub repository at https://github.com/marcelraas/music-generator

On Friday 27 July at 12:10. See schedule.
