This talk is about applying deep learning to music, working directly with raw audio data.
Instead of applying deep learning to existing recordings, we generate our own music using a few simple musical rules. The benefit is that we control the complexity and know exactly what is being played. We start out simple and then add more instruments, different timbres, and so on. As the complexity grows, we see how to adapt our models to handle it, which gives interesting insights into which structures in deep nets work well. A minimal sketch of such a rule-based generator is shown below.
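To make the idea concrete, here is a minimal sketch of rule-based generation of raw audio. It is not the repository's actual generator; the scale, note duration, timbres, and synthesis parameters are assumptions chosen for illustration.

```python
# Sketch: generate raw audio from a simple musical rule.
# All parameters here (scale, note length, timbres) are illustrative
# assumptions, not the exact setup used in the talk or repository.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100  # samples per second
NOTE_SECONDS = 0.5   # fixed note duration (an assumption)

# Simple musical rule: pick notes from a C-major pentatonic scale.
PENTATONIC_HZ = [261.63, 293.66, 329.63, 392.00, 440.00]  # C D E G A


def synth_note(freq_hz: float, timbre: str = "sine") -> np.ndarray:
    """Render one note as a raw waveform; `timbre` switches the oscillator."""
    t = np.linspace(0.0, NOTE_SECONDS, int(SAMPLE_RATE * NOTE_SECONDS), endpoint=False)
    if timbre == "square":  # a harmonically richer second timbre
        wave = np.sign(np.sin(2 * np.pi * freq_hz * t))
    else:
        wave = np.sin(2 * np.pi * freq_hz * t)
    envelope = np.linspace(1.0, 0.0, t.size)  # linear decay to avoid clicks
    return wave * envelope


def generate_track(n_notes: int = 16, timbre: str = "sine", seed: int = 0) -> np.ndarray:
    """Concatenate randomly chosen scale notes into one raw-audio track."""
    rng = np.random.default_rng(seed)
    notes = rng.choice(PENTATONIC_HZ, size=n_notes)
    return np.concatenate([synth_note(f, timbre) for f in notes])


if __name__ == "__main__":
    audio = generate_track(timbre="sine")
    wavfile.write("generated.wav", SAMPLE_RATE, (audio * 32767).astype(np.int16))
```

Because the rule and the random seed fully determine the output, we always know exactly what is being played, and complexity can be dialed up step by step (e.g., switching `timbre`, or mixing several tracks for multiple instruments).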
I will show these steps in practice during the talk.
For more info, see the GitHub repository at https://github.com/marcelraas/music-generator