Part 2: SynthWaveML

Click here to see the full project in action 👀

This is the second edition of a project inspired by Professor Scott Easley of USC (portfolio link). See Part 1, which explores recurrent neural network models using MusicRNN.

Here we use machine learning to generate new music that mimics Devo's synthwave style. The project is built primarily on tools and frameworks from the Magenta Project, a creative research effort started by Google and contributed to by non-affiliated hobbyists, creatives, artists, and programmers.

This demo manipulates the input MIDI note sequences in several ways: separating and recombining instruments, chunking each song into sections and interpolating between those sections with MusicVAE, and recombining interpolations from multiple songs. The interpolated output is then fed to MusicRNN as input for generating new music.
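The chunking step can be sketched in plain JavaScript. This is a minimal illustration, not the demo's actual code: the function name `chunkSequence` and the note shape (`pitch`, `startTime`, `endTime` in seconds) are assumptions modeled loosely on Magenta-style note sequences, and Magenta.js ships its own sequence utilities for this kind of work.

```javascript
// Split a flat list of notes into fixed-length sections (e.g. 4-second
// chunks), shifting each section's notes so the section starts at time 0.
// Illustrative sketch only; field names are assumptions.
function chunkSequence(notes, sectionLength) {
  const sections = [];
  for (const note of notes) {
    const idx = Math.floor(note.startTime / sectionLength);
    while (sections.length <= idx) sections.push([]);
    sections[idx].push({
      pitch: note.pitch,
      // Re-base times so each section is independent and can be
      // interpolated against sections from other songs.
      startTime: note.startTime - idx * sectionLength,
      endTime: Math.min(note.endTime, (idx + 1) * sectionLength) - idx * sectionLength,
    });
  }
  return sections;
}

// Example: two notes landing in different 4-second sections.
const sections = chunkSequence(
  [
    { pitch: 60, startTime: 0.0, endTime: 1.0 },
    { pitch: 64, startTime: 5.0, endTime: 6.0 },
  ],
  4.0
);
console.log(sections.length); // 2 sections
```

Each resulting section can then be handed off to MusicVAE for interpolation, with the interpolated sequences later passed to MusicRNN as primers.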

(Headphones recommended)