Part 1: Machine Learning SynthWave Music Generator

Under the direction of Professor Scott Easley of USC (portfolio link), I started dabbling in Magenta, Google’s art-and-music-focused machine learning project built on TensorFlow. Here is the first demo of MusicRNN, a recurrent neural network model for generating musical notes.
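If you are curious what sits behind the buttons, here is a minimal sketch of loading a MusicRNN checkpoint with Magenta.js in the browser. The checkpoint URL is one of Magenta’s hosted checkpoints and the variable names are my own placeholders; the demo itself may use a different checkpoint or setup.

```typescript
import * as mm from '@magenta/music';

// A hosted MusicRNN checkpoint (assumption: the demo may use a different,
// polyphony-capable one).
const CHECKPOINT =
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn';

const model = new mm.MusicRNN(CHECKPOINT);

async function loadModel(): Promise<void> {
  // Downloads the TensorFlow.js weights and runs entirely in the browser.
  await model.initialize();
}
```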

  1. Click the top “Play” button to start listening to the MIDI version of Nightcall by Kavinsky.
    • Click “Stop” to stop the music. (amazing UI design, I know!)
  2. Click the “Generate New Output” button to generate a new polyphonic MIDI sequence based on the Nightcall MIDI input (a rough sketch of what this does under the hood follows the list).
    • Generating output can take a few seconds.
    • You may need to generate new outputs multiple times before getting something interesting to listen to.
  3. After a new MIDI track is generated (it appears in the lower box), click the lower “Play” button (next to “Generate New Output”) to hear the generated music.
  4. Go nuts with it!
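Roughly speaking, “Generate New Output” takes the seed MIDI, quantizes it into a NoteSequence, asks MusicRNN to continue it, and hands the result to a player. Here is a hedged sketch of that flow using the public Magenta.js API; the seed file name, step count, and temperature are placeholders rather than the demo’s actual settings.

```typescript
import * as mm from '@magenta/music';

// Model setup as in the earlier sketch.
const model = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn'
);
const player = new mm.Player();

async function generateAndPlay(): Promise<void> {
  await model.initialize();

  // Fetch the seed MIDI (hypothetical path) and quantize it to 4 steps per quarter note.
  const seed = await mm.urlToNoteSequence('nightcall.mid');
  const quantized = mm.sequences.quantizeNoteSequence(seed, 4);

  // Ask the RNN to continue the seed: 64 steps at temperature 1.0 (placeholder values).
  const continuation = await model.continueSequence(quantized, 64, 1.0);

  // Play the generated sequence — this is what the lower “Play” button does.
  player.start(continuation);
}
```

Higher temperatures make the continuation more surprising (and often messier), which is why several generations may be needed before something interesting comes out.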

Headphones recommended

Check out 👉
Part 2: Exploring Variational Auto-Encoders