Machine-learning music generation with Magenta.js MusicRNN
Directed by Professor Scott Easley of USC (portfolio link), I started dabbling in Magenta, Google's artistically focused machine learning project built on TensorFlow. Here is my first demo of MusicRNN, a recurrent neural network model for generating musical note sequences.
- Click the top “Play” button to start listening to the MIDI version of Nightcall by Kavinsky.
- Click “Stop” to stop the music. (Amazing UI design, I know!)
- Click the “Generate New Output” button to generate a new polyphonic MIDI sequence based on the Nightcall MIDI input.
- Generating output can take a few seconds.
- You may need to generate new outputs multiple times before getting something interesting to listen to.
- After a new MIDI track is generated (it will appear in the lower box), click the lower “Play” button (next to “Generate New Output”) to hear the generated music.
- Go nuts with it!
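Under the hood, the “Generate New Output” step roughly follows the standard Magenta.js MusicRNN flow: quantize an input `NoteSequence`, ask the model to continue it, then play the result. Below is a minimal sketch of that flow, assuming a browser page with `@magenta/music` loaded; the checkpoint URL, step count, and temperature shown here are illustrative choices, not necessarily the ones this demo uses.

```javascript
import * as mm from '@magenta/music';

// Load a hosted MusicRNN checkpoint (this URL is one of Magenta's
// published checkpoints; the demo may use a different one).
const model = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn'
);

const player = new mm.Player();

async function generateNewOutput(inputSequence) {
  await model.initialize();

  // MusicRNN expects a quantized sequence; 4 steps per quarter note
  // is a common choice.
  const quantized = mm.sequences.quantizeNoteSequence(inputSequence, 4);

  // Continue the input for a fixed number of steps. Temperature
  // controls randomness: higher values give wilder continuations,
  // which is why some generations are more interesting than others.
  const steps = 64;
  const temperature = 1.1;
  const continuation = await model.continueSequence(quantized, steps, temperature);

  return continuation;
}

// Wired to the lower “Play” button: play the generated sequence.
async function playGenerated(inputSequence) {
  const generated = await generateNewOutput(inputSequence);
  player.start(generated);
}
```

Because `continueSequence` runs the network in the browser, generation takes a few seconds, and the sampling temperature is why repeated clicks of “Generate New Output” yield different results.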