This repository uses state-of-the-art deep learning architectures and techniques to maximise output quality.
We use a GPT-like Music Transformer that takes in MIDI (represented as language tokens) and autoregressively generates an entire piece. With just 100 million parameters, a few hundred tokens and a desktop GPU, I was able to make emotional, expressive and coherent mini piano pieces. Here is one of them: https://github.com/user-attachments/files/22888924/generation.mp3
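To give a feel for the "autoregressive" part: the model repeatedly predicts the next token from everything generated so far. Below is a minimal sketch of that sampling loop in plain Python. The `model` callable and the token names are illustrative stand-ins, not the repo's actual Transformer or vocabulary.

```python
import math
import random

def generate(model, prompt, max_new_tokens, temperature=1.0):
    """Autoregressive sampling: feed the sequence back into the model and
    append one sampled token at a time.

    `model` is any callable mapping a token list to {token: logit} -- a
    stand-in for the real Transformer, used here only for illustration."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = model(tokens)
        # Temperature-scaled softmax over the (toy) vocabulary
        scaled = {t: l / temperature for t, l in logits.items()}
        m = max(scaled.values())
        exp = {t: math.exp(l - m) for t, l in scaled.items()}
        z = sum(exp.values())
        choices, probs = zip(*[(t, e / z) for t, e in exp.items()])
        tokens.append(random.choices(choices, weights=probs)[0])
    return tokens
```

The same loop shape applies whether the model is a lookup table or a 100M-parameter Transformer; only the `model` call changes.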
First, make sure the five Python files in this repository are all in the same folder.
Next, download a dataset of solo-piano MIDI files, then run the 'datacreater.py' script (making sure the hyperparameters at the top are set to what you want).
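For intuition, data creation typically means flattening the tokenised MIDI into a long stream of integer ids, holding some out for validation, and writing both splits to disk. Here is a minimal sketch of that kind of preprocessing; the uint16 file format and the split function are assumptions for illustration, not necessarily what 'datacreater.py' does.

```python
import struct

def split_train_val(token_ids, val_fraction=0.1):
    """Hold out the last val_fraction of the token stream for validation
    (hypothetical split strategy, shown for illustration)."""
    split = int(len(token_ids) * (1 - val_fraction))
    return token_ids[:split], token_ids[split:]

def write_token_file(path, token_ids):
    """Write token ids as a flat binary of little-endian uint16 values
    (an assumed on-disk format, common for small-vocabulary token data)."""
    with open(path, "wb") as f:
        f.write(struct.pack(f"<{len(token_ids)}H", *token_ids))
```

Writing the two splits with `write_token_file` would yield the two binary files the next step consumes.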
This will produce two binary files. Run the 'train.py' script on them to train your model; it will output the trained model's parameters.
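During training, fixed-length windows are usually sliced at random from those binary token streams, with the targets being the inputs shifted by one position. A small sketch of that batching idea, assuming the uint16 file layout described above (the function names are hypothetical, not from 'train.py'):

```python
import random
import struct

def load_tokens(path):
    """Read a flat binary of little-endian uint16 token ids (assumed format)."""
    data = open(path, "rb").read()
    return list(struct.unpack(f"<{len(data) // 2}H", data))

def get_batch(tokens, block_size, batch_size, rng=random):
    """Sample random (input, target) windows from the token stream.
    Targets are the inputs shifted one position to the right, which is
    what next-token prediction trains on."""
    xs, ys = [], []
    for _ in range(batch_size):
        i = rng.randrange(len(tokens) - block_size)
        xs.append(tokens[i:i + block_size])
        ys.append(tokens[i + 1:i + 1 + block_size])
    return xs, ys
```

In a real training loop these lists would be stacked into tensors and fed to the Transformer each step.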
Then run the 'sample.py' script, copy the generated text from the console, and set the sample_text variable in the 'tomidi.py' script to it. Once you have done all this, you will have your own completely AI-generated mini piano piece!
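The final step turns the sampled token text back into structured note events before writing a MIDI file. A minimal sketch of that parsing stage is below; the token names (e.g. `NOTE_ON_60`) are purely illustrative, since the real vocabulary is defined by the repo's scripts.

```python
def parse_tokens(sample_text):
    """Turn a space-separated token string back into (event, value) pairs.
    Assumes hypothetical tokens of the form NAME_VALUE, e.g. NOTE_ON_60;
    the actual vocabulary lives in 'datacreater.py' / 'tomidi.py'."""
    events = []
    for tok in sample_text.split():
        name, _, value = tok.rpartition("_")  # split on the last underscore
        events.append((name, int(value)))
    return events
```

A MIDI library (such as mido) would then convert these event pairs into actual MIDI messages and save the file.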