Artificial Intelligence Generated Music

The intersection of AI and music creation is unlocking exciting new possibilities. Recent systems built on large language models (LLMs) and related sequence models, such as MusicLM, ChatMusician, and M²UGen, enable text-to-music generation with remarkable fidelity and structural complexity.
MusicLM by Google Research
This model generates high-fidelity music from text descriptions such as "a calming violin melody." It casts generation as a hierarchical sequence-to-sequence modeling task and can also be conditioned on a hummed or whistled melody, transforming it into a fully developed composition. MusicLM maintains audio quality and musical consistency over several minutes of generated music.
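MusicLM itself has not been released publicly, so the snippet below illustrates the same text-to-music workflow with an openly available substitute, Meta's MusicGen, via the Hugging Face transformers library. The model id, prompt, and token budget are choices made for this example, not details from MusicLM.

```python
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# Load an open text-to-music model (MusicGen) as a stand-in for MusicLM.
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Text prompt comparable to the MusicLM example above.
inputs = processor(text=["a calming violin melody"], padding=True, return_tensors="pt")

# ~256 new tokens is roughly 5 seconds of audio at MusicGen's 50 Hz frame rate.
audio = model.generate(**inputs, max_new_tokens=256)

# Save the generated clip as a WAV file.
rate = model.config.audio_encoder.sampling_rate  # 32 kHz for the released checkpoints
scipy.io.wavfile.write("calming_violin.wav", rate=rate, data=audio[0, 0].numpy())
```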
ChatMusician by the Multimodal Art Projection (M-A-P) community
ChatMusician is an open-source LLM designed to generate full-length, structured music. It represents music in ABC notation, a compact text format that encodes melody, rhythm, and key precisely, which lets the model treat composition as ordinary text generation. In its authors' evaluations, it outperforms GPT-3.5 and GPT-4 at producing coherent music across various genres.
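Because ChatMusician reads and writes ABC notation as plain text, it can be prompted like any other causal language model. The sketch below assumes the checkpoint is available on the Hugging Face Hub; the m-a-p/ChatMusician id, prompt, and sampling settings are illustrative assumptions, not details from this article.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub id; ChatMusician is LLaMA-2 based, so the standard causal-LM API applies.
model_id = "m-a-p/ChatMusician"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask for a complete tune in ABC notation, including the standard header fields.
prompt = (
    "Compose a short folk melody in D major and write it out in ABC notation, "
    "including the X, T, M, L, and K header fields."
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# A reply in ABC notation would look roughly like:
# X:1
# T:Folk Melody
# M:4/4
# L:1/8
# K:D
# |: D2 FA d2 fd | e2 ge c2 ec | ... :|
```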
M²UGen by Tencent ARC Lab
M²UGen is a multi-modal framework for music understanding and generation built around LLaMA 2 and other pre-trained models. It bridges inputs such as images and video to music generation by driving MusicGen and AudioLDM 2 as decoders, with the aim of turning combined media inputs into musical compositions.
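The bridging idea described above amounts to projecting features from frozen modality encoders into the LLM's embedding space through small trainable adapters, whose output then conditions the music decoders. The sketch below illustrates only that adapter step; the class name, dimensions, and feature shapes are hypothetical, not M²UGen's actual implementation.

```python
import torch
import torch.nn as nn

class ModalityAdapter(nn.Module):
    """Projects frozen encoder features into an LLM's embedding space (hypothetical sketch)."""

    def __init__(self, feat_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, encoder_features: torch.Tensor) -> torch.Tensor:
        # (batch, seq, feat_dim) -> (batch, seq, llm_dim) "soft prompt" tokens for the LLM
        return self.proj(encoder_features)

# Example: project ViT-style image patch features into a LLaMA-2-sized embedding space.
image_features = torch.randn(1, 197, 768)   # illustrative patch tokens from an image encoder
soft_prompts = ModalityAdapter()(image_features)
print(soft_prompts.shape)                    # torch.Size([1, 197, 4096])
```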