TikTok is currently testing an ‘AI Song’ feature powered by the BLOOM language model, a strategic move in the social media giant’s push into AI-generated content. The feature generates lyrics rather than music, drawing on BLOOM, a 176-billion-parameter language model trained on a dataset spanning 46 natural languages and 13 programming languages. BLOOM was developed by BigScience, an open science project bringing together hundreds of researchers worldwide, which underscores the scale and collaborative nature of the effort.
BLOOM was trained on the ROOTS corpus, a 1.6 TB composite multilingual dataset. The corpus reflects BigScience’s goal of building an “open-access, massively multilingual LLM,” positioning BLOOM against OpenAI’s GPT-3 while emphasizing a broader and better-documented multilingual training set.
TikTok’s moves in the music domain extend beyond AI-generated lyrics. Ripple, a free-to-use music production app, introduces features such as a ‘Melody to Song’ generator and a virtual recording studio. The AI model behind ‘Melody to Song’ lets users hum or input a melody directly into the app and expands it into a fully fledged song in various genres. ByteDance, TikTok’s parent company, says the model was trained on its own music, framing this as a commitment to respecting the rights of artists and rightsholders.
ByteDance’s wider ambitions in AI-powered music are evident in its hiring of machine learning and AI music creation specialists. Its acquisition of Jukedeck in 2019 and the subsequent launch of apps such as Mawf, Sponge Band, and Ripple have further solidified its position in AI-driven music creation. Together, these standalone applications form an ecosystem in which AI plays a pivotal role in reshaping how music is created and consumed on TikTok and beyond.