Artificial intelligence has slowly crept into the music industry, creating viral songs, bringing back our favorite singers’ voices from the dead, and even qualifying for a Grammy (sort of). Now Meta has released new AI tools that make generating music with AI even easier.
On Tuesday, Meta revealed AudioCraft, a set of generative AI models that it says can create “high-quality and realistic” music from text.
AudioCraft consists of three of Meta’s generative AI models: MusicGen, AudioGen, and EnCodec. Both MusicGen and AudioGen generate sound from text: the former produces music, while the latter produces specific audio and sound effects.
You can visit MusicGen on Hugging Face and play with the demo. In the prompt, you can describe any type of music you’d like to hear, from any era. Meta offers the example, “An 80s driving pop song with heavy drums and synth pads in the background”.
EnCodec is a neural network-based audio codec that compresses audio and reconstructs the input signal. As part of the announcement, Meta released an improved version of EnCodec that allows for higher-quality music generation with fewer artifacts, according to the release.
Meta also released the pre-trained AudioGen models, which let users generate environmental sounds and sound effects, such as a dog barking or a floor creaking.
Lastly, Meta shared the weights and code for all three open-source models so researchers and practitioners can use them to train their own models.
Meta says in the release that AudioCraft has the potential to become a new type of standard instrument, much as synthesizers once did.
“With even more controls, we think MusicGen can turn into a new type of instrument — just like synthesizers when they first appeared,” said Meta.
This isn’t the first generative AI model of its kind. In January, Google released MusicLM, its own model that can transform text into music. A recent research paper revealed that Google is also using AI to reconstruct music from human brain activity.