Using prompts like “soulful music for a dinner party” or “movie scene in a desert with percussion,” users can generate music at the click of a button. According to the company’s announcement, it sees the technology as a “new type of instrument — just like synthesizers when they first appeared.”
MusicGen — the model from the AudioCraft suite that produces music — was trained on 20,000 hours of Meta-owned and specifically licensed music. The announcement is unclear about whether EnCodec was trained on any copyrighted material or whether it follows the same guidelines as MusicGen. Meta did not immediately return a request for comment. Training data is one of the most contentious areas of the nascent AI industry.
MusicGen, AudioGen, and EnCodec will all be released as open-source models. This gives researchers and practitioners the access they need to train their own models on their own datasets, advancing the AudioCraft tools beyond Meta's initial launch and addressing the company's concerns about bias, including the models' proclivity for Western-style music, which makes up the largest portion of the training set.
“Music is arguably the most challenging type of audio to generate as it’s composed of local and long-range patterns, from a suite of notes to a global musical structure with multiple instruments,” said Meta in a blog post, noting that its family of models is “capable of producing high quality audio” with consistency and ease of use.
Source: News Formal (newsformal.com)