
Google's newest AI can create all kinds of music from text prompts

WHY THIS MATTERS IN BRIEF

In short, if you can talk or text, you can now use AI to generate everything from content to products as creativity gets democratised.

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, connect, watch a keynote, read our codexes, or browse my blog.

As we continue to see Generative Artificial Intelligence (AI) produce everything from books and blogs to code, drugs, imagery, and videos, Google researchers have now unveiled an AI that can generate minutes-long musical pieces from text prompts, and that can even transform a whistled or hummed melody into other instruments, much as systems like DALL-E generate images from written prompts. The model is called MusicLM, and it's based on Google's earlier research, called Magenta, which I talked about many years ago. While you can't play around with it yourself, the company has uploaded a bunch of samples that it produced using the model.

 


 

The examples are impressive. There are 30-second snippets of what sound like actual songs created from paragraph-long descriptions that prescribe a genre, vibe, and even specific instruments, as well as five-minute-long pieces generated from one or two words like “melodic techno.” Perhaps my favorite is a demo of “story mode,” where the model is basically given a script to morph between prompts. For example, this prompt:

 

Electronic song played in a videogame (0:00-0:15)

Meditation song played next to a river (0:15-0:30)

Fire (0:30-0:45)

Fireworks (0:45-0:60)

 

Resulted in the audio you can listen to here.
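
MusicLM itself isn't public, so there's nothing to call here, but the structure behind story mode is easy to picture as data: a list of text prompts, each attached to a time window. Here's a purely hypothetical Python sketch of that representation; none of these names come from Google's paper.

# Hypothetical sketch only: MusicLM isn't publicly available, so this just
# illustrates how a story-mode prompt could be represented as timed segments.
from dataclasses import dataclass

@dataclass
class PromptSegment:
    text: str      # description of the music wanted for this window
    start_s: int   # window start, in seconds
    end_s: int     # window end, in seconds

story_prompt = [
    PromptSegment("electronic song played in a videogame", 0, 15),
    PromptSegment("meditation song played next to a river", 15, 30),
    PromptSegment("fire", 30, 45),
    PromptSegment("fireworks", 45, 60),
]

for seg in story_prompt:
    print(f"{seg.start_s:3d}s-{seg.end_s:3d}s  {seg.text}")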

 


 

It may not be for everyone, but I could totally see this being composed by a human. I also listened to it on loop dozens of times while writing this article. Also featured on the demo site are examples of what the model produces when asked to generate 10-second clips of instruments like the cello or maracas, eight-second clips of a certain genre, music that would fit a prison escape, and even what a beginner piano player would sound like versus an advanced one. It also includes interpretations of phrases like “futuristic club” and “accordion death metal.”

MusicLM can even simulate human vocals, and while it seems to get the tone and overall sound of voices right, there’s a quality to them that’s definitely off. The best way I can describe it is that they sound grainy or staticky. That quality isn’t as clear in the example above, but I think this one illustrates it pretty well.

That, by the way, is the result of asking it to make music that would play at a gym. You may also have noticed that the lyrics are nonsense, but in a way that you may not necessarily catch if you’re not paying attention — kind of like if you were listening to someone singing in Simlish or that one song that’s meant to sound like English but isn’t.

 


 

Google released a research paper explaining the model in detail. AI-generated music has a long history dating back decades; there are systems that have been credited with composing pop songs, copying Bach better than a human could in the 90s, and accompanying live performances. One recent version uses the AI image generation engine Stable Diffusion to turn text prompts into spectrograms that are then turned into music. The paper says that MusicLM can outperform other systems in terms of its “quality and adherence to the caption,” as well as the fact that it can take in audio and copy the melody.
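
That spectrogram step is the easiest part of such a pipeline to sketch: libraries like librosa can invert a magnitude spectrogram back into a waveform with the Griffin-Lim algorithm. Below is a minimal sketch, assuming an existing audio file stands in for the spectrogram an image model would generate; the file names are placeholders.

# Minimal sketch of the spectrogram-to-audio step, using librosa's
# Griffin-Lim implementation. The magnitude spectrogram is recomputed
# from an existing file purely as a stand-in for a generated one.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("input.wav", sr=22050)               # placeholder input
S = np.abs(librosa.stft(y, n_fft=2048, hop_length=512))   # magnitude spectrogram

# Griffin-Lim iteratively estimates the phase that an image-based
# pipeline discards, producing a listenable waveform
y_hat = librosa.griffinlim(S, n_iter=32, hop_length=512)
sf.write("reconstructed.wav", y_hat, sr)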

That last part is perhaps one of the coolest demos the researchers put out. The site lets you play the input audio, where someone hums or whistles a tune, then lets you hear how the model reproduces it as an electronic synth lead, string quartet, guitar solo, etc. From the examples I listened to, it manages the task very well.
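
Google hasn't exposed the melody-conditioning part, but the input side of that trick, turning a hummed or whistled recording into a pitch contour a model could condition on, can be approximated with standard tools. A hedged sketch using librosa's pYIN pitch tracker follows; the file name is a placeholder, and this is not MusicLM's actual pipeline.

# Hypothetical sketch: extract a pitch contour from a hummed/whistled clip,
# the kind of melody signal a model could condition on. Just standard
# pitch tracking with librosa's pYIN implementation.
import librosa

y, sr = librosa.load("humming.wav", sr=22050)  # placeholder recording

f0, voiced_flag, voiced_prob = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
    sr=sr,
)

# Keep only frames where a pitch was actually detected
melody_hz = f0[voiced_flag]
notes = librosa.hz_to_note(melody_hz)
print(notes[:20])  # first detected notes of the hummed melody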

Like with other forays into this type of AI, Google is being significantly more cautious with MusicLM than some of its peers may be with similar tech.

 


 

“We have no plans to release models at this point,” concludes the paper, citing risks of “potential misappropriation of creative content” and potential cultural appropriation or misrepresentation.

It’s always possible the tech could show up in one of Google’s fun musical experiments at some point, but for now, the only people who will be able to make use of the research are other people building musical AI systems. Google says it’s publicly releasing a dataset, called MusicCaps, with around 5,500 music-text pairs, which could help when training and evaluating other musical AIs.
