One of the next big AI disruptions could happen in the music industry
The past few years have seen an explosion in the application of artificial intelligence to creative fields. A new generation of image and text generators is delivering impressive results. Now, AI has also found applications in music.
Last week, a team of researchers at Google released MusicLM, an AI-based music generator that can convert text prompts into audio tracks. It’s another example of the incredible pace of innovation in generative artificial intelligence over the past few years.
With the music industry still adjusting to the disruption caused by the internet and streaming services, there’s a lot of interest in how AI could change the way we create and experience music.
Automatically create music
Some AI tools now allow users to automatically generate musical sequences or audio segments. Much of this software is free and open source, such as Google’s Magenta toolkit.
Two of the most familiar approaches in AI music creation are: 1. continuation, where the AI continues a sequence of notes or waveform data, and 2. harmonization or accompaniment, where the AI produces something to complement the input, such as chords to go with a tune.
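To make the idea of continuation concrete, here is a minimal sketch of the simplest possible approach: learn which note tends to follow which in a training melody, then sample a continuation from those statistics. This is an illustration of the principle, not the method any particular product uses; real systems rely on far larger models, and the note numbers and function names below are arbitrary.

```python
import random

def train_bigrams(notes):
    """Count which note follows which in a training melody."""
    table = {}
    for a, b in zip(notes, notes[1:]):
        table.setdefault(a, []).append(b)
    return table

def continue_melody(table, seed_note, length, rng=random):
    """Extend a melody by sampling from the learned transitions."""
    out = [seed_note]
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:          # dead end: fall back to any known note
            choices = list(table)
        out.append(rng.choice(choices))
    return out

# MIDI note numbers for a short C-major phrase (60 = middle C)
melody = [60, 62, 64, 65, 64, 62, 60, 64, 67, 65, 64, 62]
table = train_bigrams(melody)
print(continue_melody(table, 60, 8))
```

Swapping in a different training melody changes the "style" of the continuation, which is the same intuition behind extending a Chopin tune with a system trained on other music.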
Similar to AI systems that generate text and images, musical AI systems can be trained on many different datasets. For example, you could extend a Chopin tune using a system trained in the style of Bon Jovi – as beautifully demonstrated in OpenAI’s MuseNet.
Such tools can be a great source of inspiration for artists with “blank page syndrome,” even if the artist ultimately provides the finishing touches. Creative stimulation is one of the immediate uses of today’s creative AI tools.
But where these tools may one day be even more useful is in extending musical expertise. Many people can write a melody, but far fewer know how to craft chords that evoke emotion, or how to write music in a variety of styles.
While music AI tools still have a way to go before they can reliably do the work of talented musicians, a handful of companies are developing AI platforms for music creation.
Boomy takes the minimalist route: users with no musical experience can create a song with a few clicks and then rearrange it. Aiva takes a similar approach but allows finer control; artists can edit every generated note in a custom editor.
There is a downside, however. Machine learning techniques are notoriously hard to control, and generating music with AI is a bit hit-and-miss for now; you can sometimes strike gold using these tools, but you may not know why.
An ongoing challenge for the people creating these AI tools is to enable more precise and deliberate control over what the generative algorithms produce.
New ways to control style and sound

Music AI tools also allow users to transform an existing music sequence or audio segment. For example, Google Magenta’s Differentiable Digital Signal Processing (DDSP) library performs timbre transfer.
Timbre is the technical term for the texture of a sound – the difference between a car engine and a whistle, say. Using timbre transfer, the timbre of an audio clip can be swapped for another.
Such tools are a great example of how AI can help musicians compose rich orchestrations and achieve entirely new sounds.
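As a toy illustration of how timbre relates to a sound’s spectrum (this is a simple filter, not the neural timbre transfer that DDSP performs), the sketch below builds a “bright” tone from two sine waves and then low-pass filters it, removing the upper harmonic and darkening the timbre. The frequencies and cutoff are arbitrary choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt

sr = 22050                      # sample rate in Hz
t = np.linspace(0, 1.0, sr, endpoint=False)

# A bright tone: 220 Hz fundamental plus a strong upper harmonic at 1100 Hz
tone = np.sin(2 * np.pi * 220 * t) + 0.8 * np.sin(2 * np.pi * 1100 * t)

# Low-pass filter at 500 Hz: removes the upper harmonic, darkening the timbre
b, a = butter(4, 500, btype="low", fs=sr)
darker = filtfilt(b, a, tone)
```

The pitch of `darker` is unchanged – only its texture differs – which is exactly the distinction between pitch and timbre that timbre-transfer tools exploit.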
In the inaugural AI Song Contest, held in 2020, Sydney-based music studio Uncanny Valley (which I work with) used timbre transfer to bring singing koalas into the mix. Timbre transfer joins a long history of synthesis techniques that have become instruments in their own right.
Music Separation

Creating and transforming music is only one part of the equation. A longstanding problem in audio work is “source separation”: taking a recording of a piece of music and dividing it into its separate instrument tracks.
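To give a flavor of what separating a mix involves, here is a sketch of a classical, non-AI baseline: harmonic–percussive separation. It median-filters a spectrogram along time (where sustained harmonics are smooth) and along frequency (where percussive hits are smooth), then uses the results as soft masks. Modern AI separators that pull out vocals, drums, and bass are far more capable, but the masking idea carries over. The window and filter sizes below are arbitrary.

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import median_filter

def hpss(y, sr, win=1024):
    """Split audio into harmonic and percussive parts by median-filtering
    the magnitude spectrogram in two directions and soft-masking."""
    f, t, S = stft(y, fs=sr, nperseg=win)
    mag = np.abs(S)
    harm = median_filter(mag, size=(1, 17))   # smooth along time -> harmonic
    perc = median_filter(mag, size=(17, 1))   # smooth along frequency -> percussive
    total = harm + perc + 1e-10
    # Each time-frequency bin is shared between the two parts
    _, y_h = istft(S * (harm / total), fs=sr, nperseg=win)
    _, y_p = istft(S * (perc / total), fs=sr, nperseg=win)
    return y_h, y_p
```

Feeding in a sustained tone with clicks added on top, the tone lands mostly in the harmonic output and the clicks in the percussive one; since the two masks sum to one, the parts also add back up to the original mix.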
While it’s not perfect, AI-powered source separation has come a long way. Its use could be a big deal for artists; some won’t like the idea that others can pick apart and reuse their recordings.
Meanwhile, DJs and mashup artists will gain unprecedented control over how they mix and remix tracks. Source separation startup Audioshake claims this will provide a new revenue stream for artists by making their music easier to adapt, such as for TV and film.
Artists may have to accept that this Pandora’s box has been opened, as was the case when synthesizers and drum machines first arrived and, in certain contexts, replaced the need for musicians.
But watch this space, because copyright law does offer artists protection from the unauthorized manipulation of their work. This is likely to become another gray area in the music industry, and regulation may struggle to keep up.
New Music Experiences

The popularity of playlists has revealed how much we enjoy listening to music that has some “functional” use, such as aiding concentration, relaxation, sleep, or exercise.
The startup Endel has made AI-powered functional music its business model, creating infinite streams designed to promote particular cognitive states.
Endel’s music can be linked to physiological data such as the listener’s heart rate. Its manifesto draws heavily on mindfulness practice and makes the bold suggestion that we can use “new technology to help our bodies and brains adapt to the new world,” with its rapid pace, busyness, and anxiety.
Other startups are also exploring functional music. Aimi is examining how individual electronic music producers can turn their music into interactive and limitless streams.
Aimi’s listener app invites fans to manipulate general system parameters, such as “intensity” or “texture,” or decide when a drop occurs. Listeners immerse themselves in the music rather than passively listening.
It’s hard to say how much of the heavy lifting AI is doing in these applications — potentially very little. Even so, such advancements are guiding companies’ visions of how the music experience might evolve in the future.
The Future of Music

The initiatives mentioned above challenge a number of long-standing conventions, laws, and cultural values around how we create and share music.
Will copyright laws be tightened to ensure that companies training AI systems on artists’ work compensate those artists? And what would that compensation be for? Will new rules apply to source separation? Will musicians who use AI spend less time making music, or make more music than ever before? If one thing is certain, it’s change.
As a new generation of musicians grow up immersed in the creative possibilities of AI, they will find new ways to work with these tools.
Such chaos is nothing new in the history of music technology, and neither powerful technologies nor age-old conventions can determine our creative future.