
Is AI the future of music composition?

03 Mar 2017

Interview type: text · Reading time: 7 minutes

When someone thinks about the future of our planet, the first things that come to mind are usually flying cars hovering around our cities, humanoid robots walking their dogs, moon voyages for tourists and, of course, top-of-the-Billboard-charts songs written entirely by computers!
 
Does this sound crazy? Well, it actually isn’t. These images could be part of our daily life sooner than you might expect. In fact, several research groups, companies and startups have already built prototypes of these very things.
 
These days the web has gone crazy over news and videos of the first flying drones able to transport a human being, and of Boston Dynamics’ intelligent robots carrying heavy boxes and jumping around the company’s labs. Elon Musk and his SpaceX have just announced their first tourist trip around the Moon. Most of all, though, there are many startups and research labs specializing in artificially intelligent algorithms used to compose music!
 
 
Among the many lines of research in the AI field, those related to the creation of original music are certainly making huge strides.
 
 
 
 
Research in the computer song-writing field started a long while ago.
 
The first hint of “machines that are able to write original music” can be found back in 1843, when Ada Lovelace wrote that “the Engine” (Charles Babbage’s Analytical Engine, a forerunner of the modern computer) “might compose elaborate and scientific pieces of music of any degree of complexity or extent”.
 
Over a century later came the first confirmation that this theory was actually true, when the composer Lejaren Hiller used a computer to produce what is considered the first computer-generated score, in the 1950s.
The process hasn’t stopped since then; it’s still developing, and we’ve now reached an amazing point.
Nowadays many companies and startups are carrying the research on AI applications in music forward. One of the best known is a UK-based company called Jukedeck, which has created an algorithm that generates original songs that can be used, completely royalty-free, as jingles for commercial ads or as film soundtracks.
 
We’re also living in a time when many big names in the tech industry are starting to get involved: Google has developed Magenta, a research project aiming to create computers that are able to compose “compelling and artistic” music.
Others, like IBM, are using AI to help musicians transform the style of their songs in an easy and affordable way with their Watson Beat software.
 
Many other startups are taking their first steps towards the future of music composition. Check out some of them on the list selected by Techstars, the well-known accelerator that recently started a new batch focused on music.
Two of the eleven selected startups are involved in music AI: Amper Music (an artificially intelligent composer that empowers you to instantly create and customize original music for your content) and Popgun (which uses deep learning to make original pop music).
 
Will the music industry come to depend on what these computers are able to write? It’s hard to believe, but still: it’s not impossible.
 
 
Just listen to what they managed to do at the Sony Computer Science Laboratories in Paris: a whole song composed entirely by an algorithm (though produced and performed by real humans). Quite impressive, right?
 
 
 
 
Musicians and composers who make a living from their art are understandably quite scared about this. The idea that companies or advertising agencies, which usually rely on real musicians to create original music, could decide to use machines able to do the same job in half the time and at half the cost is not very reassuring for anyone working in the music industry.
Ed Newton-Rex, founder of Jukedeck, recently stated in an interview: “I think it would be misleading to say that AI won’t take any jobs, but I think that, frankly, it’s going to in every industry”.
 
Of course, you can’t simply block the evolution and development of new technologies; trying to do so would be a mistake.
 
This is why it’s so important to look at AI in the music industry as a creative tool that any musician, or aspiring musician, can use to express their ideas. Music, just like the tools to create it, will become more accessible to larger audiences, and the whole listening experience will change radically, becoming more unique and original.
 
We must think of AI as a tool rather than a shortcut: a medium that allows you to make even more complex and dynamic music, adaptable to contexts that are constantly changing, such as video games. A constantly evolving soundtrack that changes in response to the player’s movements can be an extremely immersive experience.
 
This is exactly what the developers of No Man’s Sky had in mind when they made the game, bringing in the band 65daysofstatic to create its soundtrack. In No Man’s Sky the music is constantly evolving, and every scene can be scored by an almost infinite variety of melodic takes on the same song. Paul Wolinski, a member of the band, says: “Our responsibility was to create these audio states that make sense in their own right, and could work in conjunction with one another. Technically this is a soundtrack that can go on forever”.
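To make the “audio states” idea a little more concrete, here is a minimal sketch in Python of how a game might blend between pre-composed musical layers as the on-screen action changes. Everything in it (the AudioState and AdaptiveSoundtrack names, the 0-to-1 intensity scale, the fade logic) is a hypothetical illustration, not how No Man’s Sky or 65daysofstatic actually implemented their system.

```python
# Hypothetical sketch of an adaptive soundtrack: pre-composed "audio states"
# that crossfade as the gameplay intensity changes, so the music never hard-cuts.

from dataclasses import dataclass


@dataclass
class AudioState:
    """A pre-composed musical layer, tied to a gameplay intensity (0 = calm, 1 = frantic)."""
    name: str
    intensity: float


class AdaptiveSoundtrack:
    """Moves each layer's volume towards a target that depends on the current game intensity."""

    def __init__(self, states, fade_speed=0.05):
        self.states = states
        self.fade_speed = fade_speed                     # volume change allowed per update
        self.volumes = {s.name: 0.0 for s in states}

    def update(self, game_intensity: float):
        """Call once per frame with the current gameplay intensity in [0, 1]."""
        for state in self.states:
            # Layers whose intensity is close to the game's get louder; the rest fade out.
            target = max(0.0, 1.0 - abs(state.intensity - game_intensity) * 2)
            current = self.volumes[state.name]
            if current < target:
                current = min(target, current + self.fade_speed)
            else:
                current = max(target, current - self.fade_speed)
            self.volumes[state.name] = current
        return self.volumes


# Example: drifting from quiet exploration into a chase.
soundtrack = AdaptiveSoundtrack([
    AudioState("ambient_drone", 0.0),
    AudioState("pulsing_arps", 0.5),
    AudioState("full_band", 1.0),
])
for intensity in (0.0, 0.3, 0.8):
    print(intensity, soundtrack.update(intensity))
```

The design choice worth noticing is that the layers are still written by humans; the code only decides how loudly each one plays at any moment, which is exactly the “tool, not replacement” role described above.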
 
 
Can human beings and computers coexist? Yes, absolutely. But only if machines are used as an extra tool to improve someone’s work, and not as a mere replacement for the person or artist.
 
 