Automation in music: how far can it go?

11 Oct, 2019 | 6 Minute Read

The Digital Audio Workstation

The term ‘automation’ can be defined as the technology by which a procedure is carried out with minimal human assistance, usually by a robot or a computer. Mozart might be mesmerized by what’s going on with automation in the music world today. Robots may not yet be playing violins, but a range of devices with embedded automation now runs through modern music production and creation.

Take the Digital Audio Workstation (DAW) as a starting point. This is music production software, or hardware, used to record, control and edit digital audio files. The main element controlled is volume, but at a more complex level DAWs are used to mix multiple tracks of audio recordings and to add audio effects. All these procedures are carried out in the post-performance editing process.
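To make ‘mixing multiple tracks’ concrete at the signal level, here is a minimal Python sketch, using NumPy, of what a DAW ultimately computes when it sums tracks at different levels. The gain values are invented and sine tones stand in for real recordings.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (CD quality)

# Two hypothetical one-second mono "tracks": a vocal and a guitar,
# represented here by simple sine tones instead of real recordings.
t = np.linspace(0.0, 1.0, SAMPLE_RATE, endpoint=False)
vocal = np.sin(2 * np.pi * 440.0 * t)    # A4
guitar = np.sin(2 * np.pi * 220.0 * t)   # A3

# Mixing, at its simplest, is a gain-weighted sum of the tracks.
vocal_gain, guitar_gain = 0.8, 0.5       # arbitrary example levels
mix = np.clip(vocal_gain * vocal + guitar_gain * guitar, -1.0, 1.0)
```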

Why do musicians use automated processes?

The 1970s gets blamed for a lot, but it can take the credit for the rise of automation in music. The need for automation came about when studios moved from 8-track tape machines to multiple, synchronized 24-track recorders. Mixing tracks could be a long, difficult process requiring up to four people, and the results were almost impossible to reproduce.

Technological leaders in the industry, such as Solid State Logic, developed mixing desks which enabled one engineer to create a complex mix, and, most importantly, save the parameters, so the mix could be easily replicated. However, in the 1970s, the computers required to power these desks were a rarity. Producers of bands such as Queen managed to get hold of this equipment, and ‘A Night At The Opera’ was one of the first albums to benefit from this state-of-the-art technology.

How is mix automation making the job so much easier?  

In the 1960s automation hadn’t yet arrived on the music scene. Imagine if the Beach Boys had performed in the studio, and individual recordings had been made of each performer and each musical section – intro, verse, refrain, bridge, outro. Now suppose that, in the post-performance phase, the producer listened back and realised that although the levels were fine for the verse, in the chorus Brian Wilson and Mike Love's voices were drowning out everyone else's.

What did the producer do? He manually pulled the volume fader for Wilson and Love's vocals down when the song entered the chorus, and pushed it back up at the end, sticking tape next to the fader so he could remember where the slider should sit. Each time he played the song back, he had to remember to push the fader down, and then back up.

The more adjustments that had to be made, the more complex and unwieldy the production process became, and the more people were needed to operate the desk. Digital mix automation simplified this, enabling one person with a screen and a console to adjust the sound files and store the final results in memory.
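As a rough illustration, and not any particular DAW's internals, the following Python sketch shows what a stored volume-automation ‘move’ amounts to: a gain envelope over time, pulled down for an assumed chorus, that is applied identically on every playback. The track, the chorus timings and the gain values are all made up for the example.

```python
import numpy as np

SAMPLE_RATE = 44100
duration_s = 60                                   # hypothetical one-minute song
n_samples = SAMPLE_RATE * duration_s

# Placeholder vocal track (random noise stands in for a real recording).
vocal = np.random.uniform(-0.5, 0.5, n_samples)

# Automation envelope: unity gain everywhere, pulled down during the chorus.
envelope = np.ones(n_samples)
chorus_start = 20 * SAMPLE_RATE                   # assumed chorus timings
chorus_end = 35 * SAMPLE_RATE
envelope[chorus_start:chorus_end] = 0.6           # "fader down" in the chorus

# The stored envelope is applied the same way on every playback,
# so nobody has to ride the fader by hand.
automated_vocal = vocal * envelope
```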

Mixing with a DAW doesn’t just involve simple volume controls. There is EQ, a volume control for specific frequencies, usually used to filter low or high frequencies in or out, making a sound feel nearer or farther away. Saturation is much the same concept in music production as it is in digital photography: increasing the saturation of a digital signal makes it sound warm and rich. Reverberation (continuing the sound after it is produced) and delay (producing an echo) are other effects in the DAW’s repertoire, as is panning, where the sound is moved across the stereo spectrum, giving the feeling that it is travelling from one speaker to another.
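To give a flavour of one of these effects, here is a small Python sketch of constant-power panning; the function name and values are illustrative rather than any DAW's actual API.

```python
import numpy as np

def pan_mono(signal: np.ndarray, pan: float) -> np.ndarray:
    """Constant-power pan: -1.0 is hard left, 0.0 is centre, +1.0 is hard right."""
    angle = (pan + 1.0) * np.pi / 4.0            # map [-1, 1] onto [0, pi/2]
    left = np.cos(angle) * signal
    right = np.sin(angle) * signal
    return np.stack([left, right], axis=-1)      # shape: (n_samples, 2)

# Example: a one-second 440 Hz tone placed slightly right of centre.
t = np.linspace(0.0, 1.0, 44100, endpoint=False)
stereo = pan_mono(np.sin(2 * np.pi * 440.0 * t), pan=0.4)
```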

Sound creation and electronic music

Automation in music goes further than just mixing. It can involve sound creation. This moves into the realm of electronic music, which has been around since the early 1900s but really took off with the invention of the Moog synthesizer by Robert A. Moog in the early 1960s.

Moog built a machine containing synthesizer circuits: filters, amplifiers and oscillators that could be connected together in different configurations to produce sounds. In the 1970s the synthesizer, together with drum machines and turntables, had a major influence on popular music as new genres, such as krautrock, emerged. Through the 1980s the influence of electronic music increased, and the incursion of digital technology resulted in the development of the Musical Instrument Digital Interface (MIDI). MIDI enabled the musician to connect a range of devices together, e.g. computer, keyboard, synthesizer and drum machine, plug them into a DAW, and electronically produce music.
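The oscillator-filter-amplifier chain Moog wired together in hardware can be sketched in software. The toy Python example below is not a model of any real Moog module; it simply patches a naive sawtooth oscillator through a one-pole low-pass filter and a decaying amplifier stage.

```python
import numpy as np

SAMPLE_RATE = 44100

def saw_oscillator(freq_hz: float, seconds: float) -> np.ndarray:
    """Naive sawtooth oscillator."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return 2.0 * (t * freq_hz - np.floor(0.5 + t * freq_hz))

def lowpass_filter(signal: np.ndarray, cutoff_hz: float) -> np.ndarray:
    """One-pole low-pass filter (a crude software stand-in for an analogue filter)."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / SAMPLE_RATE)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

def amplifier(signal: np.ndarray) -> np.ndarray:
    """Amplifier stage with a simple linear decay envelope."""
    return signal * np.linspace(1.0, 0.0, signal.size)

# "Patch" the modules together: oscillator -> filter -> amplifier.
note = amplifier(lowpass_filter(saw_oscillator(110.0, 1.0), cutoff_hz=800.0))
```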

Electronic music continues to be centre stage in the pop music scene, its popularity boosted by the arrival of affordable music technology. 

How far can automation in music go?

Historically, automation in the music field has had one thing in common with automation in most other areas: human intervention. But is it possible to go a step further? Can music be created without any human input at all? Some will argue that, with the advent of Artificial Intelligence, we are close to that.

David Cope (b. 1941), Emeritus Professor of Music at the University of California, is widely regarded as the father of artificial music composition. His software system, EMI (Experiments in Musical Intelligence), is considered a ground-breaker in the field. Using EMI, he released a number of records with artificially composed songs. It all stemmed from a bout of composer’s block early in 1981. To overcome the block, he fell back on his knowledge of computer programming and created a program that could compose original tracks.

Cope established three main principles which continue to this day to form the basis for artificial music composition; a toy sketch of how they might fit together follows the list.

1) Deconstruction: The process by which music compositions are analysed and separated into parts.

2) Identification of signatures: The process of identifying common features that characterize the style of a type of music, or of a composer.

3) Compatibility: The recombination of common features, patterns and styles to create new, original music.
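The Python below is not EMI itself, just a toy in its spirit: it deconstructs short note sequences into adjacent pairs, treats the resulting transition table as a crude ‘signature’, and recombines it into a new sequence. The melodies are invented for the example.

```python
import random
from collections import defaultdict

# 1) Deconstruction: break example melodies (note names) into adjacent pairs.
melodies = [["C", "E", "G", "E", "C"], ["C", "D", "E", "G", "C"]]
transitions = defaultdict(list)
for melody in melodies:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

# 2) Identification of signatures: the transition table is a crude stand-in
#    for stylistic fingerprints (which notes tend to follow which).

# 3) Compatibility / recombination: walk the table to build a new sequence
#    that shares the source material's local patterns.
def recombine(start, length):
    note, result = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(note)
        note = random.choice(choices) if choices else start
        result.append(note)
    return result

print(recombine("C", 8))
```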

Among the many pieces of artificial music David Cope produced using these principles (by, he claimed, simply pressing a button) are pieces in the style of Rachmaninoff and, yes, Mozart. Wolfgang Amadeus’s amazement at the scale of automation in music might be complete if he could agree with the general consensus: Cope’s work was not just copying his music, but had taken his original style and created something recognizably Mozart, yet entirely new.

More recently, but following on from Cope’s principles, Sony CSL’s Flow Machines software has been used to scan sheet music, assimilate the data, and produce seemingly original music in the style of famous bands and composers. The Beatles-like song ‘Daddy’s Car’ has received a mixed reception, particularly regarding the lyrics, but the style is unmistakable.

Up to now, automation in music has always had some kind of human input. Will we get to the stage where AI is capable of producing music entirely from scratch? Music which would appeal to humans? Can we exclude ourselves completely from the creative process? Do we really want to? Only the future will tell.
