Deciphering the Jargon Behind Modern Music Production

The technology used in music production began to change dramatically during the 1980s, when Atari included a set of Musical Instrument Digital Interface (MIDI) sockets directly on the motherboard of its new Atari ST home computer. The ST’s primary competitor, the Commodore Amiga, lacked this built-in functionality, although it could easily be added at low cost via an optional MIDI interface. Even so, Atari’s decision to make MIDI part of the ST’s default setup undoubtedly contributed enormously to that machine becoming the computer of choice for many recording studios of the time. The first version of Steinberg’s Cubase sequencing software was written for the Atari ST and was revolutionary at the time of its release in 1989.

The Basics

The first versions of software such as Cubase were known as sequencers – the term Digital Audio Workstation (DAW) did not come along until much later, around the time that computers became powerful enough to play back sound effects and samples, create new sounds, and apply effects without the need for additional MIDI equipment. The term VST, short for Virtual Studio Technology, was introduced in 1996 and was originally used only for virtual effects, which usually replicated the processes of standard studio equipment such as reverbs, echoes, compressors, and equalisers. Later, the VST standard was expanded to include VST Instruments, or VSTis, which were software representations of the hardware synthesisers commonly used alongside the computers and sequencing software of the day.

The Audio Signal

When the computer was merely instructing synthesisers and effects boxes as to what they should do, it didn’t usually matter what audio capabilities the host computer had of its own. All of this changed once the computer became the host for virtual effects and instruments, and especially once it became responsible for playing back audio samples and recording the final master in place of a dedicated Digital Audio Tape (DAT) machine. In the early days, parameters such as sample rate, bit depth, and the resulting data rate were extremely important – computers could not handle the huge amounts of information that they can today, so recording and playing back the main vocals of a song via the computer was prohibitively resource-intensive. Today, keeping all of your samples at a minimum of CD quality (44.1kHz at 16-bit) is no problem at all, and some people even choose to keep their original recordings in higher-quality formats such as 192kHz and/or 32-bit. You should avoid lossy compression wherever possible in your signal chain – for example, do not keep your samples in MP3 format, and do not export your master files to MP3 unless they are intended solely for online distribution.
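To see why early machines struggled, the numbers can be worked through directly: uncompressed audio data rate is simply sample rate × bit depth × channel count. The sketch below (the function names are our own, purely for illustration) compares CD quality with a high-resolution format:

```python
# Rough data-rate and file-size arithmetic for uncompressed PCM audio.
# bits per second = sample_rate (Hz) x bit_depth (bits) x channels

def pcm_data_rate(sample_rate_hz, bit_depth, channels):
    """Return the uncompressed data rate in bytes per second."""
    return sample_rate_hz * bit_depth * channels // 8

def file_size_mb(sample_rate_hz, bit_depth, channels, seconds):
    """Approximate file size in megabytes (1 MB = 10**6 bytes)."""
    return pcm_data_rate(sample_rate_hz, bit_depth, channels) * seconds / 1_000_000

# CD quality: 44.1kHz, 16-bit, stereo
print(pcm_data_rate(44_100, 16, 2))       # 176400 bytes/s
print(file_size_mb(44_100, 16, 2, 180))   # roughly 31.8 MB for a 3-minute track

# High-resolution: 192kHz, 32-bit, stereo
print(pcm_data_rate(192_000, 32, 2))      # 1536000 bytes/s
```

The high-resolution stream is nearly nine times the data of the CD-quality one, which is why formats like 192kHz/32-bit were out of reach on older hardware.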

Working “Inside the Box”

The term “VST” has evolved into a catch-all description for any kind of simulated effect or synthesiser software, something made possible by Steinberg’s generous decision to open up the standard so that it could be supported by other music software packages. Once computers were running digital effects and instruments, it became necessary for other elements of the recording studio to be integrated into the sequencing software as well. When hardware boxes provided all of the functions necessary for music production, some would argue that certain things were much simpler than they became in the era of the Digital Audio Workstation. One example is the two common types of effects used in DAW software.

First, there are Insert Effects, which sit directly in the signal chain: the entire signal passes through the effects plug-in, and the sound that comes out the other side is a new piece of audio created by processing the whole of the “inserted” sound, replacing the original in the chain. In a hardware setup this is straightforward, as you simply plug each piece of equipment into the next in the order that you wish the effects to be applied. Typical uses for insert effects are compressors and EQs, where the effect applied to even the very first few milliseconds of the sound can be of vital importance.

Second, you have Send Effects, which were usually connected via audio ports on the back of hardware mixing desks. One or more channels could be routed through these ports, with the processed signal returned to the desk – usually on a separate channel – allowing you to choose how much of the “dry” signal and how much of the affected (“wet”) signal was audible. Sends are perfect for effects such as delays or echoes, where you usually want to retain the original sound in addition to the new sounds created by applying the effect.
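The insert/send distinction above can be sketched in a few lines of code. This is a deliberately minimal model (the helper names are our own, and signals are just lists of sample values), not any real DAW’s routing API:

```python
# Minimal sketch of insert vs. send routing.
# A "signal" is a list of float samples; an "effect" is any function
# that maps one sample list to another.

def apply_insert(signal, effect):
    # Insert: the whole signal passes through the effect, and the
    # output REPLACES the original in the chain.
    return effect(signal)

def apply_send(signal, effect, send_level, return_level):
    # Send: a copy of the signal goes to the effect at send_level;
    # the affected ("wet") result comes back on its own channel and
    # is mixed with the untouched ("dry") original.
    wet = effect([s * send_level for s in signal])
    return [d + w * return_level for d, w in zip(signal, wet)]

def halve(signal):  # stand-in for a real effect such as a delay tap
    return [s * 0.5 for s in signal]

dry = [1.0, 0.5, 0.25]
print(apply_insert(dry, halve))          # [0.5, 0.25, 0.125] - original is gone
print(apply_send(dry, halve, 1.0, 1.0))  # [1.5, 0.75, 0.375] - original is kept
```

The two print lines show the practical difference: the insert’s output contains only the processed sound, while the send’s output is the dry signal plus the wet signal on top.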

Hardware Terms in the Software World

Modern sequencing packages such as Cubase and Logic are designed to emulate the environment of a traditional recording studio, so the next generation of music producers has had to learn the meaning behind a multitude of terms they might never have come across if music production software had been designed from the ground up for the computer. Some packages, such as Ableton Live, have tried to do away with as much legacy terminology and structure as possible, but many elements of the traditional hardware studio still feature in these programs because there simply aren’t better mechanisms for recreating the functionality they offered.

One such example is envelopes – a common method of controlling how a sound develops over time, or how an effect is applied to a sound. The simplest type is the ADSR envelope, with each letter standing for a different phase. The attack phase is the period from when you first play a note on an instrument, such as a keyboard, until the sound reaches its initial peak volume. The decay phase comes directly afterwards: you would still have your finger on the key at this point, while the sound falls back from its peak towards its steady level. The sustain portion is that steady level – the volume the sound holds for as long as you keep the key pressed. The final part, the release phase, controls what happens after you let go of the key: does the sound stop immediately, or fade out gradually over some time?
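The four phases described above can be sketched as a simple envelope generator. This is a linear-segment sketch under our own assumptions (phase lengths measured in samples, function name our own), not any particular synthesiser’s implementation:

```python
# A minimal linear ADSR envelope generator.
# Returns a list of amplitude multipliers (0.0 to peak) to apply to a note.

def adsr(attack, decay, sustain_level, sustain, release, peak=1.0):
    env = []
    # Attack: rise from silence to the peak level
    for i in range(attack):
        env.append(peak * (i + 1) / attack)
    # Decay: fall from the peak down to the sustain level
    for i in range(decay):
        env.append(peak - (peak - sustain_level) * (i + 1) / decay)
    # Sustain: hold steady for as long as the key is held
    env.extend([sustain_level] * sustain)
    # Release: fade from the sustain level to silence after key-up
    for i in range(release):
        env.append(sustain_level * (1 - (i + 1) / release))
    return env

env = adsr(attack=4, decay=2, sustain_level=0.5, sustain=3, release=4)
# env rises to 1.0 over 4 samples, falls to 0.5 over 2 samples,
# holds 0.5 for 3 samples, then fades back to 0.0 over 4 samples.
```

Multiplying a raw oscillator signal sample-by-sample with such an envelope is the basic mechanism behind how a software instrument shapes each note.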

Summing Up

The terms discussed in this article are only the beginning of any budding music producer’s journey towards becoming an experienced, professional computer-based musician in the 21st century. Deciding which hardware and software you wish to learn, and understanding the terminology behind them, is enough to get you started, but be prepared for a long haul if you wish to realise the full potential of this craft – creative pursuits have never been easy, and some people believe that technology has only made the tasks harder. At the same time, the possibilities that modern Digital Audio Workstation software opens up are almost endless – we only hope we have inspired you to learn more!