Josh Srago writes…
One of the great advantages of current technology is that we truly get to start with a blank slate when it comes to how we want to process our audio signals. We can add as few elements as we like, sticking to the basics of gate, compressor and EQ, or add an auto-mixer, grouping, phase, flange, echo cancellation and a multitude of other options until we hit the maximum DSP possible in the given hardware. The issue we face as audio professionals is this: if you don’t fully understand what each signal processing element is doing, how can you understand why that element is necessary? This leads us to the biggest concern with an open architecture audio system: what is the optimal order in which to insert the given signal processors for a specific application?
Let’s keep it simple in this example by looking at just the three most commonly used processors (gate, compressor and EQ) so we can see how the signal is affected by the order we place them in. If we put the units in that order, with the gate first, the compressor second and the EQ third, here is what happens to the signal as it moves through our processor. First, only signal above a certain level is allowed to pass through the gate, helping us eliminate excess noise from other sounds in the room, or buzzes and hums that could deteriorate the signal we are trying to broadcast. Once we’ve passed through that gate we hit the compressor, where we take the peaks in the level of the signal and bring them down closer to the valleys, decreasing the variance in level so that the signal is much more consistent. Finally, we pass through our EQ; for this example I’ll go with a seven-band parametric EQ. One thing people forget is that the EQ is, in effect, a gain stage, because it allows us to pick and choose frequencies to turn up or down. So if the signal we’re adjusting happened to be a voice, we would have eliminated the noise around the microphone, brought consistent level to the speech or singing, and then turned down the frequencies we didn’t want and possibly turned up some frequencies we wanted to emphasise.
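To make that order concrete, here is a toy Python sketch of the gate, then compressor, then EQ chain. These are simplified level-based models rather than real audio filters, and the threshold, ratio and gain values are illustrative assumptions, not settings from the article.

```python
def gate(samples, threshold=0.1):
    """Pass samples only when their level exceeds the threshold."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

def compress(samples, threshold=0.5, ratio=4.0):
    """Reduce the amount by which peaks exceed the threshold."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

def eq_gain(samples, gain_db=3.0):
    """Stand-in for one EQ band: EQ is, in effect, a gain stage."""
    gain = 10 ** (gain_db / 20)
    return [s * gain for s in samples]

# A few sample amplitudes: quiet room noise, speech, a loud peak.
signal = [0.05, 0.3, 0.9, 0.2, 0.02]

# Gate first, compressor second, EQ third.
processed = eq_gain(compress(gate(signal)))
```

The quiet samples are zeroed by the gate, the 0.9 peak is pulled down toward the rest, and the EQ gain then lifts the band we chose to emphasise.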
So what happens if we change the order of those elements? Let’s move the EQ ahead of the compressor in this case, but keep the gate up front. The gate function remains the same, but now we tune in the frequencies we want to emphasise before we hit the compressor, which could play havoc with the dynamic range depending on what signal is passing through. For a wide-spectrum instrument (think piano), if we were to turn up specific frequencies to help it stand out in the overall mix, those frequencies could end up much more heavily compressed when played, because of a possibly large level discrepancy across that wide band of frequencies. Let me emphasise, this doesn’t make it wrong to put the compressor third and the EQ second, but you have to be aware of how moving those DSP elements around is going to change the sound and tone of the signal.
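A quick numeric sketch of that interaction, again with illustrative threshold, ratio and boost values: a +6 dB EQ boost applied before the compressor pushes the band over the threshold and gets partially squashed, while the same boost applied after the compressor survives intact.

```python
def compress(level, threshold=0.5, ratio=4.0):
    """Simple static compression curve on a positive signal level."""
    if level <= threshold:
        return level
    return threshold + (level - threshold) / ratio

boost = 10 ** (6 / 20)   # a +6 dB EQ boost on one frequency band
level = 0.4              # band level before processing (assumed value)

eq_then_comp = compress(level * boost)   # boost pushes the band over the threshold
comp_then_eq = compress(level) * boost   # band stays under the threshold; boost survives
```

Here `eq_then_comp` comes out lower than `comp_then_eq`: the compressor has eaten part of the boost, which is exactly the level discrepancy effect described above.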
In a final example, let’s shift the whole thing around and put the EQ first, the gate second and the compressor third. How will the signal react now? With the EQ up front we have the ability to fine-tune out the frequencies that could be causing hums or pops, gaining a great deal of control over which frequencies are allowed to pass through to the next stage. After the EQ we have the gate where, again, we can ensure that only the signal we want passes through to the next processor, helping us prevent unwanted noise from coming across the system. In this case we could be using both the EQ and the gate to help control noise, because they each manipulate the signal in a different way: one lets us fine-tune the frequencies while the other puts a clamp on the overall unwanted sounds. Once out of the gate we have a sound wave highly refined for frequency and noise, which then passes through the compressor to help control the dynamic range and keep it consistent.
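In an open-architecture processor, the chain amounts to an ordered list of stages, so reordering the processing is just reordering the list. A minimal sketch under that assumption (the stage models and values are illustrative): note how a sample that survives when the gate runs first can be gated out entirely once an EQ cut runs ahead of the gate.

```python
def gate(s, threshold=0.1):
    """Zero any sample whose level is below the threshold."""
    return s if abs(s) >= threshold else 0.0

def compress(s, threshold=0.5, ratio=4.0):
    """Pull peaks above the threshold back toward it."""
    level = abs(s)
    if level > threshold:
        level = threshold + (level - threshold) / ratio
    return level if s >= 0 else -level

def eq(s, gain_db=-6.0):
    """One EQ band, here cutting 6 dB from a noisy region."""
    return s * 10 ** (gain_db / 20)

def run_chain(stages, samples):
    """Apply each stage in list order to every sample."""
    for stage in stages:
        samples = [stage(s) for s in samples]
    return samples

samples = [0.08, 0.9, 0.15]
order_a = run_chain([gate, compress, eq], samples)  # first example's order
order_b = run_chain([eq, gate, compress], samples)  # this example's order
```

In `order_b`, the EQ cut drops the 0.15 sample below the gate threshold, so the gate removes it; in `order_a` it survives. Same three stages, different result.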
Those are just three orderings of three potential DSP elements, and they are available to you in everything from the most basic audio processing units, whether in a home or professional studio, to the most sophisticated devices on the market for professional or commercial audio systems. Understanding how each element changes the original signal will help you put together the proper order for the results you want to achieve. The key thing to keep in the back of your mind is that there is no one way or “right” way to order them, because the best order is completely dependent on those results.