Part Five
Myths and Legends Concerning Digital Equalizers and Processors
Ever since the popular Behringer Feedback Destroyer (a.k.a. BFD) attracted the attention of home theater enthusiasts more than a decade ago, the conventional wisdom has been that its input signal level should be as high as possible. This protocol
de rigueur, put forward by people (I expect) with more knowledge and/or training in electronics theory than real-life experience in pro audio, is supposed to ensure full resolution of the equalizer’s A/D converters, which in turn will simultaneously maximize dynamic range and minimize background noise levels.
As we shall see, this presumption is flat-out wrong. The fact is, modern 24-bit digital EQs and processors couldn’t care less about the signal levels they receive. They’ll work just as well with a consumer -10 dBV or professional +4 dBu signal. As a matter of fact, they’ll work perfectly fine with signals at the
low range of either gain structure. But you’d never know it from the advice that floats around the various home theater forums:
With this in mind you need 0dBFS from a digital source to be input into the Behringer DCX2496 at 9.75 Vrms (+22 dBu) to get the entire dynamic range available on the medium. If you can't get that high a voltage you need to raise the DCX's input level, or add a line level converter.
The point of the S-Convert [signal booster] is to get the analog level high enough to drive the digital DCX-2496 to near 0dBFS and get the most resolution out of the A/D converters and the highest signal-to-noise ratio. You can't get it back if you lose it there.
Like any digital device, the BFD will offer the best results if you feed it a proper level that uses all the available bits. If the loudest signal only enables half those bits, then the quiet passages will be in the noise.
As you decrease the input signal, the BFD’s noise will rise exponentially. Once the least significant bit and other low order bits are gone as a result of the low input voltage, the A/D converter is no longer a 24 bit device, but a lower bit device, producing higher noise figures.
However, after acquiring a BFD and spending some time with it, I found that the (substantial) background noise it displays is fixed and has no relation to the signal strength it receives. So, the prevailing wisdom didn’t seem to add up.
Where’s the knob?
Looking for answers, I pored over the manuals of numerous professional digital equalizers and other processors, from cheap to ultra-expensive, all with 24-bit resolution (like the BFD), and could not find a single manufacturer recommending maxed-out input levels. Even more surprising: By and large
the manuals scarcely mention input levels at all. Hmmm.
Furthermore, I noticed that it’s hard to find a late-model digital EQ or processor with any sort of provision for input level control. If these products need maximum signal strength to function properly,
why don’t they have an easy-access adjustment on the front panel? Hmmm?
Rane’s DEQ-60 1/3-octave and PEQ-55 parametric equalizers are possibly the only two recent-production digital EQs with both input level controls
and a recommendation in their manuals for setting them. The advice is to set the gain level a substantial
~10 dB below maximum. Hmmm!
The manuals for the Behringer DSP1124 and FBQ2496 Feedback Destroyers are another rare pair that discuss signal levels, even if the hardware itself has no input-level provision. I would submit that the max-levels axiom we typically hear is a
misinterpretation of what the Behringer manuals actually say. Behringer advises that the level meters should be kept out of the red (clipping)
merely for the sake of not overdriving the analog-to-digital (A/D) converters. What is
not stated is that levels are supposed to be pushed as close to clipping as possible as a matter of course, or that this is necessary for best performance. The manuals merely note that signals that are too low are undesirable.
Catching up with the times
So - why the dearth of level controls on these devices or advisements in their manuals? Post a question on the topic of digital level-setting at professional forums like the
ProSoundWeb or
Tape Op Message Board, where you’ll find people who use this kind of equipment for a living and have first-hand knowledge of its progress over the years, and they’ll tell you that the maximum-signal advice is obsolete. It dates back to the early ’90s or before, when 18- and 16-bit A/D converters (or even lower - yikes!) were the norm in professional digital processors. Today it’s accepted in pro audio circles that contemporary 24-bit processors for all practical purposes function identically to their analog counterparts.
By contrast, older low bit-depth processors had relatively high noise floors and did not resolve low-level signals very well, the latter of which is why they required input levels to be pushed as high as possible. And guess what? They included front panel gain knobs to quickly and easily accommodate that necessary function. As an example, the Yamaha DEQ7, the first professional digital equalizer to hit the market in 1987, had 16-bit converters and a dynamic range of only 86 dB. Note the prominent input knob in the picture below, just to the right of the power switch.
Yamaha DEQ7 Digital Equalizer c. 1987
Bit depth, quantization and dynamic range
Unfortunately, the only place where you’ll still find people touting the “maximum levels” advice for modern digital processors seems to be the home audio forums. What apparently hasn’t registered in our community is that greater bit depths have increased dynamic range to the point of making level concerns moot.
With pulse-code modulation (PCM) sampling, the bit depth is what determines both dynamic range and signal-to-noise ratio. The rule-of-thumb relationship between bit depth and dynamic range is that each 1-bit increase in depth adds about 6 dB (more precisely, 6.02 dB) of dynamic range. So, 24-bit digital audio has a theoretical maximum dynamic range of roughly 144 dB (6.02 x 24 ≈ 144), compared to about 96 dB for 16-bit. Another benefit of increasing the bit depth is finer amplitude or voltage “steps” (a.k.a. quantization), which enables low-level audio signals to be more precisely resolved.
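For readers who want to check the rule-of-thumb arithmetic themselves, here’s a quick Python sketch (the function name is mine, not anything from a spec sheet):

```python
import math

def dynamic_range_db(bits: int) -> float:
    # Each added bit doubles the number of amplitude steps, which is
    # worth 20 * log10(2), about 6.02 dB, of theoretical dynamic range.
    return bits * 20 * math.log10(2)

for bits in (16, 24):
    steps = 2 ** bits
    print(f"{bits}-bit: {steps:>10,} steps, "
          f"~{dynamic_range_db(bits):.1f} dB theoretical dynamic range")
```

Running it shows 16-bit topping out around 96 dB and 24-bit around 144 dB, exactly the figures quoted above.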
There are a number of ill-informed reasons commonly cited for the necessity of peaking out the input signals of modern 24-bit digital processors. One claims that anything lower than the maximum input level means that you’re not using all of the digital bits available, which will mean a loss of resolution. However, it’s generally accepted in the professional recording field that once you’re above 16 bits, optimizing signal levels is no longer an issue. This is because a 16-bit waveform, which has 65,536 amplitude or quantization “steps,” is considered the threshold of what is acceptable for hi-fi sound, because at that depth the human ear can no longer detect quantization errors at low levels. While there may be some debate about that in audiophile circles, a 24-bit waveform has 256 times more amplitude “steps” - 16,777,216. It should be obvious that a 24-bit system has sufficient resolution to perform well above the 16-bit threshold, even with reduced input voltage.
The case of the missing bits
Another outdated claim is that low input signals will result in the loss of the “least significant bits” (LSB). The theory goes that once the low-order bits are lost, the A/D converter is downgraded to a noisier, lower-bit device. “Digital electronics may have gotten better, but the math has not changed,” an adamant maximum-signal supporter once claimed on a discussion thread. Okay then, let’s take a look at the math.
Least Significant Bit in a 3-Bit System
The first thing that must be realized is this: LSBs can’t possibly contribute to a reduction of an A/D converter’s bit depth. As you can see from the above picture, LSBs are merely a single step - the lowest step - in the voltage “ladder” that is quantization.
Bit depth is what determines quantization (resolution), not the other way around. Therefore a loss of LSBs, even a large number of them, cannot possibly downgrade a converter to a lower bit-depth (lower-resolution) device. A 24-bit system in particular can shed several hundred thousand LSBs from its 16,777,216 quantization steps and still be comfortably above -
miles above - the 65,536 quantization resolution of a 16-bit system.
Indeed, we can go even further. Every bit of input level you give up halves the number of quantization steps the signal spans, but a 24-bit system has an enormous supply of them. Drop the input by a full 48 dB - eight whole bits - and the signal still spans roughly 65,536 steps, the complete resolution of a full-scale 16-bit system. It’s not much of a leap from there to see that you can abuse the input levels of a 24-bit system considerably before resolution ever becomes a concern.
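To put actual numbers on how attenuation eats into a 24-bit system’s supply of steps, here’s a hedged Python sketch (the function is my own illustration, not anything from a manufacturer):

```python
def steps_spanned(attenuation_db: float, bits: int = 24) -> int:
    # Quantization steps still spanned by a signal this many dB
    # below full scale, in a converter of the given bit depth.
    return int((2 ** bits) * 10 ** (-attenuation_db / 20))

# Eight bits' worth of attenuation (~48 dB) still leaves a 24-bit
# signal spanning roughly as many steps as full-scale 16-bit audio.
print(steps_spanned(0))    # 16,777,216 steps at full scale
print(steps_spanned(48))   # roughly 66,800 - still more than 65,536
```

In other words, a signal 48 dB down in a 24-bit system is quantized about as finely as a signal slamming the meters in a 16-bit system.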
Looking at it another way, a single LSB in a 24-bit system is a mere 1/16.7-millionth of the full-range signal, which works out to about -144 dBFS. Certainly, most sane people can recognize that degree of error is inaudible.
Any way you slice it, with a high-resolution digital system the least significant bit is exactly that: insignificant. Even a mathematically-challenged guy like me can see that.
Potential System Noise Added by Wholesale Loss of Bits
It should be beyond obvious that high-resolution 24-bit systems have effectively obliterated input signal issues. A 24-bit waveform, even at -14 dBFS, will certainly deliver dynamic range figures and low-signal quantization performance in vast excess of 16-bit. (Remember, 12-14 dB is the approximate difference between consumer and pro gear levels.) In other words, there is no reason to expect that operating a 24-bit digital processor at 14 dB or even 25 dB below full scale is going to reduce a potential 144 dB dynamic range to something akin to a cheap cassette tape (yes I know, I’m dating myself).
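The back-of-envelope arithmetic is simple enough to script. This sketch uses the theoretical ~6.02 dB-per-bit figures; the constants are illustrative, not measured specs:

```python
DR_24BIT = 144.5   # theoretical 24-bit dynamic range, dB (24 * ~6.02)
DR_16BIT = 96.3    # theoretical 16-bit dynamic range, dB (16 * ~6.02)

for backoff_db in (14, 25):
    remaining = DR_24BIT - backoff_db
    print(f"-{backoff_db} dBFS: {remaining:.1f} dB remaining "
          f"({remaining - DR_16BIT:+.1f} dB vs. full-scale 16-bit)")
```

Even 25 dB below full scale, a 24-bit signal theoretically retains more dynamic range than 16-bit audio driven right to the top.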
dBFS vs. dBVU
As it turns out, -14 dBFS is actually just about the right level for a digital processor. This is because the designers of professional digital gear have long tied the calibration of A/D converters to the output of traditional analog mixing consoles. This was necessary because digital processors were on the market long before digital mixing consoles (and it continues to be necessary because analog mixers aren’t going away anytime soon). Analog mixers use a signal reference known as dBVU (volume units), and 0 dBVU is the level where the mixer is at its optimal performance, delivering the least amount of noise with sufficient output to be far above the noise floors of any downstream processors. In contrast, digital gear uses a different signal reference, dBFS (full scale). As we know, a digital processor will hit its maximum at 0 dBFS and if pushed beyond that point will badly distort.
So, how do the dBFS and dBVU scales cross reference? Technically they can’t, because the digital peak scale is not equivalent to the analog RMS scale. But in the hardware-manufacturing industry there is an approximate consensus that a signal measuring 0 dBVU at the input of an A/D converter should come out of the converter somewhere in the -18 dBFS to -9 dBFS range. (Of course, there is some variance, depending on the output calibration of the analog device and the A/D converters used in the digital processor.)
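A tiny sketch makes the cross-reference concrete. The -18 dBFS alignment point below is an assumption chosen from the range just mentioned; real gear varies by manufacturer:

```python
# Hypothetical alignment: assume 0 dBVU at the analog output lands at
# -18 dBFS inside the A/D converter (actual hardware falls anywhere
# between roughly -18 and -9 dBFS).
DBFS_AT_ZERO_VU = -18.0

def vu_to_dbfs(level_dbvu: float) -> float:
    # Map an analog VU reading onto the digital full-scale axis.
    return level_dbvu + DBFS_AT_ZERO_VU

print(vu_to_dbfs(0))    # -18.0: nominal analog operating level
print(vu_to_dbfs(18))   # 0.0: the analog peak that just reaches clipping
```

Under this assumed calibration, an analog signal has a full 18 dB of peak headroom above its nominal level before the converter clips.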
It should be obvious that the manufacturers are smart people who know what they’re doing. If they’re satisfied that calibrating 24-bit A/D converters in the -18 dBFS range is sufficient to provide ample headroom and a vanishingly low noise floor for their equipment, there’s no good reason to second-guess them. It should be clear from the lack of advisements in their product manuals that there is no “optimum” level with 24-bit digital audio per se, as long as you are above 16 bits and below 0 dBFS.
Theoretic vs. real-world specs
Another theory for maxing out input signals goes something like this, using the ubiquitous Behringer Feedback Destroyer as an example: Even though the BFD is a 24-bit device and should be capable of delivering a dynamic range of 144 dB, notice its background noise spec of 94 dB unweighted. That’s barely 15 bits of resolution, which means that nine of the 24 bits are lost in the background noise. Since 15 bits is all the BFD really has available to define input signal levels, it’s imperative to supply it with the maximum possible signal strength.
In other words, “the bit-depth spec is meaningless, the noise spec is what really matters.” (Sidebar question: If a lower input signal means increased noise levels, then by pure logic shouldn’t the BFD be virtually screaming with background noise with
no input signal?)
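The “15 bits” figure in that theory is just the dB-per-bit rule of thumb run in reverse, which is easy to sketch in Python (function name mine):

```python
import math

def effective_bits(dynamic_range_db: float) -> float:
    # Invert the ~6.02 dB-per-bit rule of thumb: how many bits of
    # resolution does a measured dynamic-range figure amount to?
    return dynamic_range_db / (20 * math.log10(2))

print(round(effective_bits(94), 1))    # ~15.6 (the BFD's unweighted spec)
print(round(effective_bits(115), 1))   # ~19.1 (a top-tier 24-bit processor)
```

As we’ll see, this arithmetic is sound as far as it goes; the faulty step is treating a whole-unit noise figure as if it were a property of the converter alone.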
In reality, at a certain point - probably around 20 bits - the A/D converter is sufficiently quiet that it no longer has anything to do with the component’s overall background noise. Indeed, what this theory fails to consider is that
the A/D converters are only one part of the circuitry, which also includes other components that can have an effect on noise.
The fact is that “real world” realities force limitations on both the analog and digital sides of the specifications. For example, if 24-bit A/D converters were really able to deliver a dynamic range of 144 dB, they would have to be capable of resolving signals well under a microvolt! Naturally, they can’t do that. (This is why the earlier graph showed 16-bit and 24-bit A/D converters having about the same noise floors.) On the analog side, down in this range transistors and resistors produce noise just from the thermal motion of electrons. So even if A/D converters could be designed to resolve such low levels, the low-noise requirements of the other circuitry in the component - power supplies, balancing transformers or ICs, etc. - would be so stringent that it would either be impossible to build, or too expensive.
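Here’s how small one step actually is. The 9.75 Vrms full-scale figure is borrowed from the DCX2496 quote earlier purely as an illustrative reference; other converters have different full-scale voltages:

```python
# Assumed full-scale input voltage (~+22 dBu, per the DCX2496 quote
# earlier in this article - illustrative only, not a measured spec).
FULL_SCALE_VOLTS = 9.75

def lsb_volts(bits: int, full_scale: float = FULL_SCALE_VOLTS) -> float:
    # Voltage represented by a single quantization step (one LSB).
    return full_scale / (2 ** bits)

print(lsb_volts(24))   # ~5.8e-07 V: well under a microvolt
print(lsb_volts(16))   # ~1.5e-04 V for a 16-bit converter
```

No analog front end riding on real-world resistors and power supplies can cleanly resolve a signal that small, which is exactly why measured dynamic-range specs fall far short of the 24-bit theoretical figure.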
What is the result of these real-world limitations? Dig through those manuals I mentioned earlier and you’ll find that the best dynamic-range spec 24-bit processors can muster is between ~105-115 dB,
which is no better than the best analog gear.
If a 115 dB dynamic range spec is the best a digital equalizer can deliver, does that mean for all practical purposes we’re left with a 19-bit piece of equipment? Of course not. If that were the case, no one would have bothered to develop converters beyond 20-bit.
Vapor bits?
We can further illustrate the fallacy of the “lost in the noise” theory with the Behringer DSP1124’s on-board gain-switching between the consumer -10 dBV and professional +4 dBu signal levels.
Anyone who has connected the BFD in their full-range signal chain (i.e. for the main-channel speakers) can tell you that its background noise level jumps dramatically when you switch it from the consumer to the professional setting. Are we supposed to believe that the equalizer suddenly loses in the noise, like some kind of vapor, a good number of its digital bits
with the flip of a switch???
Of course not. In reality no bits are lost. The converters are merely
swamped by the residual noise generated by the BFD’s overall poor circuit design. “Swamped” is not the same thing as “lost.” The full bit-depth is there and will certainly deliver superior low signal-level resolution, even if other issues limit noise performance. Fortunately, we typically use the BFD in the subwoofer signal chain, where its noise is largely inaudible.
Headroom is more important
So - now we know why the level-setting topic gets scant mention in the manuals of current 24-bit processors: It’s a non-issue. Even if real-world issues prevent a true 144 dB dynamic range, 24-bit A/D conversion gives us the luxury of more than 50 dB of slack before poor input signal levels start to take a toll on noise levels, the humble BFD notwithstanding. What the “maximum levels” advocates fail to understand is that deeper bit-depth converters extend dynamic range
downward rather than upward. In other words, 24 bits is not louder than 16 bits, it’s
quieter.
Instead of running the BFD’s input levels all the way out to the top, it makes more sense to dial them back to allow for some headroom. After all, miles of headroom, afforded by excellent resolution of low-level signals, is one of the benefits of 24-bit audio. Headroom is important with the BFD because when its rear-panel switches are in the consumer -10 dBV position, it is possible for the subwoofer output of some home theater receivers to drive the meters into the red.
Since the BFD has no on-board provisions for level setting, the common recommendation has been to drive the input signal to just below clipping, using a DVD with a bass-heavy sound track played at the highest volume setting you’d ever anticipate using. The problem with this advice is that DVDs can be unpredictable. For instance, the
Dark Knight Batman movie has bass levels far more severe than the “reference” DVD I had previously used to set up my system. The point is, you never know what a DVD is going to dish out. To arbitrarily pick something off the shelf as “the standard” and expect that nothing more demanding will ever come down the pike is foolish.
According to bench tests that have been performed on the BFD,
there is a mere tenth of a volt of headroom between the point where the meter’s red LED lights up and the onset of clipping. So you certainly want to avoid the red LEDs. Leave some headroom and set the meter in, say, the -10 to -12 dB range. If it’s good enough for Rane’s $800 digital EQs, it’s good enough for the $100 BFD.