re: Gain Structure for Home Theater: Getting the Most from Pro Audio Equipment in Your System
Let me get a little long-winded here.
The point you seem to be making is that most AVRs should be able to run a pro amp to maximum power with no problems. That is true for most AVRs, especially mid-range models and up. The problem over the years has been one group that thinks there is never a situation where an AVR can't drive a pro amp properly, another group that has decided no AVR can drive a pro amp properly, and the group of newbies who can't tell who to believe. Do they trust one of those two camps, or the other guy saying try it out and, if it doesn't work, add a converter?
The issues with pro signal processing gear added into the chain are another nut altogether.
The discussion on the home audio forums, at least what I’ve seen in the past ten years, has been that pro gear requires a +4 dBu signal, and if you can’t get that from your home theater pre amp/AVR you’re going to have noise, reduced dynamic range, etc.
Unfortunately, it is true. That doesn't mean it can't work fine that way, but to say it is anything but true is just wrong. Of course you have to understand what +4 dBu is.
Most pro gear is set up to operate in its optimal range at 1.228 volts balanced (+4 dBu, or 0 VU). That is not the ceiling, though. It is not like 0 dBFS on a DVD; it is the nominal signal level. It would be like -20 dB in the film world (-30 dB is the HT equivalent).
Now let's say the balanced device has an input ceiling of +22 dBu (very common), and your unbalanced out can only do 5 V max clean (a common LFE-out number). +22 dBu is roughly 10 V, so you are 6 dB closer to the noise floor than you should be and have lost about 1 bit of resolution. If this was a signal processor and you connected it to a consumer unbalanced amp, you just lost another 6 dB due to the conversion.
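If you want to check that arithmetic yourself, here is a minimal Python sketch. It assumes the standard 0.7746 Vrms reference for 0 dBu and the 5 V consumer ceiling used above; the numbers are the same ones from this example, not anything universal.

    import math

    DBU_REF_V = 0.7746  # 0 dBu = 0.7746 Vrms (1 mW into 600 ohms)

    def dbu_to_vrms(dbu):
        """Convert a dBu level to Vrms."""
        return DBU_REF_V * 10 ** (dbu / 20)

    def db_ratio(v_high, v_low):
        """Voltage ratio expressed in dB."""
        return 20 * math.log10(v_high / v_low)

    nominal = dbu_to_vrms(4)    # ~1.228 V, the pro "0 VU" operating level
    ceiling = dbu_to_vrms(22)   # ~9.76 V, the +22 dBu input ceiling
    consumer_max = 5.0          # assumed clean ceiling of the unbalanced LFE out

    shortfall_db = db_ratio(ceiling, consumer_max)  # ~5.8 dB, call it 6 dB
    bits_lost = shortfall_db / 6.02                 # ~1 bit of resolution

    print(f"+4 dBu  = {nominal:.3f} Vrms")
    print(f"+22 dBu = {ceiling:.2f} Vrms")
    print(f"Headroom shortfall: {shortfall_db:.1f} dB (~{bits_lost:.1f} bit)")

Run it and you get right around 1.228 V, 9.76 V, and a 5.8 dB shortfall, which is where the "6 dB / 1 bit" figure comes from.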
This brings up another thing that is mis-stated all the time. You automatically lose 6 dB of headroom and gain going from balanced to unbalanced (unless you have a special cross-coupled output, in which case you just lose headroom and not gain :dumbcrazy ), not the other way around.
You’re “mixing and matching” the digital dBFS scale with the analog Vrms scale. 0 dBFS is the highest possible digital signal; there is no such thing as “+0dBFS.” If a component somehow adds some boost to the signal in the analog domain (i.e. after the digital-to-analog conversion), that’s of no relevance. The measurable-voltage signal at the AVR’s main pre outputs will reflect that, and any voltage measurement will still be valid.
I was just trying to show that 0 dBFS is not always something that scales properly with the rest of the signal. The voltage jump from -10 dBFS to 0 dBFS could be 10 dB, 16 dB, or 5 dB because of the 0 dBFS+ (I was improperly writing +0dBFS before) conditions present in all digital-to-analogue systems. 0 dBFS is absolute on the disc, but once it leaves the disc all bets are off. This is part of the reason digital test signals are sent at -20 dBFS to -10 dBFS.
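One classic way to see 0 dBFS+ behavior is a full-scale sine at a quarter of the sample rate with the samples landing between the true peaks. Here is a small Python sketch (assuming numpy and scipy are available; the signal choice is my illustration, not something from the thread) where no stored sample ever exceeds 0 dBFS, yet the reconstructed waveform peaks about 3 dB over:

    import numpy as np
    from scipy.signal import resample

    fs = 48000
    n = 4800                      # 0.1 s of signal
    t = np.arange(n) / fs

    # fs/4 sine, phase-shifted 45 degrees so the samples land at +/-0.707,
    # then normalized so the stored samples peak exactly at 1.0 (0 dBFS).
    x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)
    x /= np.abs(x).max()

    # Upsample 8x to approximate the reconstruction the DAC performs.
    y = resample(x, 8 * n)

    print(f"sample-domain peak: {np.abs(x).max():.3f} (0.0 dBFS)")
    print(f"reconstructed peak: {np.abs(y).max():.3f}")
    print(f"intersample over:  +{20*np.log10(np.abs(y).max()):.1f} dB over 0 dBFS")

The reconstructed peak comes out around 1.414, roughly +3 dB above full scale, which is exactly the kind of 0 dBFS+ condition that makes full-scale test tones unreliable once they hit the analog side.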
For a clean reference signal (assuming a fully digital connection up to the AVR), a -6 dBFS peak sine should avoid all the Nyquist headaches for a single-channel test. You would just bump the master volume +6 dB to compensate, since that is done after the digital section. For an LFE test you would want an identical -6 dBFS wave on the LFE, L, and R channels, and use bass management to send everything to the sub out. This will just fill the 5 dB of digital headroom in the DD processor and allow a full-strength signal out of the pre-out with the master volume at +6.
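Generating the -6 dBFS tone itself is trivial. Here is a minimal Python sketch (the 50 Hz frequency, 10-second length, and file name are my arbitrary choices; only the -6 dBFS level comes from the discussion above):

    import numpy as np
    from scipy.io import wavfile

    fs = 48000          # sample rate
    freq = 50           # Hz; a low frequency suits an LFE/sub test
    seconds = 10
    level_dbfs = -6     # peak level relative to full scale

    amplitude = 10 ** (level_dbfs / 20)        # -6 dBFS -> ~0.501 of full scale
    t = np.arange(int(fs * seconds)) / fs
    tone = amplitude * np.sin(2 * np.pi * freq * t)

    # Write as 16-bit PCM; scale to the int16 range.
    wavfile.write("test_tone_-6dBFS_50Hz.wav", fs, (tone * 32767).astype(np.int16))

Author the resulting file onto whichever channels you need for the test and let the receiver's bass management do the summing.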
This is without getting into distortion levels. For example, my AVR at 0 MV (corrected) will begin to audibly distort with a simple 0 dBFS (corrected) LFE-channel signal, without redirected bass, at a channel level gain of +4 dB of the +12 dB possible. You can test this with software that has loopback capability, or adjust the power amp's sensitivity so that it is only 80 dB in-room instead of 120-something. I also found that around +8 dB I started to get premature clipping from my amp. Unfortunately, you cannot register distortion/clipping with a DMM; it takes actual measurement.
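If you go the software loopback route, even a crude script can flag hard clipping in the capture. This is only a sketch (the file name is hypothetical, and it only catches flat-topped samples, not soft distortion, so it is no substitute for a real analyzer):

    import numpy as np
    from scipy.io import wavfile

    def count_clipped_runs(samples, threshold=0.999, min_run=3):
        """Count runs of >= min_run consecutive samples at/above the threshold."""
        hot = np.abs(samples) >= threshold
        runs, current = 0, 0
        for flag in hot:
            current = current + 1 if flag else 0
            if current == min_run:
                runs += 1
        return runs

    fs, data = wavfile.read("loopback_capture.wav")   # hypothetical file name
    data = data[:, 0] if data.ndim > 1 else data      # use the first channel
    if data.dtype == np.int16:
        data = data / 32768.0                          # normalize to +/-1.0

    print(f"suspected clipped passages: {count_clipped_runs(data)}")

A rising count as you push the channel trim up is a pretty clear sign you have found the clipping point, even before it is obviously audible.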
There is a long and established history on the home audio forums that trying to apply a pro-audio-styled gain structure protocol has caused a lot of confusion, if not out-and-out problems. As an example I recall at least one thread at AVS I came across while researching this, of a guy who had added a Behringer DCX2496 to his system. Following the pro audio protocol, he’d managed to get his levels hot enough to get a good reading on the DCX input meters (forget how he accomplished that). The result: An added 6 dB of noise (by his account), and problems clipping the inputs of his home audio amplifiers. Then there was the case I linked at the end of the article.
That is not how a professional would do it though. :bigsmile: There should be zero clipping. Avoiding clipping takes precedence over maximizing DNR. He bumped up his levels to get the most DNR out of the signal processor, only to overdrive his -10 dBV amp inputs. He should have accepted the DNR loss through the DCX, or added another line converter after it to reverse the one before.
I expect people have the good sense to know if they are getting distortion and can easily tweak things to get the desired results.
Those people wouldn't need to read this to know how to set things up.

The point of structured gain is to never reach the distortion/clipping level while also maximizing the DNR of the total system. You can stay clear of the clipping level and still have lowered your DNR.
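To make that concrete, here is a toy Python sketch. All of the stage names, ceilings, floors, and gains are invented for illustration, and noise is crudely modeled as whichever floor is worse rather than properly summed; the point is only to show how padding the signal down at one interface and making the gain back up later costs DNR even though nothing ever clips:

    def chain_dnr(signal_dbu, stages):
        """Track the signal level and the worst accumulated noise through a chain."""
        noise = -200.0                      # effectively no noise at the source
        for name, gain_db, ceiling_dbu, floor_dbu in stages:
            signal_dbu += gain_db
            noise = max(noise + gain_db, floor_dbu)   # stage floor dominates if higher
            if signal_dbu > ceiling_dbu:
                print(f"  {name}: CLIPPING ({signal_dbu:.1f} dBu > {ceiling_dbu} dBu)")
        return signal_dbu - noise           # DNR left at the end of the chain

    # (name, gain applied, clip ceiling, noise floor) -- assumed example values
    good = [("AVR pre-out", 0, +8, -90),
            ("processor", 0, +22, -90),
            ("amp input", 0, +8, -100)]
    starved = [("AVR pre-out", -12, +8, -90),
               ("processor", 0, +22, -90),
               ("amp input", +12, +8, -100)]

    print(f"sensible levels: {chain_dnr(0, good):.0f} dB DNR")
    print(f"starved then re-gained: {chain_dnr(0, starved):.0f} dB DNR")

Neither chain ever clips, but the second one gives up about 12 dB of DNR because the make-up gain at the end lifts the earlier stages' noise floor right along with the signal.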