Please use this thread for any comments or discussion about my article Gain Structure for Home Theater.
Regards,
Wayne
Quote:
"I apologize for the length of this, but I figure if you're going to scorch sacred cows you'd better have the documentation to back it up."

I missed the documentation. Where are the distortion, noise floor, and headroom graphs, and the information to back this method up?
You might want to add to this what the impact is of using an AVR with a built-in equalization system such as Audyssey. I've thought about this a little in terms of my Denon, even though I use external amplification only for the powered sub. It strikes me that the effects come from two areas that affect the point at which clipping might occur: the AVR may have different level settings for each speaker, and the static equalization may boost certain frequencies up to 12dB.
Quote:
"In Part 9, where you discuss how much headroom to allow, it would be nice to allow enough headroom for the maximum boost that the equalization system might introduce, e.g., 12 dB for Audyssey. This would ensure that no clipping would occur from the voltage peak. The speaker trim might come into play here, too. Obviously, if one has verified that clipping does not occur with the trims set to 0 dB, and the AVR's calibration sets a higher trim to achieve reference levels, clipping might now appear."

Yeah, good point about Audyssey. Didn't even consider it, because I don't use it. But I don't really see any issues with it WRT my home-grown gain-setting process. Maximum clean pre amp output is the maximum usable output, whether or not Audyssey's EQ is in the chain. I assume that the AVR manufacturers that incorporate Audyssey into their receivers have taken care of all that. I did note that if people were concerned about pre amp headroom when setting the amp gains, they can use a lower AVR volume setting than max. Seems to me that should be sufficient. :T
Quote:
"In Part 7, where you discuss measuring the maximum voltage from the AVR, the setting of the speaker trim might matter, especially if it was very negative. For purposes of the experiment, it might be best to set it to at least 0 dB, or perhaps to its maximum positive value."

Right. And I did recommend setting speaker levels to their highest setting in the 3rd paragraph under "How to determine your AVR's output voltage": "My AVR has all speaker-level settings referenced to the main left and right channels, which are fixed and cannot be adjusted in the menu. If your AVR allows for adjustments for the front left and right channels, they should be set to maximum for this exercise, as should the center channel and subwoofer if you intend to measure those too."
Quote:
"Related to this, adjusting the amplifier's gain to the highest level that avoids clipping at maximum signal affects how the AVR's volume setting is calibrated for reference levels. At this highest possible amplifier gain, you might have a situation where the AVR cannot set the speaker trim low enough to correctly calibrate the volume level."

I expect this would only be an issue if someone was mixing high- and low-efficiency speakers in their system. It would be the efficient speakers that would be dialed back in the AVR. But since efficient speakers don't need much power, it might not matter. If it did, dialing the AVR speaker levels back would mean the amp gains could be ratcheted up to compensate. Alternately, the speaker adjustments could just be performed via the amp gains instead of the AVR menu.
Quote:
"I missed the documentation. Where are the distortion, noise floor, and headroom graphs, and information to back this method up?"

Relevant graphs were presented in Parts Three and Five. Perhaps not everything you're looking for was included, but it's far more graphs and documentation than you'll find in the typical PA system gain structure article.
Quote:
"It is also possible/common to get higher than what should be 0dBFS out of, well, pretty much everything."

Well, it's the maximum a DVD disc itself will put out, as it's a digital medium and will not support a higher level without distortion. I assume the DVD manufacturers are aware of that and will make sure the audio on their discs does not go beyond 0 dBFS and distort. 0 dBFS is also the maximum a DVD player will output, since with a digital output it's passing the signal straight along to the AVR. I assume the same can be said about things like cable TV and sat receivers as well.
Quote:
"There are a couple of other explanations that seem a bit off as well, from a pro audio mindset."

And that's perfectly fine. This is not pro audio. This is home theater. The problem people have been having is thinking the "pro audio mindset" somehow became relevant to them when they added a piece of professional equipment to their home theater system. As thoroughly discussed in Part 2, a pro audio-styled gain structure protocol does not necessarily cross-reference to home audio.
Quote:
"This 'Make sure all speaker-level settings in the AVR's menu are set to max' is just bad advice. It is the flaw behind the whole thing. An AVR, like any other device, has a set operational range and headroom built in."

The "set speakers to max" thing is only for the purposes of setting the amp gains. After that you're supposed to adjust the relative speaker levels as it's normally accomplished, with the AVR's rotating pink noise sound, a calibration disc, etc. Headroom is restored when you turn everything back down and use the system as normal. You weren't running your system wide open before gain-structuring, and you won't be afterwards.
Quote:
"Relevant graphs were presented in Part Three. The 'sacred cow' I was referring to was the popular maxed-out signals canard. I'll go back and change the text to make sure the point is made."

I still do not follow, then. You say, which is true, that structured gain has nothing to do with maxed-out signals, and later add "Gain structure is merely an exercise to insure that the pro amps are getting enough signal to drive them to maximum output." This is not so accurate. The idea is to pass the cleanest signal possible along the signal chain. Live sound extends this to the amps, but that is to protect fidelity, and equipment as well. It is not for the sake of getting maximum output.
Quote:
"0dBFS is also the maximum a DVD player will output, since with a digital output it's passing the signal straight along to the AVR. I assume the same can be said about things like cable TV and sat receivers as well."

0dBFS on a disk has a fickle relationship with what you end up with out of the AVR analogue outputs. +0dBFS signals are very common, and are produced by the DAC conversion, bass management, and the master volume control, alone and as a group. There are also oddities with some devices and 0dBFS signals, where 0dBFS may not be as loud as signals just a few dB lower in intensity. Most every commercial CD released in the last half a dozen years can produce +0dBFS. It's even worse with most mp3s. No one is ripping DVDs in raw format, but I see no reason for it to be different.
Quote:
"And that's perfectly fine. This is not pro audio. This is home theater. The problem people have been having is thinking the 'pro audio mindset' somehow became relevant to them when they added a piece of professional equipment to their home theater system. As thoroughly discussed in Part 2, a pro audio-styled gain structure protocol does not necessarily cross-reference to home audio."

If you put a piece of differently referenced gear into your signal chain, it is now relevant. Structure, protocol, terms have nothing to do with it. It is basic electronics. Home gear runs on the exact same base principles pro gear does. Once you add a device that does not conform to the same default "home" reference, it pays to think about the things home audio takes for granted as a given.
Quote:
"The 'set speakers to max' thing is only for the purposes of setting the amp gains. After that you're supposed to adjust the relative speaker levels as it's normally accomplished, with the AVR's rotating pink noise sound, a calibration disc, etc. Headroom is restored when you turn everything back down and use the system as normal. You weren't running your system wide open before gain-structuring, and you won't be afterwards."

Starting at the end... If you look at Part 4 in the Setting Power Amplifiers section, you see that you need to identify the max clean signal from everything up until that point first. SOP. Just turning up the levels to max isn't the way to go. Most AVR outputs will be distorting and clipping before the max anyway, so it's kind of arbitrary. You will clip the amp before the input sensitivity voltage is reached. You are trying to get around clipping the amp by raising the AVR to levels it should never normally encounter, but doesn't this still leave the amp "with the gains being set higher than they need to be?" This leaves the amount of gain the AVR can add to a channel out of the mix. With 12 dB you should be all right, but what about models with only 6 dB?
But it’s imperative to have the maximum signal on tap when setting the amp gains. Otherwise you will end up with the gains being set higher than they need to be, which can lead to more noise from the amp than you’d get with lower gain settings.
Besides, my “max level” procedure for setting amp gains is virtually identical to what Rane outlines in their gain structure article, which I linked in Part 2. Have you read it?
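For readers who want to see the arithmetic behind setting amp gains from the maximum clean pre amp output, here is a minimal Python sketch. The two voltages are hypothetical illustration values, not figures from the article.

```python
import math

# Hypothetical example values (not from the article):
amp_sensitivity_v = 1.4    # input voltage that drives this amp to full rated power
preamp_max_clean_v = 2.0   # maximum clean pre amp output, measured as in Part 7

# If the pre amp's maximum clean output exceeds the amp's input sensitivity,
# the amp's gain control can be turned down by the difference in dB, which
# also reduces any noise contributed after the amp's gain stage.
attenuation_db = 20 * math.log10(preamp_max_clean_v / amp_sensitivity_v)
print(f"Amp gain can be reduced by about {attenuation_db:.1f} dB")
```

With these example numbers, the amp gain could come down roughly 3 dB while still reaching full output at the pre amp's maximum clean level.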
Quote:
"I still do not follow, then. You say, which is true, that structured gain has nothing to do with maxed-out signals, and later add 'Gain structure is merely an exercise to insure that the pro amps are getting enough signal to drive them to maximum output.'"

The discussion on the home audio forums, at least what I've seen in the past ten years, has been that pro gear requires a +4 dBu signal, and if you can't get that from your home theater pre amp/AVR you're going to have noise, reduced dynamic range, etc. In most cases it's possible to drive a pro amp with a consumer front end, especially if one is chosen with an ample sensitivity rating.
Quote:
"0dBFS on a disk has a fickle relationship with what you end up with out of the AVR analogue outputs. +0dBFS signals are very common, and are produced by the DAC conversion, bass management, and the master volume control, alone and as a group. There are also oddities with some devices and 0dBFS signals, where 0dBFS may not be as loud as signals just a few dB lower in intensity. Most every commercial CD released in the last half a dozen years can produce +0dBFS."

You're "mixing and matching" the digital dBFS scale with the analog Vrms scale. 0 dBFS is the highest possible digital signal; there is no such thing as "+0dBFS." If a component somehow adds some boost to the signal in the analog domain (i.e. after the digital-to-analog conversion), that's of no relevance. The measurable-voltage signal at the AVR's main pre outputs will reflect that, and any voltage measurement will still be valid.
Quote:
"If you put a piece of differently referenced gear into your signal chain, it is now relevant. Structure, protocol, terms have nothing to do with it. It is basic electronics. Home gear runs on the exact same base principles pro gear does. Once you add a device that does not conform to the same default 'home' reference, it pays to think about the things home audio takes for granted as a given."

There is a long and established history on the home audio forums that trying to apply a pro-audio-styled gain structure protocol has caused a lot of confusion, if not out-and-out problems, such as we see here in this post from another forum:
I was considering a DCX2496 but had worries about three things. First, the pro level; second, how to get a full signal to the DCX for good digitization (i.e. keep the volume control after the digitization to avoid digitizing a signal 50 dB below max) ...
Quote:
"Starting at the end... If you look at Part 4 in the Setting Power Amplifiers section, you see that you need to identify the max clean signal from everything up until that point first. SOP. Just turning up the levels to max isn't the way to go. Most AVR outputs will be distorting and clipping before the max anyway, so it's kind of arbitrary. You will clip the amp before the input sensitivity voltage is reached. You are trying to get around clipping the amp by raising the AVR to levels it should never normally encounter, but doesn't this still leave the amp 'with the gains being set higher than they need to be?' This leaves the amount of gain the AVR can add to a channel out of the mix. With 12 dB you should be all right, but what about models with only 6 dB?"

I expect people have the good sense to know if they are getting distortion and can easily tweak things to get the desired results. As I've noted more than once, anyone worried about the pre-outs clipping can easily use a lower setting for gain structuring.
Quote:
"What happens when the same AVR and amp are used in two setups, with one having low-sensitivity speakers and the other high? What happens when you switch out an amp of higher/lower power? Now add a pro EQ, and run through again? Which one was the optimal setup?"

I would hope that people would have the good sense to re-calibrate if they make equipment changes. Not sure why you expect that they wouldn't.
Quote:
"The discussion on the home audio forums, at least what I've seen in the past ten years, has been that pro gear requires a +4 dBu signal, and if you can't get that from your home theater pre amp/AVR you're going to have noise, reduced dynamic range, etc."

Unfortunately, it is true. That doesn't mean it can't work fine that way, but to say it is anything but true is just wrong. Of course, you have to understand what +4 dBu is.
Quote:
"You're 'mixing and matching' the digital dBFS scale with the analog Vrms scale. 0 dBFS is the highest possible digital signal; there is no such thing as '+0dBFS.' If a component somehow adds some boost to the signal in the analog domain (i.e. after the digital-to-analog conversion), that's of no relevance. The measurable-voltage signal at the AVR's main pre outputs will reflect that, and any voltage measurement will still be valid."

I was just trying to show that 0dBFS is not always something that scales properly with the rest of the signal. The voltage jump from -10dBFS to 0dBFS could be 10 dB, 16 dB, or 5 dB because of the 0dBFS+ (I was improperly using +0dBFS before) conditions present in all digital-to-analogue systems. 0dBFS is absolute on the disk, but once it leaves, all bets are off. This is part of the reason why digital test signals are sent at -20dBFS to -10dBFS.
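For anyone wanting to check the dBFS-to-voltage scaling being argued here, below is a small Python sketch of the ideal, linear case. The `dbfs_to_volts` helper and the 2.0 V full-scale figure are assumptions for illustration; real gear with 0dBFS+ intersample behavior will deviate from this.

```python
def dbfs_to_volts(dbfs, full_scale_v):
    """Analog output for a given digital level, assuming a perfectly
    linear DAC and output stage (i.e. no 0dBFS+ oddities)."""
    return full_scale_v * 10 ** (dbfs / 20)

fs = 2.0  # assumed pre out voltage at 0 dBFS, Vrms (hypothetical figure)
print(dbfs_to_volts(0, fs))    # full scale: 2.0 V
print(dbfs_to_volts(-10, fs))  # ~0.63 V: exactly a 10 dB step, if the chain is linear
print(dbfs_to_volts(-20, fs))  # 0.2 V: the usual digital test-signal region
```

In this idealized model the -10 dBFS to 0 dBFS step is always exactly 10 dB of voltage; the post above is pointing out that real converters and bass management can break that assumption.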
Quote:
"There is a long and established history on the home audio forums that trying to apply a pro-audio-styled gain structure protocol has caused a lot of confusion, if not out-and-out problems. As an example, I recall at least one thread at AVS I came across while researching this, of a guy who had added a Behringer DCX2496 to his system. Following the pro audio protocol, he'd managed to get his levels hot enough to get a good reading on the DCX input meters (forget how he accomplished that). The result: an added 6 dB of noise (by his account), and problems clipping the inputs of his home audio amplifiers. Then there was the case I linked at the end of the article."

That is not how a professional would do it, though. :bigsmile: There should be zero clipping. Clipping takes precedence over DNR. He bumped up his levels to get the most DNR out of the signal processor, only to overdrive his -10 dBV amp inputs. He should have accepted the DNR loss through the DCX, or added another line converter after it to reverse the one before.
Quote:
"I expect people have the good sense to know if they are getting distortion and can easily tweak things to get the desired results."

Those people wouldn't need to read this to know how to set things up.
Quote:
"Now let's say the balanced device has an input ceiling of +22dBu (very common), and your unbalanced out can only do 5v max clean (a common LFE out number). +22dBu is 10v, so you are 6dB closer to the noise floor than you should be, and have lost 1 bit of resolution."

I would suggest care in how you describe this. Talking about 1 bit of resolution in the analog domain is like talking about IRE in digital video. That bit is not getting lost as resolution, but the dynamic range is diminished by the equivalent effective range. It is picking nits, but Wayne's goal here is to have a reference document that is factually correct, and your posts are surely a big help in the constant process of revision and better targeting the discussion. It merits being correct and consistent in the application of terminology.
Quote:
"That is not how a professional would do it, though. :bigsmile: There should be zero clipping. Clipping takes precedence over DNR. He bumped up his levels to get the most DNR out of the signal processor, only to overdrive his -10 dBV amp inputs. He should have accepted the DNR loss through the DCX, or added another line converter after it to reverse the one before."

Remember that most of the people who will be making use of the document are not professionals but HT enthusiasts who need to better understand how to integrate consumer and pro equipment. The reason that this is all quite important is precisely that most do not understand gain and its relationship to SNR.
Quote:
"The point of structured gain is to never get to the distortion/clipping level, but also maximize DNR of the total system. You can miss the distortion/clipping level, and still have lowered your DNR."

Those people wouldn't need to read this to know how to set things up.
Quote:
"The issues with Pro signal processing gear added in is another nut altogether."

Basically you're re-stating the tired max-levels theory, which I've sufficiently debunked as bogus. Pro gear, just like consumer, can be operated at any signal level, and a high or low level does not change the noise floor of the component in question. The component's noise floor is fixed. It's not going to increase, even if you reduce the signal down to the level of background music, or even to zero.
Unfortunately, it is true [that operating pro gear at less than +4 will cause added noise and reduced dynamic range]. That doesn't mean it can't work fine that way, but to say it is anything but true is just wrong. Of course, you have to understand what +4 dBu is.
Most pro gear is set up to operate in its optimal range at 1.228 volts balanced (+4 dBu, or 0 VU). This is not the ceiling, though. It is not like 0dBFS on a DVD. This is the nominal signal. It would be like -20dB in the film world (-30dB is the HT equivalent).
Now let's say the balanced device has an input ceiling of +22dBu (very common), and your unbalanced out can only do 5v max clean (a common LFE out number). +22dBu is 10v, so you are 6dB closer to the noise floor than you should be, and have lost 1 bit of resolution.
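The dBu arithmetic above can be checked with a short Python sketch (the helper names are mine; 0 dBu is referenced to 0.775 Vrms):

```python
import math

DBU_REF_V = 0.775  # 0 dBu reference voltage, Vrms

def dbu_to_volts(dbu):
    """Convert a dBu level to Vrms."""
    return DBU_REF_V * 10 ** (dbu / 20)

def db_ratio(v1, v2):
    """Voltage ratio expressed in dB."""
    return 20 * math.log10(v1 / v2)

print(round(dbu_to_volts(4), 3))   # ~1.228 V: the pro nominal level cited above
print(round(dbu_to_volts(22), 2))  # ~9.76 V: the "+22dBu is 10v" ceiling, more precisely
print(round(db_ratio(dbu_to_volts(22), 5.0), 1))  # ~5.8 dB: shortfall vs a 5 V unbalanced out
```

So a 5 V unbalanced output falls roughly 6 dB short of a +22 dBu input ceiling, which is where the "6dB closer to the noise floor" figure comes from.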
Quote:
"For a clean reference signal (assuming a fully digital connection up to the AVR), a -6dBFS peak sine should avoid all the Nyquist headache for a single-channel test. You would just bump the master volume +6dB to compensate, as this is done after the digital section. For an LFE test you would want an identical -6dBFS wave on the LFE+L+R channels, and use bass management to send everything to the sub out. This will just fill the 5dB digital headroom in the DD processor, and allow a full-strength signal out of the preout with the master volume at +6."

It sounds like you're talking about your specific equipment, which doesn't really help the rest of us. My AVR's master does not have any +dB settings, only -dB settings. If you're talking about the internal settings for the various channels, there's no universal standard as to what those figures actually reference.
This is without getting into distortion levels. For example, my AVR at 0 MVL (corrected) will begin to audibly distort with a simple LFE-channel signal of 0dBFS (corrected), without redirected bass, at a channel level gain of +4dB of the +12dB possible.
Quote:
"This is without getting into distortion levels..."

Any measurements would only be relevant for the particular equipment being measured, i.e. a specific system. That's not particularly helpful to the big picture.
I also found that around +8dB I started to get premature clipping from my amp. Unfortunately, you cannot register distortion/clipping with a DMM. It requires measurement.
The point of structured gain is to never get to the distortion/clipping level, but also maximize DNR of the total system. You can miss the distortion/clipping level, and still have lowered your DNR.
Quote:
"Your posts are very helpful, so don't consider my comments to be purely critical. I simply want to make it more likely that the general reader gets the point."

I didn't find any problems with them. You are absolutely correct about the 1 bit in analog comparison. I use it to help explain the difference, as I find more people get the idea better that way, but normally I add a line about it not being technically correct.
Quote:
"Basically you're re-stating the max-levels theory. However, pro gear, just like consumer, can be operated at any signal level, and a high or low level does not change the noise floor of the component in question. The component's noise floor is fixed. It's not going to increase, even if you reduce the signal down to the level of background music, or even to zero."

I'm not talking about max-levels theory here, but it is similar. I was talking about a digital processor there, but let's touch the analog first.
Quote:
"EDIT: Additional testing confirmed that distortion from the pre amp outputs is a valid issue. Of primary concern, when the amplifier gains are set using the maximum pre amp output, the amplifier gains end up being set too low. This means the pre amp has to be turned up further to get the amps going, which most likely will lead to distortion from the pre amp during normal system operations. Appropriate additions have been made to Parts 7 and 9, taking distortion into account and how to avoid it."

This is a very good step in the right direction. I would also suggest doing the same thing to the sub channel. Depending on the AVR, things can be a bit different between the LFE out and the other channels.
Wayne A. Pflughaupt said:
"It will work just fine with a consumer signal level with no noise penalty. Any background noise it may have is fixed and will not change with signal levels, so choose your accessories carefully."

This is worded a little oddly. The component noise floor is static, but the component SNR and the total system noise floor are not. If you have a low signal voltage going into a digital processor (anything, really), the SNR will be lower when it hits the output stage than with a larger input signal. As a result, the next component in the chain will receive a signal with a lower SNR, and in the end less usable DNR.
Quote:
"I unplugged all my speakers and raised the L and R speaker levels on the AVR to max. I connected a DVD player with a digital coax connection and bypassed all tone and signal processing with the 'pure direct' feature. I played the 60 Hz sine wave track. I turned up my Yamaha and got to about -7, then it promptly shut off. The voltage reading was always about 2-2.4 volts when this happened."

This is normal. It is a little low for it to be happening, though. You have the AVR trying to put out too much voltage. I would suggest resetting all the channel levels in the AVR to 0, then recalibrating manually with an SPL meter, adjusting the amps themselves instead of the AVR. You should be able to run at ~+4dB with a 0dBFS sine wave for ~2 minutes before the cut-out. Then once it's working again, you can try to tweak things out if you want.
First off, should I be concerned about damaging the receiver? Why is it going into protection mode? Considering max volume is +16.5, I'm surprised that -7 volume is overloading it. I understand that I've also maxed out my L and R levels from 4.5 to 10, so that contributes. But I was really not expecting this shut down to happen. This receiver has given me no trouble before.
Quote:
"I rarely go past -18 on the Yamaha, and my front speaker levels are set at 4.5 on a scale of 1-10."

That may be part of the problem. I'd suggest setting the front L/R speakers as high as you can in the menu. Naturally, the rest of the speaker levels have to be considered, but I'd raise all the speaker levels as high as possible. For instance, take whichever speaker you have set highest now; say you have the mains set for 4.5 and the rears set for 7. Assuming all the speaker adjustments have the same 1-10 scale you mentioned, I'd raise the rears +3 dB to get them up to 10, and all the other speakers +3 dB as well (i.e. 4.5 becomes 7.5). Make sense? Don't worry about "maxing out any setting." It's a "chicken and egg" thing: lower per-channel settings in the menu will mean a higher master volume setting is required in regular use, while the reverse is true for higher per-channel menu settings. In the end, what the main volume control delivers is what matters.
Quote:
"I know I was testing with 60 Hz, but is it possible the Yamaha shut down because of its input limit, not output limit? Does its volume knob simply control its own input sensitivity, just like the knobs on the S-Convert and the Crown K1?"

Not likely. Typically input sensitivity controls, if they exist, are a separate adjustment. For instance, the old Yamaha DSP-A2070 5.1 integrated amp I used back in the Dolby Pro-Logic days had a menu function that could trim each input ±6 dB.
Quote:
"First off, should I be concerned about damaging the receiver? Why is it going into protection mode? Considering max volume is +16.5, I'm surprised that -7 volume is overloading it. I understand that I've also maxed out my L and R levels from 4.5 to 10, so that contributes."

No, you aren't going to damage anything; the protection mode will prevent that.
Quote:
"The S-Convert has a s/n ratio of 90 dB. Yamaha: 's/n ratio (IHF-A network), CD, etc. (effect off, 250 mV): 100 dB or more.' Crown: >100 dB."

The Crown's noise spec is also A-weighted, which is a disappointment. Still, in the end all that matters is whether noise is at what you'd consider an acceptable level.
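As a rough illustration of how individual S/N specs like those combine through a chain, here is a Python sketch. It assumes unity gain through the chain and uncorrelated noise, and it ignores weighting differences (A-weighted vs. unweighted); the function name is mine.

```python
import math

def combined_snr_db(snrs_db):
    """Worst-case combined SNR of a chain, assuming unity gain throughout
    so each component's noise adds as uncorrelated noise power."""
    total_noise = sum(10 ** (-snr / 10) for snr in snrs_db)
    return -10 * math.log10(total_noise)

# Specs quoted above: S-Convert 90 dB, Yamaha ~100 dB, Crown >100 dB.
print(round(combined_snr_db([90, 100, 100]), 1))  # ~89.2 dB
```

The takeaway is that the noisiest component dominates: the two 100 dB components only pull the 90 dB S-Convert down by about 0.8 dB.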
Quote:
"4) Maybe I simply need to get an RCA-to-TRS interconnect? It will balance the signal? But not boost it?"

No, it takes a passive transformer or active circuit to convert unbalanced to balanced. It can't be done with a cable.
Quote:
"The next step was running the 1 kHz sine wave through a speaker. I had to go through the S-Convert but left it at unity gain, and the Crown's sensitivity gain was just at the 2nd notch, the first notch that made the sine wave audible. I got to -34 on the AVR and just quit. It was loud enough to hurt my ears, but no harmonic overtone."

I assume you were trying to do the "clean voltage" test. The S-Convert could be used to reduce the signal to the amplifier further (beyond what you can get with just the amp's gain controls), so that the speakers wouldn't be playing so loud. This will have no effect on the "clean output" measurement you're after.
Quote:
"You are correct the noise floor of each individual component does not change. The input-to-output ratio (or gain) does, though. The idea is to have the input as high as possible to get a good SNR through the component."

You are confusing SNR (signal-to-noise ratio) with dynamic range.
Quote:
"If the signal input is too low, you lower this ratio on the output, and you can't get it back. This is the cause of the dreaded hiss. The idea is to have enough signal to avoid this noise creep."

Nope. The cause of the hiss is components in the signal chain with high background noise levels, IOW poor SNR.
Quote:
"This is worded a little oddly. The component noise floor is static, but the component SNR and the total system noise floor are not."

How do you propose we raise the input voltage? This is home audio, not pro audio. We don't have the pro audio luxury of input gain controls to improve the S/N of the signal source. Instead, our front-end signal levels are fixed by the source components and AVR. The only way to increase the signal level post-AVR is to use an external device, which will also boost any background noise from the AVR by whatever amount you boost the signal. Kinda nukes the increased S/N we're after right out of the hole.
<snip>
If you go back and raise the input voltage level 10dB, your -20dB average signal is now leaving the component at 40dB above the noise floor. When amplified, you end up with the same 80dB SPL average level with a 40dB SPL noise floor. This is why you start from the beginning and work down the chain.
Quote:
"Let's say you are using a signal with an average level of -20dB. You run it through a component using a low voltage level, and your -20dB is now only 30dB above the noise floor. When you amplify the signal in the next component, your -20dB signal is at, say, 80dB SPL, and the hiss from the processor is now amplified to 50dB SPL."

Exactly how and why is the signal automatically "amplified" as it passes from one component to the next? It won't be automatically amplified, and it most certainly should not be manually amplified. That would be contrary to standard gain structure protocol. According to most professional references, post-pre amp signal boosting is not recommended because it will increase the noise floor from the source component and pre amp. This is supported by the Rane article referred to in Part 2 (among other sources), which notes that the only gain changes that should be effected in downstream processors are those that counter what might come from the processor itself, like an overall change in signal strength from an equalizer, for instance. The signal is not, and indeed should not, be "amplified" from one component to the next. I think you'd be hard pressed to come up with any professional references that say otherwise.
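To make the dB arithmetic in this exchange concrete, here is a toy Python sketch; all numbers are invented for illustration. It shows both halves of the argument: gain applied after a component's fixed noise floor leaves the SNR unchanged, while a hotter signal into the component improves it. Whether home gear offers any way to apply that input boost is the separate question raised in the replies above.

```python
# Toy model (all values hypothetical). Levels in dB re: an arbitrary reference;
# the processor's own output noise floor is fixed.
signal_in = -20.0      # average program level entering a processor
proc_noise = -70.0     # the processor's fixed output noise floor
power_amp_gain = 30.0

# Gain applied *after* the processor raises signal and noise together,
# so the SNR delivered to the speakers is unchanged:
snr_after = (signal_in + power_amp_gain) - (proc_noise + power_amp_gain)
print(snr_after)  # 50.0 dB either way

# Raising the level *into* the processor by 10 dB (and trimming 10 dB later
# to keep playback level the same) lifts the signal away from the fixed floor:
snr_hotter_input = ((signal_in + 10) + power_amp_gain - 10) - (proc_noise + power_amp_gain - 10)
print(snr_hotter_input)  # improves to 60.0 dB
```

The arithmetic matches the numeric example in the posts above (30 dB vs. 40 dB above the floor); the practical dispute is only about where, in a home system, the extra input level could come from.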