Pardon my dumbness but I know little of Audyssey. Are these measurements taken with all speakers running, just the front L/R, or the front L/R one at a time?
And does the system apply EQ to each speaker individually?
It measures each speaker one at a time and applies a separate EQ to each speaker as needed.
In that case I expect this is the source of the problem where imaging is concerned. Back in the mid ‘90s when I moved up to high-end equalizers I learned pretty quickly that applying different filters to the left and right speakers above about 300-400 Hz whacked out imaging. I thought perhaps it was because I only had 1/3-octave equalizers at the time, but in recent years others here on this Forum have confirmed that they’ve had the same experience even with precise parametric equalization.
What a lot of people don’t know is that equalizers accomplish response changes by introducing phase shift at and around the filter’s center frequency setting. With analog equalizers this is accomplished with capacitors and inductors, and with digital equalizers it’s done with taps on a digital delay line. But the fact is, without phase shift equalizers would not work at all. So basically, equalizers use time domain changes to accomplish frequency domain changes.
So why did the mismatched filters mess with imaging? Because a phase change was introduced to one speaker in a certain frequency range, but not the other. Either speaker played independently would sound identical because frequency response was appropriately matched, but played together you get the phase differences and the resulting dislocalized imaging. This is why I’ve typically recommended that people use matching filters above ~300-400 Hz for the left and right speakers when equalizing manually.
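For anyone who wants to see this on paper, here’s a quick Python sketch. This is my own toy example using the well-known Robert Bristow-Johnson “Audio EQ Cookbook” peaking (bell) filter, not anything from Audyssey’s actual processing, but the behavior is generic to this filter type:

```python
import numpy as np
from scipy.signal import freqz

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ Audio EQ Cookbook peaking (bell) filter coefficients."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 48000
# A single +6 dB bell filter at 1 kHz, Q = 2
b, a = peaking_biquad(f0=1000, gain_db=6, q=2, fs=fs)

# Magnitude and phase at and around the 1 kHz center frequency
freqs = [500, 700, 1000, 1400, 2000]
_, h = freqz(b, a, worN=freqs, fs=fs)
for f, resp in zip(freqs, h):
    print(f"{f:5d} Hz: {20 * np.log10(abs(resp)):+5.2f} dB, "
          f"phase {np.degrees(np.angle(resp)):+6.1f} deg")
```

The phase passes through zero right at the center frequency but swings through double-digit degrees on the skirts either side of it. Put that filter on one channel and not the other, and that skirt phase shift becomes a left/right interchannel mismatch in exactly the band where the filter is “working.”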
The next issue I see, unfortunately, is the wonders of modern computing power. Before the advent of cheap processing, room analysis and equalization were limited to 1/3-octave resolution from both hardware RTA devices and equalizers. That seems quaint and outdated these days: Chris Kyriakakis claims Audyssey MultEQ XT can introduce hundreds of filters, and MultEQ XT32 thousands, to “make corrections to narrow peaks and dips in response,” in his words.
But is this really necessary? Anyone who’s ever had hands-on use of a good parametric EQ (or possibly even a cheap one) will tell you that filters with tiny bandwidth and gain settings get you nothing audible with a program signal. Has Mr. Kyriakakis spent more time in a computer lab than hands-on with actual hardware in a playback system? As I mentioned years ago in my minimal EQ article, what’s the point of peppering the signal chain with a bunch of inaudible filters?
Audyssey utilizes FIR filters, which are designed to address time domain (phase) issues as well as frequency response issues, but the phase issues the filters address are from the room. Maybe I don’t know enough about EQ filtering (very likely), but filters that address phase generated by the room (which isn’t necessarily all bad, by the way) seem to be of diminished effectiveness if they in turn introduce phase of their own.
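To illustrate the distinction (a generic sketch; I have no inside knowledge of Audyssey’s actual filter design, and their filters may well not be linear phase): a symmetric, linear-phase FIR can reshape magnitude while delaying every frequency by the same amount, so it adds no frequency-dependent phase of its own:

```python
import numpy as np
from scipy.signal import firwin2, group_delay

fs = 48000
ntaps = 511  # odd length, symmetric taps -> exactly linear phase

# Toy "correction" target: a broad +4 dB boost centered around 1.2 kHz.
# (Purely illustrative -- not Audyssey's actual correction curve.)
boost = 10 ** (4 / 20)
freq = [0, 400, 800, 1600, 3200, fs / 2]
gain = [1, 1, boost, boost, 1, 1]
taps = firwin2(ntaps, freq, gain, fs=fs)

# A symmetric FIR delays every frequency by the same (ntaps-1)/2 samples,
# i.e. it changes magnitude without adding frequency-dependent phase.
w, gd = group_delay((taps, [1.0]), w=[100, 500, 1200, 5000, 10000], fs=fs)
print(gd)  # ~255 samples at every frequency (about 5.3 ms at 48 kHz)
```

The tradeoff is that constant bulk delay (several milliseconds here, and far more for filters reaching down into the bass), which is part of why real-world room correction products make compromises between phase behavior, latency, and filter resolution.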
Wayne’s research from his excellent Audyssey MultEQ FAQ and Setup Guide has shown that multiple spread-out mic locations might give improved frequency response over a broad area, but that imaging suffers as a result. I expect the cause of the poor imaging is that this approach results in drastically mismatched filtering between the left and right channels (someone could probably loop an Audyssey-equipped AVR through REW and confirm this).
Thus Wayne’s recommendation on tight mic spacing when running Audyssey is a step in the right direction: Frequency response isn’t going to change significantly a few inches this way or that, at least above ~500 Hz. As a result, Audyssey probably performs less aggressive equalizing – hopefully only a few dozen filters instead of hundreds or thousands – and the result is filters that are better matched between the left and right speakers.
But is even this tight mic spacing necessary? Try this little test: Turn off the sub, disconnect the left or right speaker, play a broadband pink noise signal through the remaining one, and move your head a few inches away from, and around, the dead-center position – i.e. places where you might locate the Audyssey mic for a tight measurement pattern. Do you hear any audible change in the way the speaker sounds? Not likely, or if so not enough to matter. How ‘bout when you move from the center seat on the sofa to the left or right? Probably the same. (Note, we only use one speaker for this because with both you’d hear a change with a mono pink noise signal once moved off-axis from dead center, due to timing [phase].)
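If you need a pink noise test signal and don’t have one handy, it’s easy to cook up yourself. Here’s a minimal sketch using the FFT method (shape white noise by 1/√f so power falls 3 dB per octave); writing the result out to a WAV file is left to whatever tool you prefer:

```python
import numpy as np

def pink_noise(n, fs=48000, seed=0):
    """Generate n samples of pink (1/f power) noise, normalized to +/-1."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    freqs[0] = freqs[1]            # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)     # amplitude ~ 1/sqrt(f) -> power ~ 1/f
    x = np.fft.irfft(spectrum, n)
    return x / np.max(np.abs(x))   # normalize to full scale

x = pink_noise(10 * 48000)  # ten seconds of pink noise at 48 kHz
```

Pink noise has equal energy per octave, which is why it’s the usual choice for this kind of by-ear listening test: every part of the audible range is represented at a perceptually sensible level.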
The fact is that measurement mics are more sensitive to subtle location changes than the ear. And thanks to the power of cheap processing, that hypersensitivity can be translated to scary graphs that show visible changes in frequency response that the ears simply ignore. Or to a processor that generates “correction” filters to address problems that aren’t audible (introducing phase in the process). So even with the tight spacing technique, with the mic at locations where you can’t hear a difference yourself, Audyssey most likely is still introducing filtering that offers no audible benefit – at least some of it, and probably a lot of it.
So the single-location approach to Audyssey and similar systems makes perfect sense to me. And the experiences of Wayne in his exhaustive evaluations and Talley’s experiences in this thread seem to bear this out.
I’m not saying here that Audyssey and similar systems are overblown and useless, only that some people’s experiences with them seem to reinforce the idea that minimal EQ is more effective than hundreds or thousands of filters.
Wayne, for your next test maybe you can compare Audyssey or Dirac to manual equalization with a first-class parametric EQ and see if the former really delivers better results, either with response or imaging. :T
Regards,
Wayne