
Home Theater Shack 2015 High-End Amplifier Evaluation Event Reporting and Discussion Thread

#1 ·







This thread is a continuation of the High-End Amplifier Evaluation Event Preparations Thread previously under way.



The event has begun. Coming to you from southern Alabama, the Home Theater Shack Evaluation Team has assembled at Sonnie Parker's Cedar Creek Cinema for the 2015 High-End Amplifier Evaluation Event. We have amps, we have speakers, we have tunes, we have great eats, what more could one ask for?

Be reminded of the first law of audio evaluation event execution. They never go exactly as planned. Not everything gets there, not everything works, but you endeavor to persevere and get things done.

We have dealt with speakers not able to reach us in time, with cabling issues, with equipment not interfacing properly, a laptop crash, with hums and buzzes and clicks and pops, with procedural questions - - - yet we forge ahead, adapt, evolve, redirect, and forge ahead some more - - - and the task of evaluating amplifiers is underway.

Speakers: We were unable to get the Chane A5rx-c and the Acoustic Zen Crescendo Mk II speaker pairs. We are running the Spatial Hologram M1 Turbo v2 and the Martin Logan ESL. Both are very revealing speakers, baring a lot of inner detail in our recordings. They will serve us well. The A5rx-c will be reviewed for HTS when available.

At the moment, the Holograms are serving as our primary evaluation tool. I will post setup details and interesting discoveries a little later. They are giving us a monstrous soundstage, the kind that eats small animals for breakfast, with extremely sharp imaging and very good depth acuity. They are extremely clear, getting into the realm of rivaling electrostatic transparency. Their in-room response is very good, with some expected peaks and dips, but still very listenable. The high frequency response is extended and smooth. The bass gives you that "Are you sure the subs are not on?" feeling on deeper tracks.

We decided to start with sighted comparisons and open discussion today, and blind tests tomorrow. The Audyssey XT32 / Dirac Live comparison has not been completed yet.

Have we heard differences? Yes, some explainable and some not. One amp pairing yielded differences that several evaluators are convinced they could pick in a blind AB test.

One thing I have learned for sure: The perfect complement to good southern barbeque is a proper peach cobbler. Add great company and you have a perfect get-together.

The Event
  • Date: Thursday evening, March 12th through Saturday evening, March 14th.
  • Place: Cedar Creek Cinema, Alabama, hosted by Sonnie, Angie, and Gracie Parker.
  • Evaluation Panel: Joe Alexander (ALMFamily), Leonard Caillouet (lcaillo), Dennis Young (Tesseract), Sonnie Parker (Sonnie), Wayne Myers (AudiocRaver).

The Amplifiers
  • Behringer EP2500
  • Denon X5200 AVR
  • Emotiva XPA-2
  • Exposure 2010S
  • Krell Duo 175
  • Mark Levinson 532H
  • Parasound HALO A31
  • Pass Labs X250.5
  • Sunfire TGA-7401
  • Van Alstine Fet Valve 400R
  • Wyred 4 Sound ST-500 MK II
The Speakers
  • Spatial Hologram M1 Turbo v2, courtesy Clayton Shaw, Spatial Audio
  • Martin Logan ESL
Other key equipment special for the event:
  • Van Alstine ABX Switch Box, recently updated version (February 2015)
  • miniDSP nanoAVR DL, courtesy Tony Rouget, miniDSP
  • OPPO BDP-105

As mentioned, our deepest appreciation goes to Sonnie, Angie, and Gracie Parker, our hosts, for welcoming us into their home. Look up Southern Hospitality in your dictionary, and they are (or should be) listed as prime role models thereof.

This first posting will be updated with more info and results, so check back from time to time.




Amplifier Observations
These are the observations from our notes regarding what we heard, supported by being consistent between sighted and blind testing and across reviewers. While we failed to identify the amps in ABX testing, the raw observations from the blind comparisons did in some cases correlate with the sighted observations and with the observations of other reviewers. Take these reports for what they are: very subjective assessments and impressions which may or may not be accurate.


Denon X5200 AVR

Compared to other amps, several observations were consistent. The Denon had somewhat higher sibilance, was a bit brighter, and while it had plenty of bass it was noted several times to lack definition found in other amps. At high levels, it did seem to strain a bit more than the other amps, which is expected for an AVR compared to some of the much larger amps. Several times it was noted by multiple reviewers that it had very good detail and presence, as well as revealing ambiance in the recordings.

We actually listened to the Denon more than any other amp, as it was in four of the blind comparisons. It was not reliably identified in general, so one could argue that it held its own quite well, compared to even the most expensive amps. The observations from the blind comparisons that had some common elements either between blind and sighted comparisons or between observers are below. The extra presence and slight lack of bass definition seem to be consistent observations of the Denon AVR, but everyone agreed that the differences were not a definitive advantage to any one amp that would lead us to not want to own or listen to another, so I think we can conclude that the Denon held its own and was a worthy amp to consider.

Compared to Behringer
- bass on Denon had more impact than Behr, vocals sounded muted on Behr
- vocals sounded muted on ML compared to Denon
- Denon: crisp highs preferred compared to Behringer which is silky.
- Denon is more present, forward in mids and highs than Behringer.

Compared to Mark Levinson
- Denon seemed to lack low end punch compared to ML.
- Denon is smooth, a certain PUSH in the bass notes, cellos & violins sounded distant, hi-hat stood out, distant vocal echo stood out, compared to ML.
- Denon bass seemed muddy compared to ML which is tighter.
- ML more distant strings than Denon.
- Denon is slightly mushy and fat in bass. String bass more defined on ML.
- ML seems recessed compared to Denon.

Compared to Pass
- vocals sounded muffled on Pass compared to Denon
- crisp bass on Denon compared to Pass
- Denon & Pass both even, accurate, transparent, natural, no difference, like both
- Pass seems soft on vocals but very close.
- Denon has a bit more punch on bottom, maybe not as much very deep bass, more mid bass.

Compared to Van Alstine
- bass on Chant track was crisp for VA while Denon was slightly sloppy
- sibilance not as pronounced on VA as it was on Denon
- VA super clarity & precision, detailed, space around strings, around everything compared to Denon which is not as clear, liked VA better.
- sibilance on Denon, VA has less “air” but more listenable, both very good
- Very deep bass more defined on VA, overall more bass on Denon.


Wyred 4 Sound ST-500 MK II

In the sighted listening we compared the ST-500 MK II to the Van Alstine Fet Valve 400R. The assessments varied but were generally closer to no difference. The Van Alstine got comments of being fatter on the bottom. The Wyred 4 Sound was noted to have slightly better bass definition but apparently less impact there, and slightly less detail in the extreme highs. Most comments noted little, if any, difference in the midrange. An interesting observation here was by Wayne, noting that he did not think he would be able to tell the difference in a blind comparison. Considering the ST-500 MK II is an ICE design and the Fet Valve 400R is a hybrid, we expected this to be one of the comparisons that would yield differences, if any. As I am always concerned about expectation bias, this was one that I was particularly concerned with. Van Alstine is a personal favorite for a couple of us, so I expected a clear preference for it to be present in the sighted comparison. I felt that the Wyred 4 Sound amp held its own against the much more expensive and likely-to-be-favored VA.

In the blind comparisons, we compared the ST-500 MK II to the Emotiva XPA-2 and the Sunfire TGA-7401 in two separate sessions. Of course, in these sessions we had no idea what we were listening to until after all the listening was done. In the comparison to the Emotiva, some notes revealed not much difference and that these were two of the best sounding amps yet. The ST-500 MK II was noted to have the best midrange yet, along with the Emotiva. It was described as having less sibilance than both the Emotiva and Sunfire. Both the Emotiva and the ST-500 MK II were described as unstrained in terms of dynamics. In comparison to the Emotiva it was noted to have solid highs, lively dynamics, rich string tones, and punch in the bass. The overall preference in comparison to the Emo ranged from no difference to preferring the W4S.

In comparison to the Sunfire, comments ranged from preference for the W4S to not much difference to preference for the Sunfire. The Sunfire was described as having more presence in the midrange, while the Wyred was noted to be shrill, lifeless, and hollow by comparison.

These comments varied a lot, but the points of convergence were generally around the similarities among three amps that would be expected to be most likely to differ, if we found any differences at all. The objective result is that we failed to identify the amp in ABX comparisons to two other much more expensive amplifiers. I would have to conclude that, based on the results, the ST-500 MK II represents one of the best values and certainly should satisfy most listeners.





Audyssey XT32 vs. Dirac Live Listening Comparison

Last year HTS published a review of the miniDSP DDRC-22D, a two-channel Dirac Live Digital Room Correction (DRC) product. The review included a comparison to Audyssey XT. A number of readers requested a comparison of Dirac Live with Audyssey XT32. That comparison was recently completed during the Home Theater Shack High-End Amplifier Evaluation Event at Sonnie Parker's Cedar Creek Cinema in rural Alabama. This report provides the results of that comparison.

Go to the Audyssey XT32 vs. Dirac Live Listening Comparison Report and Discussion Thread.


Spatial Hologram M1 Turbo Speakers

I was very pleased with the Spatial Hologram M1 speakers we used for the amplifier evaluation, and felt that they more than fulfilled our needs. They did not become "gotta have them" items for any of the evaluators, although I had thoughts in that direction once or twice. But they were speakers we could easily ignore through the weekend. I mean this as a high compliment. Never did an evaluator complain that the M1 speakers were "in the way" or "holding us back," and we were able to focus on the task at hand unhindered. That alone means a lot, and may say more about them than the rest of the review just completed.

Here is what they did for us:
  • Because of their high efficiency, amplifiers were not straining to deliver the volumes we called for. We could be confident that the amps were operating in their linear ranges and that if we heard a difference it was not due to an amp being overdriven.
  • The stretched-out soundstage opened up a lot of useful detail for us to consider in our evaluations. In discussing the soundstage at one point, there was a consensus that it might be stretched a little too far and might be "coming apart at the seams," showing some gaps, although this did not hinder our progress. My final assessment is that this was not the case, all due respect to the fine ears of the other evaluators. I elaborate on this point in the M1 Review.
  • They served well as a full-range all-passive speaker, able to reach deep and deliver 40 Hz frequencies with lots of clean "oomph," all without the need for DSP boosting and without subwoofer support.
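The efficiency point in the first bullet can be put into rough numbers. Here is a minimal sketch of the sensitivity arithmetic, using illustrative sensitivity figures (not the M1's published spec) and ignoring distance and room gain:

```python
def required_power_watts(sensitivity_db_1w: float, target_spl_db: float) -> float:
    """Amplifier power needed to hit a target SPL at 1 m, given
    speaker sensitivity in dB SPL at 1 W / 1 m. Every +10 dB of
    target SPL costs 10x the power."""
    return 10 ** ((target_spl_db - sensitivity_db_1w) / 10)

# An efficient (hypothetical) 92 dB/W speaker needs only 10 W for 102 dB peaks:
print(required_power_watts(92, 102))   # 10 W
# A less efficient 85 dB/W speaker needs about 50 W for the same peaks:
print(required_power_watts(85, 102))   # ~50 W
```

This is why high-efficiency speakers keep every amp under test comfortably in its linear range: even modest amps have tens of watts of clean headroom at these levels.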
I thoroughly enjoyed spending time with them, and wish to again thank Clayton Shaw of Spatial Audio for loaning them to us. A complete review of the M1 speakers has been posted.

Go to the Spatial Hologram M1 Turbo Version 2 Speaker Review.


A Soundstage Enhancement Experience

Sonnie's MartinLogan ESL hybrid electrostatics were set up very nicely when we arrived, so we avoided moving them through the weekend. There were some improvements made to the soundstage and imaging by way of treatments, and some interesting twists and turns along the way which turned out to be very informative.

I have documented the exercise in a separate post.

Go to the Soundstage Enhancement Experience thread.
 
#181 ·
I think it really depends on how one listens and the speakers. Frankly, I have found my Onkyo receiver to be fine for my HT system, but I rarely listen very loud. I have an amp driving the subs and can play the system loud enough that the rattles in the house become the limiting factor. I run the 2 channel system full range, so it would be more of an issue, I think, but I have components for that system anyway. At some point I want to do some blind comparisons with the Onkyo and my components on that system.
 
#182 · (Edited)
I agree the speakers are the deal. I A/B'd my Arx1'c with my Klipsch KSF 8.5 speakers & the power requirement was bigger than I expected. And those are the bookshelves...the A5's may need even more. If I remember correctly, we used a 100 watt amp in the 1st eval, and that seemed to drive them well.
 
#184 ·
Leonard has the patience of a Tibetan Monk, only he's a lot funner to be around. (All due respect to Tibetan Monks - actually I have never met one, they might be quite the party boys, although somehow I doubt it.)

I sent him mdat files containing the Behringer amp data. So much to keep track of, I probably missed them before somehow.

Thanks, Leonard, for sifting through all that data. Fantastic job!
 
#189 ·
Did y'all see this?

Denon X5200 AVR
Compared to other amps, several observations were consistent. The Denon had somewhat higher sibilance, was a bit brighter, and while it had plenty of bass it was noted several times to lack definition found in other amps. At high levels, it did seem to strain a bit more than the other amps, which is expected for an AVR compared to some of the much larger amps. Several times it was noted by multiple reviewers that it had very good detail and presence, as well as revealing ambiance in the recordings.

We actually listened to the Denon more than any other amp, as it was in four of the blind comparisons. It was not reliably identified in general, so one could argue that it held its own quite well, compared to even the most expensive amps. The observations from the blind comparisons that had some common elements either between blind and sighted comparisons or between observers are below. The extra presence and slight lack of bass definition seem to be consistent observations of the Denon AVR, but everyone agreed that the differences were not a definitive advantage to any one amp that would lead us to not want to own or listen to another, so I think we can conclude that the Denon held its own and was a worthy amp to consider.

Compared to Behringer
- bass on Denon had more impact than Behr, vocals sounded muted on Behr
- vocals sounded muted on ML compared to Denon
- Denon: crisp highs preferred compared to Behringer which is silky.
- Denon is more present, forward in mids and highs than Behringer.

Compared to Mark Levinson
- Denon seemed to lack low end punch compared to ML.
- Denon is smooth, a certain PUSH in the bass notes, cellos & violins sounded distant, hi-hat stood out, distant vocal echo stood out, compared to ML.
- Denon bass seemed muddy compared to ML which is tighter.
ML more distant strings than Denon.
- Denon is slightly mushy and fat in bass. String bass more defined on ML.
- ML seems recessed compared to Denon.


Compared to Pass
- vocals sounded muffled on Pass compared to Denon
- crisp bass on Denon compared to Pass
- Denon & Pass both even, accurate, transparent, natural, no difference, like both
- Pass seems soft on vocals but very close.
- Denon has a bit more punch on bottom, maybe not as much very deep bass, more mid bass.


Compared to Van Alstine
- bass on Chant track was crisp for VA while Denon was slightly sloppy
- sibilance not as pronounced on VA as it was on Denon
- VA super clarity & precision, detailed, space around strings, around everything compared to Denon which is not as clear, liked VA better.
- sibilanceon Denon, VA has less “air” but more listenable, both very good
- Very deep bass more defined on VA, overall more bass on Denon.



Read more: http://www.hometheatershack.com/for...eporting-discussion-thread.html#ixzz3YFvULf3M
 
#190 ·
I have been having some issues with editing the first post, so I'll start posting more results here, and when we sort it out, we will include them there as well.

Wyred 4 Sound ST-500 MK II

In the sighted listening we compared the ST-500 MK II to the Van Alstine Fet Valve 400R. The assessments varied but were generally closer to no difference. The Van Alstine got comments of being fatter on the bottom. The Wyred 4 Sound was noted to have slightly better bass definition but apparently less impact there, and slightly less detail in the extreme highs. Most comments noted little, if any, difference in the midrange. An interesting observation here was by Wayne, noting that he did not think he would be able to tell the difference in a blind comparison. Considering the ST-500 MK II is an ICE design and the Fet Valve 400R is a hybrid, we expected this to be one of the comparisons that would yield differences, if any. As I am always concerned about expectation bias, this was one that I was particularly concerned with. Van Alstine is a personal favorite for a couple of us, so I expected a clear preference for it to be present in the sighted comparison. I felt that the Wyred 4 Sound amp held its own against the much more expensive and likely-to-be-favored VA.

In the blind comparisons, we compared the ST-500 MK II to the Emotiva XPA-2 and the Sunfire TGA-7401 in two separate sessions. Of course, in these sessions we had no idea what we were listening to until after all the listening was done. In the comparison to the Emotiva, some notes revealed not much difference and that these were two of the best sounding amps yet. The ST-500 MK II was noted to have the best midrange yet, along with the Emotiva. It was described as having less sibilance than both the Emotiva and Sunfire. Both the Emotiva and the ST-500 MK II were described as unstrained in terms of dynamics. In comparison to the Emotiva it was noted to have solid highs, lively dynamics, rich string tones, and punch in the bass. The overall preference in comparison to the Emo ranged from no difference to preferring the W4S.

In comparison to the Sunfire, comments ranged from preference for the W4S to not much difference to preference for the Sunfire. The Sunfire was described as having more presence in the midrange, while the Wyred was noted to be shrill, lifeless, and hollow by comparison.

These comments varied a lot, but the points of convergence were generally around the similarities among three amps that would be expected to be most likely to differ, if we found any differences at all. The objective result is that we failed to identify the amp in ABX comparisons to two other much more expensive amplifiers. I would have to conclude that, based on the results, the ST-500 MK II represents one of the best values and certainly should satisfy most listeners.
 
#193 ·
I have a few notes left, but there is not a lot of correlation left to justify saying much in the same way that I have for the ones thus far. I may take my observations and put them in a new post dedicated to purely subjective and uncorrelated comments. There are a couple of things left to add on the blind listening, but frankly I was hoping to see some discussion of what I posted, to help determine what people are interested in or think is valuable from the stuff that we already have. There has not been much discussion at all since we posted what we did.
 
#194 ·
Sufficiently stimulated by Leonard's above comments, I will try to further this discussion......

1. It seems in this comparison that there are small differences between amps, some of which may be consistently observed and thus may be preferred by certain listeners who wish their amps to "perform" in certain ways (better bass control, more detailed midrange, etc.).....it may help others trying to audition or purchase said amps if multiple listeners noticed a consistent feature associated with them to use as a criterion for purchase, if possible.....besides the comments made for the Denon and the Wyred 4 Sound amps (and indirectly the Van Alstine), did any other aspects of the other amps on sighted evaluation stick out?

2. It seems the need for headroom or increased power does come into play at louder volumes approaching reference levels and beyond, especially with less efficient speakers. It also seems at lower volumes this need becomes less so. Is this kosher? Those of us just listening at moderate to low volumes can probably use the Denon to excellent effect and obviate the need for an additional amp. Those wanting to experience "live venue" sound or "movie theater" volumes probably need an amp with more power....which any of the other amps in this comparison could provide...

3. With the use of powered subwoofers in multichannel audio systems and home theater applications lowering the power need for the rest of the system, is an amp providing multiple hundreds of watts/channel really needed? By the by, how much IS the power lowered in a system by using a subwoofer to reproduce the lower octaves?
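On that last question, a back-of-the-envelope estimate is possible if one assumes (a big assumption) a pink-noise-like spectrum, where each octave carries equal average power. Real program material varies widely, so treat this strictly as a sketch:

```python
import math

def bass_power_fraction(xover_hz: float = 80.0,
                        lo_hz: float = 20.0,
                        hi_hz: float = 20000.0) -> float:
    """Fraction of total average power below the crossover, assuming
    a pink-noise spectrum (equal power per octave) from lo to hi."""
    return math.log2(xover_hz / lo_hz) / math.log2(hi_hz / lo_hz)

frac = bass_power_fraction()                  # ~0.20: 2 of ~10 octaves
relief_db = 10 * math.log10(1 / (1 - frac))   # ~1 dB less average demand
print(frac, relief_db)
```

Under this assumption, crossing to a sub at 80 Hz only buys the main amp about 1 dB of average-power relief. Bass transients can demand far more than the average, though, so the peak-power relief in practice is often much larger than this figure suggests.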

Hope this helps to reinvigorate this thread....
 
#195 ·
I don't see consistency in the comments indicating any particular sound signature for any particular amplifier. If part of the test had included switching the same amplifier against itself, comments about differences heard in that scenario would be particularly interesting, as would comments indicating that the same amplifier switched against itself sounds exactly the same. If the amplifier compared to itself sounded different to the listeners, then clearly expectation bias is affecting what is being heard. If that amplifier compared to itself was deemed identical, but differences were identified when compared to other amps, then clearly there are differences, and that would be a very interesting observation. My conclusion from the published notes is that, in a real-world listening environment, if sound quality improvement is the goal of spending money, then the money is best spent on speakers or possibly acoustic treatments.
 
#197 ·
Charlie, I agree wholeheartedly with your comments, and, yes, there does not seem to be any consistent comments about a "sound signature" with any of the amps noted ......just wondering if there were any consistencies with any of the other amps not already discussed in detail...
 
#198 ·
Hence, my problem: what to report about the experience with each amp. The comments about what we heard don't have much consistency left. Yet there were some consistent comments across the sighted and blind tests and between listeners. These are mostly for the amps where there were multiple comparisons. I think this points to the need for more extensive listening and more trials on a particular amp to identify any potential differences.

So are the uncorrelated comments on each amp of value? Considering how things are taken out of context on the internet, I am very hesitant to publish them, as those who have an agenda to promote could easily use them for such, and attribute statements to us that, out of the context of the whole event, may not reflect our intent.
 
#199 ·
Totally understand your comments, Leonard.....too bad everyone couldn't take off work for 2 weeks to really flesh out any consistencies between amps by having those multiple trials....think such a study would qualify for a government grant under the auspices of audio health??
 
#200 ·
I truly enjoyed this evaluation.
It was (is) an undertaking of significant efforts from all of the participants and those efforts are fully appreciated.
If Sonnie's sell-off of gear is any indicator... he took on significant $$$$ to get this accomplished, so special thanks goes to him.
I can only guess that report writing straws were drawn and you ended up with the short straw because I cannot see anyone willingly saying, "Oh oh me me, please let me write it up".
This is pretty much an impossible task, which I think you have handled quite nicely.
Yes, I am curious to know every comment that was made. But I was not there, I heard none of it. One off comments with no other correlations serve little purpose.
I would let the published results stand as is, and be proud of the effort.
 
#203 ·
Your prior evaluation event threads have been beyond outstanding.

What happened? Were you guys issued a legal gag order? 21 pages of nothing conclusive, but hints of "both sides are right" talk.

Can you at least post the cumulative measured response graphs together on one graph for us, pretty please? I appreciate your efforts. This thread ended up a real disappointment for me. :(

I wish you would have tested inefficient bookshelves as I suggested in the preparation thread. Both sets of speakers were high efficiency.

Please give us the data conclusive or not. What are you waiting for? :huh: Does this mean this blind test resulted in a statistical null result?
 
#205 ·

I think this is being kinda rough on these guys.
This evaluation produced the same results as ALL other level matched amplifier evaluations.
Since you are disappointed/dissatisfied with it, perhaps you will put together your own amplifier comparison and post the methods/results here for critique.
 
#204 ·
See post 177. There were virtually no measurable differences that amounted to anything in either frequency response or impulse response. I have spent hours trying to find something and it just is not there. I really expected to find something in the impulse response, but it was actually more similar than the frequency response measures.

The bottom line is that we were not able to reliably detect differences nor reliably identify differences in ABX comparisons. The things we noted hearing that had any correlation at all across listeners and sessions are detailed in the first post. They were mostly about the amps that we heard in more than one comparison, which leads me to believe that with more focused listening on fewer amps we might be able to find some differences that hold up. Frankly, we probably had too many amps and tried to do too much. The next time we probably won't have more than 2 or 3.
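For anyone wondering what "failed to reliably identify in ABX comparisons" means statistically: with n forced-choice trials and a 50% guess rate, the chance of scoring k or more correct by luck is a binomial tail probability. A quick sketch (the trial counts below are made up for illustration, not our actual session counts):

```python
from math import comb

def abx_p_value(n_correct: int, n_trials: int) -> float:
    """One-sided binomial p-value: probability of getting n_correct
    or more right out of n_trials by pure guessing (p = 0.5)."""
    tail = sum(comb(n_trials, k) for k in range(n_correct, n_trials + 1))
    return tail / 2 ** n_trials

print(abx_p_value(12, 16))  # ~0.038: 12/16 would be significant at p < 0.05
print(abx_p_value(9, 16))   # ~0.40: 9/16 is indistinguishable from guessing
```

This is also why small trial counts make nulls hard to interpret: with only a handful of trials per amp pairing, even a real but subtle difference can easily fail to reach significance.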

Sorry to disappoint, but it is what it is. I don't think it is fair to post every uncorrelated comment we wrote down on every amp. I am still uneasy with posting as much as I did, because there may have been only one comment that supported each, and that is far from reliable. We can conclude that what we THINK we hear is highly variable and suspect.

We will have a much better idea about how to approach it next time.
 
#206 ·
I wish the test provided some conclusion, is all. A null result is why blind testing is often criticized. I do think that a proper test method with this many amps would take more listeners, more time, and a better room setup. I care less about the listener impressions than seeing the hard data in graph form. In a blind test, fatigue and the stress to answer "right" bias listener results. Are we testing the amps vs. listeners, or the test vs. human psychology?

Numbers and hard data are unquestionable. In future I would recommend finding a set of speakers known to give amps trouble at key frequencies (probably bass impedance/inductance swings) and measuring output vs. frequency over a specific bandwidth. Then vary the output voltages higher and run the test over and over. Some amps will simply not "wake up" speakers until a certain voltage is reached. I recommended bookshelves because bass is often not their strong suit, and poor amps can result in thin-sounding speaker response.

Remove the human element and the test becomes faster and conclusive. Then we can try to find out what the measurements mean in a listening experience. :)
 
#207 ·
No matter the initial conditions, variables introduced or placebos controlled, someone will always be dissatisfied with test results. That's the nature of testing. Hard data is not always the absolute arbiter of conclusion. Questionable recording practices and post-manipulation can come into play. I seriously doubt any of our panel engaged in such integrity-robbing practices. Rather, I believe they conducted themselves with the highest professionalism and exercised due diligence in set-up. Room acoustics and speaker positioning were already dialed in to the nth degree before the trials began. And why use hard-to-drive, specialty speakers unless their ownership proliferated throughout the mass market?

On one hand, you ask for hard data, but on the other you speak of "waking up" speakers; Where's the hard data for that? Remove the human element, and you have (drum roll, please) The Terminator Syndrome: machines measuring machines producing physical phenomena for other machines. The results can hardly be soothing. Right, Ahhnolt?

Sent from my iPad using HTShack
 
#208 ·
Whatever the shortcomings of blind listening tests are I have yet to see any other method proposed that is better.
If the proposal is to let measurements be the end of the argument then the electrical measurements on the many and various amplifiers that have been published in HiFi magazines should (if you understand basic electronics and orders of magnitude) lead to the conclusion that properly functioning amplifiers that are not overdriven will sound so similar that they are unidentifiable from each other in listening tests.

Every time any group starts an amplifier "shootout" there seems to be a groundswell of hope that there will finally be a conclusion that's different from all of the other controlled amplifier listening efforts that have come before.
But alas, if the levels are carefully matched and the listeners do not know which machine is powering the speakers the machines all sound the same.
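The "carefully matched" part is worth quantifying, since level differences well under 1 dB are widely reported to be perceived as quality differences ("fuller," "more present") rather than as loudness. A sketch of the arithmetic, using hypothetical voltmeter readings at the speaker terminals (not measurements from this event):

```python
import math

def level_diff_db(v1: float, v2: float) -> float:
    """Level difference in dB between two amps driving the same load,
    from AC voltage measured at the speaker terminals with a test tone."""
    return 20 * math.log10(v1 / v2)

# Hypothetical 1 kHz test-tone readings:
print(level_diff_db(2.83, 2.80))  # ~0.09 dB: matched closely enough
print(level_diff_db(3.17, 2.83))  # ~0.99 dB: the louder amp may simply "sound better"
```

This is why sighted, unmatched comparisons so often find clear winners that vanish once levels are trimmed to within a tenth of a dB.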
 
#209 · (Edited)
Chasnit wrote:

But alas, if the levels are carefully matched and the listeners do not know which machine is powering the speakers the machines all sound the same.
Yep, couldn't have said it better. But those were some pretty serious amps. Judging by the ridiculous price drops Sonnie resorted to... not the price range most of us are at. Maybe it would have been nice to throw in some more reasonably priced contenders. Kudos for making it happen though; it's nice to know the extra dollars are better spent elsewhere. At least that's what I'm taking away from it.
 
#211 ·
Some, maybe even most, people would spend their dollars elsewhere. Some for the reasons you stated; others for reasons dealing with mob mentality. Still others place high value on certain differences, even if only perceived. So perceived or not, there's nothing wrong with someone spending more if the difference is important to them. Sure build quality, craftsmanship, and appearance play an influential role in how one amp sounds over another. Sure blind tests say otherwise. The hobby is big enough for both camps. Each just uses different machines to accomplish the same task.

Sent from my iPad using HTShack
 