I wish I could have said it better! Thank you to all the evaluators for the time and money they spent to accomplish this task, and thanks to Chashint for articulating what I could not. Great job, guys.
Your prior evaluation event threads have been beyond outstanding. What happened? Were you guys issued a legal gag order? 21 pages of nothing conclusive, just hints of "both sides are right" talk. Can you at least post the cumulative measured response graphs together on one graph for us, pretty please? I appreciate your efforts, but this thread ended up a real disappointment for me.

I wish you would have tested inefficient bookshelves as I suggested in the preparation thread. Both sets of speakers were high efficiency. Please give us the data, conclusive or not. What are you waiting for? :huh: Does this mean this blind test resulted in a statistical null result?
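For readers wondering what a "statistical null result" means here: in a blind ABX-style comparison, the question is whether listeners identified the amps correctly more often than coin-flip guessing would predict. A minimal sketch, with made-up trial counts (not the event's actual numbers):

```python
# Hedged sketch of a "statistical null result" in a blind ABX-style test:
# if listeners can't beat chance by a significant margin, the test fails
# to reject "the amps sound the same". Trial counts below are illustrative.
from math import comb

def binomial_p_value(correct, trials, chance=0.5):
    """One-sided probability of scoring at least `correct` by pure guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

print(binomial_p_value(14, 20))  # 14/20 correct -> p ~= 0.058, not below 0.05
print(binomial_p_value(16, 20))  # 16/20 correct -> p ~= 0.006, significant
```

In other words, a null result doesn't prove the amps are identical; it just means the listeners' scores never climbed far enough above chance to rule out guessing.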
Quote:
I wish the test provided some conclusion is all. A null result is why blind testing is often criticized. I do think that a proper test method with this many amps would take more listeners, more time, and a better room setup. I care less about the listener impressions than seeing the hard data in graph form. In a blind test, fatigue and the stress to answer "right" bias listener results. Are we testing the amps vs. the listeners, or the test vs. human psychology? Numbers and hard data are unquestionable. In future I would recommend finding a set of speakers known to give amps trouble at key frequencies (probably bass impedance/inductance swings) and measuring output vs. frequency over a specific bandwidth. Then vary the output voltages higher and run the test over and over. Some amps will simply not "wake up" speakers until a certain voltage is reached. I recommended bookshelves because bass is often not their strong suit, and poor amps can result in thin-sounding speaker response. Remove the human element and the test becomes faster and conclusive. Then we can try to find out what the measurements mean in a listening experience.

No matter the initial conditions, variables introduced, or placebos controlled, someone will always be dissatisfied with test results. That's the nature of testing. Hard data is not always the absolute arbiter of conclusion. Questionable recording practices and post-manipulation can come into play. I seriously doubt any of our panel engaged in such integrity-robbing practices. Rather, I believe they conducted themselves with the highest professionalism and exercised due diligence in setup. Room acoustics and speaker positioning were already dialed in to the nth degree before the trials began. And why use hard-to-drive specialty speakers unless their ownership proliferated throughout the mass market?
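To make the proposed measurement concrete: an amp with higher output impedance lets the speaker's impedance swings imprint themselves on the frequency response at the terminals. A minimal sketch of that voltage-divider effect, assuming a purely resistive load and entirely hypothetical impedance figures (nothing here is measured from the actual test):

```python
# Sketch: estimate how much the voltage at the speaker terminals deviates
# across frequency for two hypothetical output impedances. Treats the load
# as purely resistive for simplicity; real speaker loads are reactive.
import numpy as np

freqs = np.array([20, 40, 80, 160, 320, 1000, 4000, 10000])  # Hz
# Hypothetical speaker impedance curve (ohms) with a bass dip around 80 Hz
z_speaker = np.array([6.0, 4.2, 3.1, 4.8, 6.5, 7.0, 5.5, 6.8])

def level_deviation_db(z_out, z_load):
    """Voltage divider: level at the load in dB relative to an ideal 0-ohm source."""
    return 20 * np.log10(z_load / (z_load + z_out))

for name, z_out in [("stiff amp (0.05 ohm)", 0.05), ("high-Z amp (0.8 ohm)", 0.8)]:
    dev = level_deviation_db(z_out, z_speaker)
    spread = dev.max() - dev.min()  # frequency response variation in dB
    print(f"{name}: response spread {spread:.2f} dB across the band")
```

With these made-up numbers the stiff amp varies by under 0.1 dB while the high-output-impedance amp varies by about 1 dB, concentrated at the bass impedance dip, which is exactly the kind of pattern the proposed sweep would look for.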
Quote:
But alas, if the levels are carefully matched and the listeners do not know which machine is powering the speakers, the machines all sound the same.

Yep, couldn't have said it better. But those were some pretty serious amps. Judging by the ridiculous price drops Sonnie resorted to... not the price range most of us are at. Maybe it would have been nice to throw in some more reasonably priced contenders. Kudos for making it happen, though; it's nice to know the extra dollars are better spent elsewhere. At least that's what I'm taking away from it.
Chashint wrote:
Yep, couldn't have said it better. But those were some pretty serious amps. Judging by the ridiculous price drops Sonnie resorted to... not the price range most of us are at. Maybe it would have been nice to throw in some more reasonably priced contenders. Kudos for making it happen, though; it's nice to know the extra dollars are better spent elsewhere. At least that's what I'm taking away from it.

Some, maybe even most, people would spend their dollars elsewhere. Some for the reasons you stated; others for reasons dealing with mob mentality. Still others place high value on certain differences, even if only perceived. So perceived or not, there's nothing wrong with someone spending more if the difference is important to them. Sure, build quality, craftsmanship, and appearance play an influential role in how one amp sounds over another. Sure, blind tests say otherwise. The hobby is big enough for both camps. Each just uses different machines to accomplish the same task.
Quote:
A $300 AVR is not going to provide stable voltage into a difficult load. Lou, you are right: in the end, a machine can't tell us how another machine is interacting with our senses. But we cannot reliably predict any machine will interact with two different listeners the same way. I think this is why the test I mentioned is a good place to start; then listen to the key ranges once they are determined by initial screening. Then you can correlate objective to subjective tests and see if a pattern emerges. Wayne has already done this to an extent in his test of an Axiom amplifier into speakers. He would be a key resource in designing this test.

Yes, I see that now. It's more complicated than just differences between machines. Two listeners have different perceptions of reality and react to stimuli differently. We can take that one step further by noting that even one particular person may not react to an amp/song/speaker the same way from moment to moment. Psychological and physiological factors influence how we react to what we hear, whether it be ABX stress or wishful thinking. For that matter, something as trivial as a grocery list can distract us from discerning differences. Welcome to the machine!
Quote:
A $300 AVR is not going to provide stable voltage into a difficult load.

How do you know a $300 AVR is not going to be stable into a difficult load?
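One way to frame that question: the same output voltage demands proportionally more current and power as the load impedance drops, and whether a budget AVR's power supply can actually deliver that is exactly what a bench measurement would have to establish. A back-of-envelope sketch with illustrative numbers only:

```python
# Back-of-envelope for the "difficult load" question: a fixed output voltage
# demands much more current (and power) into a low-impedance dip. Figures
# are illustrative, not measurements of any particular AVR.
def demand(volts_rms, z_ohms):
    amps = volts_rms / z_ohms          # Ohm's law, resistive approximation
    watts = volts_rms ** 2 / z_ohms    # power delivered into the load
    return amps, watts

for z in (8.0, 4.0, 2.5):
    i, p = demand(20.0, z)  # 20 Vrms is 50 W into 8 ohms
    print(f"{z:>4.1f} ohm load: {i:4.1f} A, {p:5.1f} W")
```

The same 20 Vrms that means a modest 50 W into 8 ohms becomes 8 A and 160 W into a 2.5 ohm dip, which is where a weak supply would sag rather than hold voltage.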
Quote:
So perceived or not, there's nothing wrong with someone spending more if the difference is important to them.

We all have our paradigms! I just bought a very nice set of Michelins for my truck. Could have spent a lot less, but I like them!
Quote:
I wish the test provided some conclusion is all. A null result is why blind testing is often criticized.

This test did provide a very informative conclusion based on the original criteria. It's just not the conclusion you want it to be.
Quote:
In future I would recommend finding a set of speakers known to give amps trouble....

You want to do a specialized test to find the best amp to drive hard loads. That is not what this test was about.
BluRockinLou wrote:
We all have our paradigms! I just bought a very nice set of Michelins for my truck. Could have spent a lot less, but I like them!

Me too.
Quote:
One thing I might add is to never know which amp is which. If you built a false wall between you and the amps and just labeled them A, B, C, D, etc., then made the list of pairings so everybody got the same exposure/chance to put them through their paces, it might help to take out any preconceived expectations. No need to know which amp it is at all! Kinda simplistic until you think about it. :ponder:

You are right: as long as they are always called the same name, they could be letters, numbers, mineral names, whatever, to help remove any bias.
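A quick sketch of that blind-labeling scheme: shuffle neutral labels onto the amps, keep the key sealed until the trials end, and build a pairing schedule so every pair gets equal exposure. The amp names here are placeholders, not the actual entrants:

```python
# Sketch of the blind-labeling idea: random neutral labels plus a
# round-robin schedule so every listener compares every pair equally.
import random
from itertools import combinations

amps = ["amp1", "amp2", "amp3", "amp4"]  # hypothetical entrants
labels = list("ABCD")
random.shuffle(labels)
blind_key = dict(zip(labels, amps))  # kept sealed until the trials end

schedule = list(combinations(sorted(blind_key), 2))  # every pair once
random.shuffle(schedule)  # randomize presentation order per session

for a, b in schedule:
    print(f"Compare {a} vs {b}")
# After all sessions, reveal the key: print(blind_key)
```

With four amps that's only six pairings per pass, which also shows why adding more amps multiplies listening time quickly.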