Post by KeithL on Mar 14, 2013 10:19:04 GMT -5
The basic fact remains that, while ABX testing is indeed flawed, every other method is far MORE flawed. For every other method of testing, you can take all the flaws of ABX testing, and add a lot more. (This makes sense because ABX testing was developed as a way to limit the flaws and inconsistencies in testing as much as possible.)
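The scoring behind an ABX run is simple enough to sketch. As a rough illustration (my own sketch, not anything from this thread): the usual question is how likely a listener's score would be under pure guessing, which is just a one-sided binomial tail at p = 0.5 per trial.

```python
import math

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(math.comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A listener who genuinely hears no difference can only guess,
# so 12/16 or better happens by chance only about 3.8% of the time.
print(round(abx_p_value(12, 16), 3))  # → 0.038
```

The conventional threshold is simply how small that probability must be before "he got lucky" stops being a plausible explanation.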
As I'm sure some wise philosopher once said:
When faced with being unable to achieve perfection, you'll get the best result by choosing the product or method that is the least imperfect.
Even though we know, and have known for years, that eyewitness testimony is very unreliable, we still use it - UNLESS we have some decent camera footage or some other more reliable information. Likewise, if you can't manage to run a real ABX test, then sighted testing is the next best thing. And if you're trying to compare some new acquisition to something you heard last month which isn't available anymore, you don't have much choice. However, since ABX testing really isn't that hard to do, it seems to make sense for at least serious reviews to go with the least flawed method.
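And the mechanics really aren't hard: each trial presents A, B, and an unknown X that is secretly one of the two, and the listener just has to say which. Here is a minimal, hypothetical trial loop (the `listener` callback and every name in it are my own inventions for illustration, not any real test rig's API):

```python
import random

def run_abx_trials(listener, trials=16, seed=None):
    """Skeleton of an ABX session (a sketch, not a real test rig).
    `listener` is a hypothetical callable taking the labels (a, b, x)
    and returning 'A' or 'B' for which one it thinks X is; in a real
    rig, audio would be played at each step instead of passing labels."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        x = 'A' if rng.random() < 0.5 else 'B'  # X is secretly A or B, 50/50 each trial
        guess = listener('A', 'B', x)
        correct += (guess == x)
    return correct

# A "golden ears" listener who always identifies X correctly:
print(run_abx_trials(lambda a, b, x: x))  # → 16
# A listener who truly hears no difference can only guess:
print(run_abx_trials(lambda a, b, x: 'A', seed=0))
```

The point of randomizing X on every trial is exactly the one made above: the listener's eyes, expectations, and the test order itself can't tell them the answer.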
This is the real point here.....
It's perfectly fine for me to tell my friends that, as far as I can remember, my new DAC sounds a bit cleaner in the high end than the one I got rid of last week. However, they're not paying me to give them accurate and unbiased reviews. Now, in purely subjective situations, it's also not unreasonable to give weight to what your favorite reviewer says he likes - once you have determined that his tastes usually align well with yours. If he likes it, then you probably will too.
However, his preferences should be clearly labelled as such, and not "sold" as "facts". I have no problem whatsoever if you choose to accept some reviewer's opinion over properly determined and vetted facts. My only problem is that many people seem unable to distinguish between the two - and many magazines tend to do their best to promote the confusion rather than dispel it.
In fact, if, as you claim, we're prone to hearing things that aren't there, and remembering things we didn't even hear, then how can we place ANY credence on anything that a tester or reviewer says UNLESS THEY HAVE EVERY PRODUCT THEY'RE DISCUSSING DIRECTLY IN FRONT OF THEM, and unless we can also eliminate every possible bias? In short, if you're right, then we need to throw out ALL reviews except those based on ABX tests, run in real time, within a minute or two, with no memory involved - because memory itself cannot be trusted.
Alternatively, are you suggesting that we should USE and INCLUDE the flaws in the test? If that's the case then, when we're comparing an expensive amp in a fancy case to one in a cheap plastic box, we should show the listeners the expensive box before they listen to both amps, so they have the SAME BIAS when listening to both.
However, by letting them see both boxes, in a "sighted test", we're just encouraging them to "listen with their eyes" - in which case we're comparing fancy cases and not sound. Personally, when I'm talking about audio equipment, I prefer to judge it on sound. (I may sometimes choose to pay extra for a fancy case or a nice control panel, but I want to KNOW what I'm paying for...... )
An honest reviewer would say things like: "In a blind test, I couldn't hear the difference, but, when I looked at them AFTER the listening test, I decided I really liked the feel of the controls on Product X a lot better than those on Product Y - and, to me, that's worth the extra cost." But he (or she) wouldn't risk being biased by how they felt about the looks until AFTER they'd done the listening test. When I hear someone go on for pages about how well made something is, followed by their repeating a lot of pseudo-scientific propaganda about why the manufacturer says it should sound good, followed at the very end by their actually listening to it.... I can't even guess how much their obvious biases have affected their judgment - and I don't bother to try (I assume they're biased and ignore or discount most of what they say).
Not only did he claim that a sound perceived seconds before can cause us not to perceive another sound seconds later, but also that a sound perceived seconds before can cause us to perceive another sound seconds later that isn't actually there ("perceive" as opposed to "hear", i.e. because our memory, among other things, shapes our perception of what we hear). Every expert in auditory neuroscience (every sane expert in auditory neuroscience, that is...) will tell you there's nothing questionable about this claim.
This example Bob Stuart described in the TAS interview shows that our memory CAN cause us to hear sounds that aren't actually there:
As a matter of fact, that is exactly my whole point to begin with. ABX testing cannot cause memory feedback to work differently, which is the very reason why ABX testing is flawed. For ABX testing not to be flawed, you'd have to eliminate memory feedback altogether. Hence the sentence: "If you had the memory of a goldfish, maybe it would work."
The earth used to be flat - until it was proven round. That, however, doesn't make the earth any less round today.
The burden of proof rests with the one who holds the affirmative, Yves. That means that if Dr Stuart contends that a sound heard seconds before can cause us not to hear another sound seconds later, it is up to him to produce evidence for that claim.
You've advanced a claim which, apparently, no conceivable empirical observation could show to be false. That means it has zero information content, i.e., the world would look exactly the same whether it were true or false.