KeithL
Administrator
Posts: 10,273
Post by KeithL on Dec 28, 2016 10:35:42 GMT -5
[quote]I'm going to agree with both sides here...... I have to compliment Keith on this very non-confrontational/diplomatic post, as I believe he is very kindly avoiding choosing sides. However, just reading his first paragraph clearly indicates he is generally in favor of many blind tests, which is directly in conflict with the title of this thread: "Why double-blind testing is completely worthless." In the second paragraph he changes topics, talking about choosing audio equipment on factors other than how it sounds, such as looks, control feel, and features. I have mentioned in one of these recently connected threads that I actually own the Emo XDA-2. I did lots of study and consulting with others before I ordered it, and I decided I wanted it whether or not I would be able to hear any sound advantages over the PC's DAC or that of my ERC-2. I was happy to make it the "control center" for my PC sound system due to its ability to control both the Airmotiv 4's and the Mirage 8" sub, with good connections, blending, and remote volume control. It of course accommodated my good headphones, and it would also accommodate an external CD player if desired. Much of my listening through the speakers is near field, sitting at my computer desk, but many times I listen away from the desk, up to 20 feet away in the adjacent kitchen or living room, rather than turning on my main system. The convenience of the remote control makes it a strong buying factor when I'm away from the computer desk. I did in fact do some fairly extended sound comparisons and generally found little if any significant difference, though I seemed to hear slightly better sound with the ASRC. I highly value my XDA-2 for reasons other than sound, though perhaps there is a little sound improvement as well, albeit quite subtle.[/quote]

Thanks.... and, yeah, you caught me..... To me, it's a little bit like someone starting a thread entitled "Why Santa Claus is absolutely positively real......"
- I would defend anyone's right to believe whatever they like (while pointing out that there is in fact a literal truth).
- I might argue that "to kids he's real".
- I might suggest that "he's a real fictional character".
- I might even agree that, in one sense, every person running around in a Santa suit is "a real Santa" (they're there, and they're real).
- I might suggest that "he's a real conceptual meme that represents certain religious myths and commercial sentiments".
However, underneath it all, you're not going to convince me that there's really a jolly fat fellow riding around on a flying sled. And, in that same context, I'm quite certain that double blind testing has a lot of value in many situations (whether you think it's the be-all and end-all of testing or not).
KeithL
Administrator
Posts: 10,273
Post by KeithL on Dec 28, 2016 11:54:19 GMT -5
[quote]Just out of curiosity, and I haven't read this entire thread, has a DBT ever been done where minor imperfections were deliberately introduced in the DACs? Say, two versions of the DC-1, with one of them having a small (and variable) amount of conversion imperfection (not sure if this is easily possible). At what percentage of difference between the two do trained (and non-trained) listeners reliably identify a difference (preference is immaterial)?[/quote]

[quote]That is a good question IMO, and of course it is hypothetical, no problem. I'm not sure anyone here could give a definitive answer. I would think a decibel difference might be much easier to work with than trying to figure out a percentage for loudness variations. Usually I have read that the average Joe can detect down to about a 1 dB difference in the 1 kHz-4 kHz range when we are talking about loudness; other sources, which I think are perhaps more accurate, say that those with excellent hearing can detect down to about 0.5 dB. These figures are for the midrange, where our hearing is most sensitive. As for frequencies, I have read that the average Jane can distinguish frequencies that are about 0.25% (or so) apart, and some folks have far better pitch discrimination than others. Many times our ears are even less accurate, which is why I always use a RS meter when I manually (which I always do) set the speaker gain/volume in the speaker setup menu. Many folks will set the speakers by ear and be 1-2 dB off of exact level. With care (I always use a quality photo tripod and other cautions), the meter allows one to get results down to 0.5 dB (or even close to 0.25 dB with great care), which is much more precise than human ears. I usually don't trust the auto setup speaker processing in Pre-Pros/AVRs. Yours is an interesting question and I'm not sure if my thoughts are any help. Maybe others can contribute.[/quote]

The big problem with a lot of generalizations is that they are only usually true - not always true. The generally accepted issue with level matching and double blind testing is NOT that differences in level are audible. The issue is that slight differences in level that are NOT audible can be perceived as differences in sound quality. So the louder one doesn't sound louder - but it sounds "slightly better" - which skews the comparison. It's kind of like how you might not notice if your "white" room has a slight blue tint or a slight yellow tint; but the one with the blue tint seems "cooler", while the one with the yellow tint seems "warmer". The real issue is that, while we know such things happen, they can affect different people differently, so it's difficult to rule them out.

The other thing to remember when talking about loudness is that loudness and perceived loudness are both time and frequency dependent. For example, if I were to play a test tone and gradually increase the level by 1 dB over several seconds, you almost certainly wouldn't notice. However, if I were to switch the test tone up and down in 1 dB steps...... BEEP..beep..BEEP..beep..BEEP ......it would be obvious. Likewise, if you had a speaker whose response goes up and down over a range of +/- 1 dB, it would sound quite neutral; but, if I were to play a perfectly smooth sweep tone through it, you might notice that the tone "warbles" where the variations occur at frequencies near each other. And, if you like listening to vocals, and those variations fall inside the vocal range of your favorite singer, you might notice.... but, if you prefer instrumental jazz, you might not notice at all....

The same applies to frequency: my guess is that most of us can distinguish frequencies that alternate much better than ones that gradually shift. There's a big difference between listening to two tones, one after the other, and identifying whether they're the same or different, and telling the difference between a steady tone and one that warbles - even a little bit. If I get a chance I'll try and make up some test tones.... to demonstrate/test some of these things..... (there's a quick sketch of two such tones just below).
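Here's a minimal sketch of two such test tones, in Python with numpy (the 1 kHz frequency, 1 dB step, and durations are illustrative assumptions, not anything specified in this thread). It's worth noting how small the change really is: 1 dB is only about a 12% amplitude change.

[code]
import numpy as np
import wave

SR = 48000  # sample rate in Hz (assumed)

def tone(freq, seconds, level_db):
    """Sine burst at level_db relative to a -6 dBFS reference."""
    t = np.arange(int(SR * seconds)) / SR
    return 0.5 * 10 ** (level_db / 20) * np.sin(2 * np.pi * freq * t)

# BEEP..beep..BEEP: 1 kHz alternating between 0 and -1 dB every half second.
steps = np.concatenate([tone(1000, 0.5, -(i % 2)) for i in range(8)])

# The same 1 dB change, spread smoothly over four seconds instead.
t = np.arange(SR * 4) / SR
drift = 0.5 * 10 ** ((t / 4 - 1) / 20) * np.sin(2 * np.pi * 1000 * t)

def write_wav(name, samples):
    """Save mono 16-bit PCM."""
    with wave.open(name, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        w.writeframes((samples * 32767).astype(np.int16).tobytes())

write_wav("steps.wav", steps)   # the alternating steps are plainly audible...
write_wav("drift.wav", drift)   # ...the slow drift usually is not
[/code]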
The problem with "introducing minor imperfections to a DAC" is that there are a HUGE number of ways in which a conversion could be flawed. For example, most of you probably know that "jitter" refers to imperfections in timing between digital audio samples (so the sample arrives earlier or later than it should). However, in reality, it's MUCH more complicated (one flavor is sketched below):
- Is the timing variation random, or does it follow a pattern of some sort?
- If random, what is the distribution of the random errors? (Apparently errors that are "data correlated", meaning they're related to the music that's playing, are more audible.)
- If it follows a pattern, what is the pattern? (Does the timing vary from correct according to a sine wave, a square wave, or a triangle wave?)
- If it's a pattern, what is the frequency of the pattern? (Studies have shown that jitter occurring at certain frequencies is far more noticeable than at others.)
- And which DAC are you using? (The mechanisms used by various DACs to eliminate jitter often work well at some frequencies and poorly at others.)
So which aspect of possible flaws would you like to test today? As someone once said long ago in another context: "There is only one right, but there are an infinite number of ways to stray from it." (Luckily, with any reasonably well designed DAC, the errors tend to be very small...... so we're probably better off trying to reduce them to total inaudibility rather than trying to characterize them.)
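To make that taxonomy concrete, here's a sketch of just one of those jitter flavors - purely sinusoidal timing error - with an assumed 2 ns peak amplitude at a 1 kHz rate (made-up but plausible numbers). It shows up as sidebands flanking the test tone; change the pattern, rate, or amplitude of the jitter and the sidebands move and grow accordingly.

[code]
import numpy as np

SR, F0 = 48000, 10000   # sample rate and test tone frequency, Hz (assumed)
t = np.arange(SR) / SR  # one second of ideal sample instants

# Each sample is taken slightly early or late, following a 1 kHz sine pattern.
jitter = 2e-9 * np.sin(2 * np.pi * 1000 * t)       # 2 ns peak timing error
jittered = np.sin(2 * np.pi * F0 * (t + jitter))

# A windowed FFT reveals sidebands at F0 +/- 1 kHz, here roughly
# 20*log10(pi * F0 * 2e-9) = -84 dB below the tone itself.
spectrum = np.abs(np.fft.rfft(jittered * np.hanning(len(t))))
peak = spectrum.max()
for f in (F0 - 1000, F0, F0 + 1000):   # bins are exactly 1 Hz wide here
    print(f"{f} Hz: {20 * np.log10(spectrum[f] / peak):7.1f} dB")
[/code]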
KeithL
Administrator
Posts: 10,273
Post by KeithL on Dec 28, 2016 12:10:32 GMT -5
In reading over this thread, it occurred to me that there's one more thing which really needs to be noted....... It's CRITICAL to define EXACTLY the purpose of any test BEFORE devising the test strategy itself, because the purpose will often determine both the appropriate test methodology and the way the results are interpreted.

For example, if you're trying to determine "whether most people can hear a difference between A and B", then you want a reasonably large sample, with reasonable variation in your sample population. However, if you're trying to determine "if there is an audible difference at all", then you want the people who are specifically best at hearing tiny differences (so probably "critical and experienced listeners"). Likewise, if you're researching a new compression algorithm, it might be reasonable to use samples from popular music, and to avoid unusual test tones that never occur in nature. However, if you're testing "the limits of human perception", then it's perfectly reasonable to use strange test tones that would never occur in normal music, or perhaps recordings of shattering glass.

One excellent example of this is the set of "impulse response pictures" often published when discussing DACs. Response to a single sharp impulse is an excellent way to visualize ringing in filters. However, a single sharp impulse virtually never occurs in nature, and is in fact not a valid signal for recorded digital audio. (Because digital audio MUST be band-limited before conversion, a single sharp square impulse CANNOT exist in a legitimate digital audio sample; a sketch of this follows below.)

https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1116-1-199710-S!!PDF-E.pdf

"As an example, if the actual sequence of audio items is identical for all the subjects in a listening test, then one could not be sure whether the judgements made by the subjects were due to that sequence rather than to the different levels of impairments that were presented."

"Where non-homogeneity is expected this must be taken into account in the presentation of the test conditions."

"A major consideration is the inclusion of appropriate control conditions."

"It should be understood that the topics of experimental design, experimental execution, and statistical analysis are complex, and that only the most general guidelines can be given in a Recommendation such as this. It is recommended that professionals with expertise in experimental design and statistics should be consulted or brought in at the beginning of the planning for the listening test."

"It is important that data from listening tests assessing small impairments in audio systems should come exclusively from subjects who have expertise in detecting these small impairments. The higher the quality reached by the systems to be tested, the more important it is to have expert listeners."

Should I go on?
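Following up on the impulse-response point above, a small numeric sketch (the 48 kHz sample rate and 20 kHz brick-wall cutoff are assumptions for illustration, not taken from any DAC discussed here): the closest legal relative of a one-sample "click" in band-limited digital audio is a sinc pulse, and its pre- and post-ripple is precisely the "ringing" those impulse pictures display.

[code]
import numpy as np

SR = 48000   # sample rate, Hz (assumed)
FC = 20000   # brick-wall cutoff, Hz (assumed)
k = np.arange(-32, 33)   # sample indices around the impulse

unit = (k == 0).astype(float)   # one-sample impulse: energy at ALL frequencies
bandlimited = (2 * FC / SR) * np.sinc(2 * FC * k / SR)   # same click, confined to 0-20 kHz

# The ripple before and after the main peak is not a defect added by a filter;
# it is simply what confining a click to the audio band looks like in time.
for i in range(-3, 4):
    print(f"n={i:+d}   unit={unit[k == i][0]:5.3f}   band-limited={bandlimited[k == i][0]:+6.3f}")
[/code]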
Post by yves on Dec 28, 2016 12:10:53 GMT -5
[quote author="KeithL"]But in this context they are in fact related..... If you're doing a comparison between multiple items, then anything that skews preferences matters.... and that includes the order of presentation. HOWEVER, the situation is quite different when you're doing a study on whether something exists or not (or is audible or not). In this situation, there's no consideration of fairness, and bias is simply irrelevant, other than as a motivating factor. The result is an either/or answer: either someone can hear a difference or NOBODY can hear a difference. Since you are essentially attempting to justify a null assumption (or fail to justify it), the most accurate method is the one that provides subjects the absolute best possible bias to succeed. (This is true because, unlike other forms of tests, a bias CANNOT skew positive results and make them less reliable, because we can statistically test a positive result; but a bias can skew a null result by raising doubt that a "best effort" was made to succeed. There is no form of bias that will allow someone to hear a difference that doesn't exist; at most, a bias could prevent them from noticing a difference that does exist. So the most accurate result will be obtained with the strongest possible bias in favor of a positive result. This will ensure that a null result is really due to a null condition.)

Think of it like trying to determine the absolute fastest a human can run. While a LACK of motivation might cause the best runners not to show up, or might cause the participants to fail to "give their all", there is no possible bias that will force or encourage anyone to run faster than they actually can. Therefore, the most accurate result will be obtained if you provide the most motivation possible. An Olympic gold medal is great incentive for some people, and a million dollar prize is better for others, but you'll probably get an even better result if you release some man-eating lions behind the runners.

In our example, since you are trying to test a human limit - the ability to distinguish a certain difference - and it's simple enough to rule out any false positives (because we can statistically determine whether someone has actually detected a difference or not), we will get the most accurate result by testing the widest variety of subjects, and giving them the most motivation to succeed. Since we cannot practically test every human on Earth, and we cannot rely on every participant being optimally motivated, the closest we can practically come is to provide sufficient motivation and allow some self-selection. If we offer a huge prize, the people who believe they have an opportunity to win will show up, and will do their best to win the prize. While it's possible that we will miss a potential "winner" who lacks the willingness or confidence to compete, we will attract most of the people who already believe they might win, avoid the people who have the opposite bias, and strongly motivate the ones who show up. And, with luck, the ones who think they might win will really be the ones with a reasonable likelihood of doing so. (This gives us the best practical chance for success. And, by starting with the best chance for success, it also gives us the best claim that, if we fail, it's because the answer is really null. Back to my original analogy: if you want to claim to have found "the fastest person on Earth", within practical limits, the best way is to have an open contest, where anyone is welcome to compete, and give them both a strong motivation to compete and a strong motivation to win.)

I agree entirely with you that the sequence of presentation tends to skew judgment.... however, in this case, we're not talking about judgment.... it's a simple yes or no (the difference is or is not audible). If you want to extend the test to "which one sounds better", then you've specified a more complex test. However, you'll get better overall accuracy, with less effort, if you separate out the question "is there a difference at all" and answer it first, with a simpler yes/no test. (After all, if there is no difference, then any effort spent trying to figure out which is better will be wasted - and any result you get will be "statistical noise".) Alternately, if there is no actual difference, a study about what factors affect the differences that people IMAGINE are there would be most interesting... especially to the marketing department.[/quote]

That's not what I was referring to, because my first quote from the ITU document is about the sequence of audio items that is heard by the test subject, not about motivating the test subject. In a sequence of audio items, the order in which the items are presented skews judgement of them, and the 2nd paragraph in my response to Chuckie is a logical explanation of that observation. The judgement I am referring to here is a simple yes or no judgement. In the specific case of attempting to find evidence in support of a null hypothesis - which is also what I am referring to here (i.e., I am NOT referring to disproving the null hypothesis) - the fact that the sequence of audio items biases this judgement towards "no" must be taken into account. In a large enough group of expert listeners, however, it is possible to make a best effort by using statistical analysis to identify, and therefore eliminate, that particular bias from the analysis result, IF the sequences vary from listener to listener. The bigger the group of listeners, the more accurate statistical analysis becomes. So with only a few listeners a best effort cannot be made to ensure that a null result is really due to a null condition, which was also my point. The ITU document confirms this. It also implies that with only a single participant no effort can be made to ensure this, let alone a BEST effort! So no, you can't claim that the null hypothesis is true - reliably, anyway - if you can't take into account the bias that skews judgements in favor of it. Further, for detecting small differences, the critical importance of allowing ONLY expert listeners to be used is still something that simply cannot be denied: seanolive.blogspot.be/2008/12/part-2-differences-in-performances-of.html Again, the ITU document also confirms this. Real scientists are like real audiophiles: they're strapped for cash, and time is money, so they usually don't want to waste much of their time with foolish experiments whose null result is due to a fake null condition.
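As a quick illustration of the point quoted above that "we can statistically test a positive result" - a minimal sketch using only the Python standard library; the 14-of-16 score is a made-up example, not a result from any test discussed here:

[code]
from math import comb

def p_value(correct, trials, chance=0.5):
    """Exact one-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` by pure guessing."""
    return sum(comb(trials, i) * chance**i * (1 - chance)**(trials - i)
               for i in range(correct, trials + 1))

# Hypothetical listener scores 14 of 16 in an ABX test:
print(f"p = {p_value(14, 16):.4f}")   # ~0.002 -- very unlikely to be guessing
[/code]

This is why a positive result is self-validating in a way a null result is not: a lucky-guess explanation can be made arbitrarily improbable by adding trials, while a score near chance remains ambiguous.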
klinemj
Emo VIPs
Official Emofest Scribe
Posts: 15,098
Post by klinemj on Dec 28, 2016 13:33:48 GMT -5
Keith makes a very good point. If the objective is to determine if Mark can hear a difference, then Mark must be the test subject. If the objective is to determine if the average person can hear a difference, then a representative group of "average people" must be recruited and tested. If the objective is to determine if a claim could be made (such as "preferred by audiophiles"), then a representative group of audiophiles must be recruited and tested.
Test design must align with test objective.
Mark
Post by thepcguy on Dec 28, 2016 13:45:25 GMT -5
[quote author="klinemj"]Keith makes a very good point. If the objective is to determine if Mark can hear a difference, then Mark must be the test subject. [...] Test design must align with test objective. Mark[/quote]

Yes, it is that simple. If anyone can show the world he/she "can hear wires" - just one individual, no statistics needed - this debate will end. This thread is worthless too. We're just going in circles. Round and round we go.
KeithL
Administrator
Posts: 10,273
Post by KeithL on Dec 28, 2016 14:33:30 GMT -5
[quote author="yves"]The judgement I am referring to here is a simple yes or no judgement. In the specific case of attempting to find evidence in support of a null hypothesis, the fact that the sequence of audio items biases this judgement towards "no" must be taken into account. [...] So no, you can't claim that the null hypothesis is true, reliably anyway, if you can't take into account the bias that skews judgements in favor of it. Further, for detecting small differences, the critical importance of allowing ONLY expert listeners to be used is still something that simply cannot be denied.[/quote]

You need to remember that, when you're talking about a null hypothesis, no complex statistical analysis is required. If a single person can consistently and reliably hear a difference, in one specific condition, with one specific test sample, then the null hypothesis is proven false. If you cannot find a single situation where someone can consistently and reliably hear a difference, then you have merely failed to prove the null hypothesis false.

My point, which I maintain, is that you can't PROVE the null hypothesis to be true WHETHER YOU TAKE THE BIAS INTO ACCOUNT OR NOT. It is simply impossible to ever prove a null hypothesis absolutely true (at best, you may claim to have proven it true to within some possible factor of error). Therefore, taking that bias into account does NOT make you any more able to deliver an absolute result. You cannot prove that "nobody can hear a difference" unless you test every human being on Earth under every possible test condition (and, since "every human being" includes people no longer alive, and people not yet born, you cannot possibly test them all). However, I would agree that, if you take biases into account, you can make a more reasonable claim to having "made a reasonable effort". More to the point, if a specific individual claims to hear a difference, it's relatively trivial to test that single claim for validity (there's a sketch of this below).

I also disagree with your assertion that, in the specific case of a yes/no proposition like this, it's "critical to only allow expert listeners". The simple reality is that it doesn't matter if a million inexpert listeners take the test and none of them hears anything, as long as one single expert listener does, and so proves our case. However, I definitely agree that we should include as many expert listeners as possible, to maximize our chances of getting a positive result thanks to their better qualifications. That does, however, introduce the question of how we define "experts" and "optimum test subjects". For example, it is a known fact that humans lose acuity in their ability to detect high frequencies as they get older. Therefore, to pick an easy example, it's POSSIBLE that kindergarten children are more able to hear high frequency distortion than college-aged "audio experts", and in that specific situation they would be better test subjects than thirty year old musicians and acoustics experts. For that reason, we should be sure to include some of EACH in our test sample. Obviously I agree with you that any competent scientist is going to do their best to improve the odds of detecting whatever they're trying to prove. But we also have to do our best to avoid other types of bias... for example, the bias a manufacturer of audio equipment might have against testing kindergarten children (because they don't have credit cards and so probably won't be buying any of his equipment), or a preference for using members of the AES as test subjects (because they're already present at the test).
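To make that asymmetry concrete, here's a minimal sketch (Python standard library only; the trial counts and the assumed 70% "true" hit rate are illustrative, not from any study cited here). It shows both how a single positive claim gets validated and why a failed run, especially a short one, proves very little:

[code]
from math import comb

def chance_of_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def passing_score(n, alpha=0.05):
    """Fewest correct answers out of n that a pure guesser reaches < 5% of the time."""
    for k in range(n + 1):
        if chance_of_at_least(k, n, 0.5) <= alpha:
            return k

for n in (10, 16, 25, 50):
    k = passing_score(n)
    power = chance_of_at_least(k, n, 0.7)   # listener who is genuinely right 70% of the time
    print(f"{n:>2} trials: {k} correct demonstrates a difference; "
          f"a true 70% listener still fails {1 - power:.0%} of such tests")
[/code]

So one solid positive settles the question, while even dozens of null trials leave plenty of room for a real but small difference to hide - which is the point about never being able to prove the null side absolutely.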
DYohn
Emo VIPs
Posts: 18,494
Post by DYohn on Dec 28, 2016 16:36:17 GMT -5
The biggest problem with generalizations is that they are never 100% applicable. The only thing worse is absolute statements.
Post by Jim on Dec 28, 2016 16:49:58 GMT -5
[quote author="klinemj"]Test design must align with test objective. Mark[/quote]

That should be emphasized!! Unless you want a misaligned test?
Post by yves on Dec 28, 2016 17:01:10 GMT -5
[quote author="KeithL"]You need to remember that, when you're talking about a null hypothesis, no complex statistical analysis is required. If a single person can consistently and reliably hear a difference, in one specific condition, with one specific test sample, then the null hypothesis is proven false. [...] But we also have to do our best to avoid other types of bias...[/quote]

While it is true that a null hypothesis cannot be PROVEN true, it is nonetheless crucial to make a best effort at finding evidence in support of it, in order to ensure there were no inadvertent weaknesses in our test that could lead to future criticism, and that we TRULY made our best effort to make the test as rigorous as we possibly could. Only then can we be sure our conclusion of "no difference was heard by our test subjects within the confines of the test, under those specific circumstances" is reliable enough to be used as a relevant source in later research. Section 6 in the ITU document goes further into this. The null condition never exceeds the bounds of the experiment, but our null result must still be representative for it to have any reasonable bearing outside its scope. The use of expert listeners is of critical importance because of the cost effectiveness involved: wasting valuable resources clashes with our concept of "making our best effort". Testing the hearing acuity of test candidates is part of what makes it possible to broaden the horizon of our scope. This is where the cautious advice of professional experts in experimental design and statistical analysis comes into play.
Post by 405x5 on Dec 28, 2016 17:20:25 GMT -5
[quote author=" Boomzilla" such tests so seldom show any difference at all" Precisely why I consider them invaluable, particularly with regard to dispelling myths about wires. Billl
Post by novisnick on Dec 28, 2016 17:30:24 GMT -5
[quote author=" Boomzilla " such tests so seldom show any difference at all" Precisely why I consider them invaluable, particularly with regard to dispelling myths about wires. Billl "dispelling myths about wires" I have heard differences in wires!! Gact! And could tell you which ones the were! ! Sorry to pop your bubble 🎈! Respectfully, Im not getting into this pissing match! Thank you.
Post by garbulky on Dec 28, 2016 17:32:57 GMT -5
What tests have been run to show the extent to which double blind level matched testing can be used to show an audible difference? How small does it go? Now I'm not talking about a 0.5 dB difference in volume. I'm talking about things like.... this unit sounds clearer. Or this one sounded less like there was a haze around it. Or this unit has a wider (perceived) soundstage. Or this unit sounds more natural in the treble. Now I get that all those things sound awfully subjective and not really quantifiable. But that is also what matters, because that's what you hear - the images your brain builds in your head. Not "this is 0.5 dB louder".
So have there been any tests of this nature or anything vaguely resembling "finer detail"?
Post by yves on Dec 28, 2016 17:35:55 GMT -5
[quote author=" Boomzilla" such tests so seldom show any difference at all" Precisely why I consider them invaluable, particularly with regard to dispelling myths about wires. Billl Try using DBT to to dispell old myths about DBT instead.
Post by 405x5 on Dec 28, 2016 17:40:13 GMT -5
[quote author=" novisnick" I have heard differences in wires!! Gact! And could tell you which ones the were! " Me too.....but when I found the one that was unplugged the difference went away 😄🎶😄🎶! Bill
Post by novisnick on Dec 28, 2016 17:43:36 GMT -5
[quote author=" novisnick " I have heard differences in wires!! Gact! And could tell you which ones the were! " Me too.....but when I found the one that was unplugged the difference went away 😄🎶😄🎶! Bill
Post by 405x5 on Dec 28, 2016 17:45:15 GMT -5
Ahh....I thought you'd like that one
klinemj
Emo VIPs
Official Emofest Scribe
Posts: 15,098
Post by klinemj on Dec 28, 2016 18:50:35 GMT -5
[quote author="DYohn"]The biggest problem with generalizations is that they are never 100% applicable. The only thing worse is absolute statements.[/quote]

This is absolutely true. LMAO!!!!

Mark
Post by sahmen on Dec 28, 2016 18:52:47 GMT -5
Is it me, or does an audio enthusiast or audiophile who enjoys arguing that "they all sound the same" actually sound like a walking oxymoron - a contradiction in terms? Just wondering...
klinemj
Emo VIPs
Official Emofest Scribe
Posts: 15,098
Post by klinemj on Dec 28, 2016 18:53:37 GMT -5
[quote author="garbulky"]What tests have been run to show the extent to which double blind level matched testing can be used to show an audible difference? How small does it go? [...] So have there been any tests of this nature or anything vaguely resembling "finer detail"?[/quote]

I have never seen any data I would consider credible, but I have no doubt such a test could be designed and executed. But most who would have an interest in seeing the data would not pay for it, and those who market equipment to us would mostly not want to share it if they had it.

Mark