|
Post by ÈlTwo on Jan 29, 2016 18:15:00 GMT -5
People with a lot of $$$ will still buy this product regardless... You're no fun, you didn't write "irregardless."
|
|
|
Post by pedrocols on Jan 29, 2016 22:18:08 GMT -5
This damn autocorrect on my cell phone... I keep forgetting to turn it off. Lol.
|
|
|
Post by lionear on Jan 30, 2016 1:20:55 GMT -5
KeithL, this has been an interesting thread. If I understand you correctly, the idea is to take some DATA and then add some META-DATA. If you detect an error in the DATA, you use the META-DATA to recover the DATA. I don't doubt that this can be done, but it assumes there is no error in the META-DATA. I think this is why the math indicates that digital errors are, ultimately, unavoidable. (And I think the only reason to use META-DATA, rather than sending multiple copies of the DATA, is that the DATA plus META-DATA will be smaller than two copies of the DATA. This puts less load on the network.)

When you copy a file from a CD to a hard disk, the data can be read and re-read many times - and a computer will take its sweet time doing it. But different priorities are at play when the computer/player is instructed to send a file to the DAC. Nobody wants a CD player that stops working if the data isn't perfect - they want one that keeps working even if there's some bad data here and there. So the emphasis will be on keeping the music going.

When I said that it may be impractical to re-send data, I was thinking of things like the Super Bowl, when content is sent out to millions of TVs. Services like Netflix, iTunes, and Xfinity are unlikely to entertain requests to re-send the data, and I have no idea whether a TV can get DirecTV or Dish to re-send data. I should have been clearer about this.

I think this will be my last post on this thread, but I will close with this: we put a lot of trust in digital data, but the situation changes when the job is very valuable and when lives are at stake. The Space Shuttle had five identical flight computers that were fed the same sensor data. Computer #1 might say "turn right." If Computer #2 also said "turn right," the controls would be moved so that the Shuttle turned right. If Computer #1 said "turn right" and Computer #2 said "turn left," the system would check with Computer #3; if Computer #3 said "turn right," the control surfaces would be moved to turn right. (Computers #4 and #5 were backups and were not part of the decision-making process.) This is, of course, a pretty scary situation, but there was a final level of insurance: two pilots and Mission Control constantly checking everything, prepared to take over if the correct action was really to turn left.
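As a rough illustration of the 2-out-of-3 voting scheme described above, here is a minimal Python sketch. The function and the command strings are purely illustrative; they are not how the Shuttle's actual flight software worked.

```python
# Minimal sketch of majority ("2-out-of-3") voting among redundant computers.
# The commands and the voting function are illustrative only.
from collections import Counter

def vote(commands):
    """Return the command a strict majority of computers agree on, else None."""
    winner, count = Counter(commands).most_common(1)[0]
    return winner if count > len(commands) / 2 else None

# Computers 1 and 3 agree, so the faulty computer 2 is outvoted.
print(vote(["turn right", "turn left", "turn right"]))  # -> turn right

# No majority: a backup system (or the pilots) has to decide.
print(vote(["turn right", "turn left", "climb"]))       # -> None
```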
|
|
|
Post by yves on Jan 30, 2016 6:08:51 GMT -5
If an error occurs in the CRC field itself (or the "META-DATA", as you refer to it), the way CRC works means that error will still be detected. Granted, if multiple errors exist within the same packet, there is still a small risk of them going undetected. However, the design of CRC is based on the notion that, if data corruption is severe enough to put multiple errors in the same packet, that severity will almost certainly be persistent: such heavy corruption usually comes from a serious hardware malfunction, so it will repeat in rapid succession and won't take long to reveal itself. Remember that normal interference isn't considered a malfunction; keeping the error rate it causes low enough for the CRC mechanism to cope with is an actual part of the design standard.
The reliability of this has been extremely thoroughly stress-tested, because otherwise the whole computer industry would fall flat on its face, taking down the world economy along with it and creating irreversible chaos and universal cataclysm. www.usb.org/developers/docs/whitepapers/crcdes.pdf
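To make the point concrete, the sketch below appends a CRC to a payload and shows that flipping a bit in either the payload or the CRC field makes the check fail. It uses Python's built-in CRC-32 for convenience; USB actually uses CRC-5 and CRC-16, but the principle is the same.

```python
# A CRC check catches single-bit errors in the payload *or* in the CRC field.
import binascii

def make_packet(payload: bytes) -> bytes:
    crc = binascii.crc32(payload).to_bytes(4, "little")
    return payload + crc

def check_packet(packet: bytes) -> bool:
    payload, crc = packet[:-4], packet[-4:]
    return binascii.crc32(payload).to_bytes(4, "little") == crc

good = make_packet(b"some audio samples")
print(check_packet(good))                          # True

bad_data = bytearray(good); bad_data[3] ^= 0x01    # flip a bit in the payload
print(check_packet(bytes(bad_data)))               # False - detected

bad_crc = bytearray(good); bad_crc[-1] ^= 0x80     # flip a bit in the CRC itself
print(check_packet(bytes(bad_crc)))                # False - also detected
```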
|
|
|
Post by geebo on Feb 2, 2016 20:36:46 GMT -5
|
|
|
Post by KeithL on Feb 3, 2016 15:14:04 GMT -5
A lot of this stuff goes even further than you might imagine.... and some of the information I'm seeing is somewhat dated....

(First, a minor correction.... the term "meta-data" is properly used to refer to "data about data".... as in the tags used in audio files to identify the song and artist, or the EXIF data in a photo that tells you when it was taken and what camera settings were used. It generally isn't used to refer to CRCs or error-correction data.)

In terms of making sure that your data reaches its destination without errors, there are basically two sorts of mechanisms.... In both cases, you check carefully for errors; then, in one case, you simply request that any damaged or missing data be re-sent, and, in the other, you repair or re-create the data using the good data you have plus some extra "recovery information" included with it. And, of course, a third "non-option" is to simply allow the error to go uncorrected - either by ignoring it entirely, or by making an approximate repair using snippets of data from nearby.

When you read a CD, whether an audio CD or a data CD, the data contains extra information which can be used both to verify that the data is perfect and to correct errors that are detected (and so to DELIVER perfect data either way). Trying to circumvent a serious error by rereading the data multiple times used to be popular in the days of floppy drives and tape backups, but isn't used very much lately; it really only worked well with mechanical errors, like a read head not making good contact with the disc surface, or a bit of dirt on the disc. CDs actually have two different levels of perfect error correction; together, they can perfectly repair a gap of up to 2.5 mm in a data track. (This system uses calculations performed on the data that is read; nothing is re-read or read multiple times.) The system allows very reliable detection of errors, whether multiple or single, and allows errors totaling up to 5% or 10% of the total amount of data to be corrected (the actual amount varies depending on the types of errors and where they are located).

One elegant aspect of how this is done is that the correction information is not simply a duplicate copy of any data; it is extra information which can be used in conjunction with the remaining good data to reconstruct errors anywhere (so, if you have 5% correction data, you can use it to fix errors anywhere on the disc, as long as they total less than 5% of the total amount of data). If the repair fails to produce a perfect result, most computer CD drives will "fault"; most audio CD players have a third level of "interpolation", where a gap larger than what can be corrected is filled in with content from nearby (at which point the data is no longer perfect).

However, one fallacy is that "digital errors are inevitable" - at least in the context of "delivered data". While it may in fact be true that the "raw data" on a typical DVD contains several thousand errors, the multiple layers of error detection and correction STILL act together to ensure that those errors are corrected before the "final data" is delivered to us.
It doesn't matter if there are a thousand errors on that disc because, by the time the error detection and correction has finished with the data, the likelihood of any of those errors remaining uncorrected is minute (we're talking about reading a billion typed pages a billion times and maybe - but just maybe - seeing one single wrong letter).

When you look at things like redundant flight computers, you're looking at an entirely different situation. Rather than reading information from a disc or other storage medium, where we can verify our data against an original, a flight computer GENERATES data. This is new data, so there's nothing to compare it to, and what we're confirming is that the entire process used to generate that data is correct - which is a lot more complicated. (I suppose it's up to you whether you think it's scarier to trust a machine to get this sort of thing right, or a human being. After all, we KNOW that humans make mistakes.) Incidentally, there are in fact ways to make machines act more like humans - which, in this case, means making them less accurate in return for making them less likely to make BIG mistakes. However, since we value the accuracy and speed of computers, it's easier to have three of them double-check each other. (Many modern fighter planes use different computers running different software. That way, even if one of the programs has a bug, the others - written by different people to produce the same result - are far less likely to produce the same wrong result when something goes wrong.)

Overall, though, it's incorrect to generalize about errors that occur in one place or another. A basic Ethernet network connection, under heavy load, will typically see as much as 10% or 20% data loss; however, it also has multiple levels of very effective error detection and correction, which serve to "convert the data loss into a loss of performance". (In other words, the number of allowable errors at the top level is fixed at zero; when you get more errors at the bottom level, it takes more re-sends to correct them, so the overall throughput goes down, but the number of errors at the top level remains zero. And, yes, the error detection and correction really is good enough to be "effectively zero".)

As for USB, it is simply not a great choice for transmitting audio. (I can copy data over a USB connection and NEVER see a SINGLE error that hasn't been perfectly corrected. It is the priority which USB AUDIO places on real-time data delivery that makes it inferior in this regard. My USB drive doesn't mind waiting for a packet to be re-sent if it turns out to be bad; most DACs risk an audible tick or pop if they do so.) However, I can reduce the likelihood of this to a minimum by proper system design (fast computer, dedicated fast USB port, not having other programs running).
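As a toy illustration of the point above that recovery data is not a duplicate copy, the sketch below stores one XOR parity block alongside four data blocks; that single extra block can rebuild any one block that is lost, wherever the loss occurs. Real CDs use cross-interleaved Reed-Solomon coding, which is far more capable, but the basic idea is the same.

```python
# One XOR parity block can reconstruct ANY single lost data block.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

blocks = [b"ABCD", b"EFGH", b"IJKL", b"MNOP"]   # the "real" data

parity = blocks[0]
for blk in blocks[1:]:
    parity = xor_blocks(parity, blk)            # ~25% extra data, not a copy

lost_index = 2                                  # pretend block 2 is unreadable
survivors = [blk for i, blk in enumerate(blocks) if i != lost_index]

rebuilt = parity
for blk in survivors:
    rebuilt = xor_blocks(rebuilt, blk)

print(rebuilt)  # b'IJKL' - recovered exactly from the parity plus the good blocks
```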
|
|
|
Post by pedrocols on Feb 3, 2016 15:30:46 GMT -5
The funny thing about all this is that they have been ripping people off all along. I don't understand why people are so upset about something that is not a new discovery.... It's pretty much like when your girlfriend is cheating on you and, when you finally see her with another man or woman, you're like "I knew it, I knew it."
|
|
|
Post by geebo on Feb 3, 2016 15:51:20 GMT -5
So who's upset?
|
|
|
Post by pedrocols on Feb 3, 2016 16:02:40 GMT -5
That is a very good question.
|
|
|
|
Post by vneal on Feb 3, 2016 20:56:55 GMT -5
So most of you are making a buying decision not based on what you hear. You are ripping off your system, not your pocketbook. All the audio magazines are wrong, right?
|
|
|
Post by pedrocols on Feb 3, 2016 21:18:41 GMT -5
People who are deceived don't know they are deceived because they are deceived!! This gave me flashbacks! My dad used to tell me that when I was five years old...
|
|
|
Post by KeithL on Feb 4, 2016 12:21:45 GMT -5
The bad thing is making a decision based on what you THINK you hear, or IMAGINE you hear, rather than on what's really there to hear. (Remember, you hear with your brain, not your ears - and, sadly, our human brains are rather easy to fool.)
|
|
|
Post by pedrocols on Feb 4, 2016 13:14:15 GMT -5
I see a smiley face...
|
|
|
Post by Boomzilla on Feb 4, 2016 13:52:07 GMT -5
My girlfriend is cheating with another girl? I want to be invited! ...but your point is taken - take ALL advertising with a grain of salt. Anyone trying to sell you something is NOT "a neutral advisor." The old saying "figures lie and liars figure" is also pertinent. But to give the advertising boys their due, which would YOU use if you wanted to sell something - data that makes your product look good, or data that doesn't? I don't think Audioquest is any more a fraud than a wide variety of other companies (in audio or elsewhere). I've read more audio-advertising mumbo-jumbo than I can remember and have spent a fair amount of time laughing at it. My audio hero, Mr. Paul Wilbur Klipsch, used to attend audio shows with a big button pinned to the inside of his jacket. When he heard some outrageous advertising claim (which happened frequently at audio shows), he'd flip open his jacket to reveal the big, yellow "BULLSHIT!" button. So use common sense when evaluating ANY advertising claim. DON'T base buying decisions about unauditioned equipment solely on specifications. Trust your ears. And remember the final bit of rustic homily: a fool and his money are soon parted. Boomzalala
|
|
|
Post by yves on Feb 4, 2016 16:09:50 GMT -5
Regarding "USB is simply not a great choice for transmitting audio": that depends. Firstly, S/PDIF (and AES/EBU) do not allow corrupted data to be re-transmitted. Asynchronous USB re-transmits it, and does so by design. Granted, this in and of itself does not guarantee great audio transmission - USB is not better by design - but it sure can be superior by implementation. The article linked below offers a few excellent explanations of why. www.thewelltemperedcomputer.com/KB/USB.html

Secondly, a well-engineered asynchronous USB 2.0 input interface in an external DAC can have a very low latency of only microseconds, and there are no dropouts. On top of that, real-time data delivery is not a requirement for listening to music files at home. I still own an almost six-year-old, cheap, and extremely slow netbook PC (Intel Atom N270 @ 1.60 GHz with only 1 GB RAM) running Windows 7 Starter. Playing 24/192 FLAC files through asynchronous USB 2.0 with foobar2000, it doesn't cause any dropouts, ticks, or anything of the sort (even with multiple graphical visualisations enabled in foobar2000). The only reasons I now play my music files on a more powerful, more modern notebook PC (Intel Core i7 4510U @ 3.20 GHz with 8 GB RAM) are practical convenience and the fact that it makes no audible difference to me, because the XMOS-based USB input interface of my DAC is pretty much the opposite of "not a great choice for transmitting audio". That is exactly why I chose it in the first place.
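A quick back-of-the-envelope calculation supports the netbook anecdote: even 24-bit/192 kHz stereo is a trickle of data compared with USB 2.0, and a modest playback buffer (the half-second figure below is assumed for illustration; the real value depends on the player's settings) gives the computer plenty of slack.

```python
# Rough arithmetic: 24/192 stereo versus USB 2.0, and a hypothetical buffer.
sample_rate = 192_000              # samples per second, per channel
channels = 2
bytes_per_sample = 3               # 24 bits

stream_rate = sample_rate * channels * bytes_per_sample
print(f"audio stream: {stream_rate / 1e6:.2f} MB/s")            # ~1.15 MB/s

usb2_capacity = 480e6 / 8          # USB 2.0 signalling rate, in bytes/s
print(f"USB 2.0 raw capacity: {usb2_capacity / 1e6:.0f} MB/s")  # 60 MB/s

buffer_ms = 500                    # assumed half-second playback buffer
buffer_bytes = stream_rate * buffer_ms / 1000
print(f"{buffer_ms} ms buffer = {buffer_bytes / 1e6:.2f} MB")
# As long as even a slow PC tops the buffer up faster than ~1.15 MB/s on
# average, there are no dropouts or clicks - real-time delivery isn't strained.
```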
|
|
|
Post by Boomzilla on Feb 4, 2016 16:26:20 GMT -5
"Fancy" DACs (and by "Fancy," I mean EXPENSIVE) use USB. I'm suspecting it isn't because it sounds worse...
|
|
|
Post by vneal on Feb 4, 2016 16:51:40 GMT -5
The speaker topic seems to generate more unbelievers than any other topic. I know a lot of people are cable skeptics, and I understand where you are coming from. I do have some decent products in my own system, and until I bought them I was a skeptic as well. Have I spent thousands? No... However, I do use Kimber Kable 12TC bi-wired for speaker cables and mostly Kimber Kable Heroes for interconnects, along with a few Emotiva cables, and I use a few Audioquest HDMI cables. Midline products, but still much better than average cables. With digital, yes, if the cable meets the HDMI standard it will work and pass audio and video (otherwise the HDCP handshake fails), but with a slightly better cable, particularly over long runs, there is less likelihood of errors. Errors can cause missing pixels before they cause a drop in handshake, as the "jitter correction" on the HDMI chipset makes up the lost data.
So if all cables are the same, by all means use rusty lamp cord.
|
|
|
Post by Boomzilla on Feb 4, 2016 17:03:09 GMT -5
I can hear differences in some cables. The two that I find to have the strongest audible "signature" are Kimber & Nordost (and in opposite directions). Are the differences worth lots of $$? Not to me. But they aren't (for the most part) ridiculously expensive, either.
The one thing that I WILL mention is that there are lots (and Lots and LOTS) of Chinese copies floating around on flea-bay and elsewhere selling for a fraction of what the genuine wires sell for & not providing a fraction of the quality. Be really cautious when buying "bargain priced" Kimber (in particular) wires from individuals or on eBay.
|
|
|
Post by geebo on Feb 4, 2016 17:03:44 GMT -5
"Fancy" DACs (and by "Fancy," I mean EXPENSIVE) use USB. I'm suspecting it isn't because it sounds worse... Nor does it mean it sounds better than SPDIF which the expensive DACs also have...
|
|