|
Post by MusicHead on Jan 29, 2016 7:25:31 GMT -5
There was a very interesting test comparing HDMI cables of several brands, in all lengths and price ranges. The guys sent the same video through two different cables into video acquisition cards (or a dual-input card, I can't remember that point), then used a real-time video filter to compute the difference between the two streams. Any strictly identical pixel would be white; any pixel that differed by any amount would be bright red. Guess what: whatever they compared, all the result images were completely white. In all the tests in real conditions they saw only ONE red pixel, in ONE image, which can be attributed to normal error. They only started seeing errors when they chained cables to reach over 7m of length, which is outside the HDMI spec (or was at the time; this was pre-HD). Since then my HDMI cable budget has never been a problem for me again! Can't find that link right now, maybe somebody else will remember? I think I posted it here in the past already: www.expertreviews.co.uk/tvs-entertainment/7976/expensive-hdmi-cables-make-no-difference-the-absolute-proof/page/0/2 And a follow-up article here: www.expertreviews.co.uk/tvs-entertainment/7976/expensive-hdmi-cables-make-no-difference-the-absolute-proof
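To make that pixel-comparison idea concrete, here is a minimal Python sketch of the same kind of test, assuming the two captured streams are available as numpy arrays of identical shape (the function name and colour choices are made up for illustration, not taken from the article):

import numpy as np

def diff_visualization(frame_a, frame_b):
    """Return an image that is white where the two captures are pixel-identical
    and bright red where they differ at all, plus the count of differing pixels."""
    if frame_a.shape != frame_b.shape:
        raise ValueError("frames must have identical dimensions")
    identical = np.all(frame_a == frame_b, axis=-1)   # True only if every colour channel matches exactly
    out = np.zeros_like(frame_a)
    out[identical] = (255, 255, 255)                  # identical pixel -> white
    out[~identical] = (255, 0, 0)                     # any difference at all -> bright red
    return out, int(np.count_nonzero(~identical))

An all-white output (zero differing pixels), frame after frame, is exactly the result the article reported for every cable within spec.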
|
|
|
Post by vneal on Jan 29, 2016 7:43:39 GMT -5
Sometimes it is a matter of scale. A high-end cable manufacturer using exotic materials and claiming they improve the product is really no different from a speaker manufacturer using exotic materials for its cabinets, or a component manufacturer using a nicer nameplate. We speak loudly with how we spend our dollars. If there is no perceived value added, don't purchase that product or model and it will go away.
|
|
|
Post by lionear on Jan 29, 2016 11:30:34 GMT -5
These are terrible articles! It's common for the layman to think of a digital signal as consisting of zeros and ones. But the zeros and ones are an abstraction, a "logic" level that does not exist in reality. In reality, a digital signal is a square-wave analog signal. It has a lot of imperfections: a finite rise time, overshoot, time taken for the voltage to settle, a finite slew rate. There will also be variations in amplitude and errors in timing (jitter).

At the dawn of digital transmission (the 1950s), engineers were puzzled that digital transmissions were prone to error. This cannot be cleared up by slowing the signal down, increasing the voltage, etc. The mathematical modeling of digital signals showed that it is impossible to avoid this error. You cannot avoid noise in an analog signal, and you cannot avoid error in a digital signal. There's no free lunch.

This led engineers to take a different approach: do error checking, discard the "words" that fail the check, and request that they be resent. This is, for example, what happens with the TCP/IP protocol: the data is in the IP portion, but you also send additional information in the TCP portion that allows the receiver to check the integrity of the data. There are other protocols, but all need a way to check the data.

When it comes to music and video, it's not practical for the data to be resent. The network will get clogged, or the device cannot go back and re-read the data (because the CD-ROM drive is not designed to read the data that way). Even if you store the data on a hard disk and go back and read it again, you're only postponing the issue. The error can never go to zero, no matter what you do. So a CD player will replay the previous "word": what might be "ABCD" at the source will go into the converter circuitry as "ABBD" if there's an error with "C". Older HDTVs will do the same thing. More modern ones may try to guess the value.

When it comes to HDMI, there are two different things going on. The sending component and receiving component will establish a "lock", and if this lock is lost, there won't be any data transfer. But even after the lock has been established, the data transfer is still going to be prone to error (see above). There's no way to avoid it.
Is it possible that a "better" cable can make a difference? Yes, because sending a square-wave analog signal can be affected by the wiring. High-frequency signals can do strange things. Intel discovered that, once bus speeds reached a certain level, the PCB tracks on computers became vulnerable to interference, and they had to hire engineers who knew about microwaves and do a lot of work on the PCB tracks. This is "old school" analog engineering, and it would be a mistake to ignore this stuff just because we're talking about a computer. What was the bus speed that started to give Intel trouble? 50kHz - the CD sampling rate.

BTW: I've always wondered why the HDMI interface establishes the lock. I suspect it has more to do with copy protection strategies than data integrity. The first phase is to lull us into accepting HDMI as "technically superior". The second step is to activate restrictions.
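The check-and-resend idea described above can be sketched in a few lines of Python. This is purely illustrative: CRC32 stands in for a protocol's real check data, and noisy_channel and the retry count are invented for the example.

import random
import zlib

def noisy_channel(data: bytes) -> bytes:
    """Stand-in for an imperfect link: occasionally flips one bit in transit."""
    out = bytearray(data)
    if out and random.random() < 0.1:
        out[random.randrange(len(out))] ^= 0x01
    return bytes(out)

def send_with_retry(payload: bytes, channel, max_attempts: int = 5) -> bytes:
    """Send payload, let the receiver verify a CRC32 check value,
    and resend whenever the check fails (a greatly simplified TCP-like idea)."""
    check = zlib.crc32(payload)             # the extra integrity information sent along with the data
    for _ in range(max_attempts):
        received = channel(payload)         # may arrive corrupted
        if zlib.crc32(received) == check:   # the word passes the error check
            return received
        # failed the check: discard the word and ask for it again
    raise IOError("gave up after too many corrupted transmissions")

print(send_with_retry(b"ABCD", noisy_channel))   # almost always prints b'ABCD'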
|
|
|
Post by geebo on Jan 29, 2016 11:55:47 GMT -5
But we're talking about a 10dB difference in output levels. No way.
|
|
|
Post by MusicHead on Jan 29, 2016 12:19:16 GMT -5
lionear, you are of course correct: digital "ones" and "zeros" are actually voltage levels traveling in a cable. However, in a pure analog transmission the voltage, shape, rise time, etc. of the signal ARE the information. Since an analog signal, by definition, has infinite components, even a minuscule change can be considered a distortion (setting aside for a moment whether that can be audible or visible). In a digital transmission, on the other hand, the analog signal is encoded with the information; it is not the information itself. While the square/rectangular analog signal will for sure be distorted by the cable, it has to get REALLY distorted before the receiver on the other end of the line loses the ability to decode the information. That is why, for example, Ethernet cable runs are generally limited to 100 meters, and why for long HDMI runs you have repeater chips to "regenerate" the signal.

There is no question that one cable can be worse than another, but whether in the analog or digital domain, to have an audible or visible impact on the signal a cable would have to be of really abysmal quality, extremely long, or just not the right cable for the application (not spec'd to handle a high data rate, for example). I think the point of the article is that you do not have to spend a fortune on a boutique cable to preserve signal integrity. It was not pretending to be an IEEE-level paper on digital transmission. They set up what seems to be a well-thought-out test system and presented the results. There is always something that can be argued about methodology, equipment, expectations and interpretation of results. That is what makes our hobby so fun.
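A tiny sketch of why a visibly distorted square wave can still decode into exactly the same bits (made-up sample values and a single midpoint threshold, which is far cruder than a real HDMI or Ethernet receiver):

def recover_bits(samples, threshold=0.5):
    """Slice an analog waveform back into logic levels:
    anything above the threshold decodes as 1, otherwise 0."""
    return [1 if v > threshold else 0 for v in samples]

clean     = [0.0, 1.0, 1.0, 0.0, 1.0, 0.0]           # what the transmitter sent
distorted = [0.08, 0.83, 1.12, 0.21, 0.77, -0.05]    # the same signal after cable losses, ringing and noise

assert recover_bits(clean) == recover_bits(distorted)   # identical data despite the distortion

Only when the distortion gets bad enough to push a sample across the threshold does the decoded data change, which is why digital degradation tends to be abrupt rather than gradual.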
|
|
|
Post by copperpipe on Jan 29, 2016 12:27:31 GMT -5
The difference between Ethernet and analog is that the protocol on top of Ethernet handles the errors. You're definitely getting Ethernet errors, but the TCP/IP protocol is built to detect them and ask the sender to try again when they occur. Therefore it's guaranteed that what you send down the wire is what you get; the only way you might notice errors is a slight reduction in throughput. If you're just sending a voltage signal down the wire and not testing for errors or correcting them, then proper shielding (and other properties) are much more crucial. I've often asked engineers why nobody has designed a DAC that works this way. We have USB hard drives that don't lose data, we don't have jitter problems with USB disks, so why can't someone design a DAC that does this? I would imagine the driver/software side of things would change significantly, but still.

Here's why: if your computer wants a Microsoft Word file from the hard disk and it gets an error, it will ask the hard disk to send the data again. There's no problem with the hard disk doing extra work, or with you waiting a little longer for the file to open. The same thing happens if your computer requests a web page; the actual file has very little data compared to a music or video file. With music and video, the computer has to work differently because it needs to keep everything flowing, and it's up to the receiver to figure out what to do with the errors. The Red Book standard plays the previous "word" again, so what started out as "ABCD" will get into the DAC as "ABBD" if there was an error with "C". More modern systems, with greater computing power, will guess the value. For example, if the word before the error was "white" and the word after it is "black", it's reasonable for a modern TV to guess that the missing value is 50% grey. (The HDMI cable can maintain lock and still give you an error in the data.) This is the only way to deal with it, because very robust math shows that the error can NEVER be zero.

Re-sending data can be impractical: the broadcast control booth at the Super Bowl would get overwhelmed by requests to resend data because the game is going out to many users. The servers at YouTube, Netflix, iTunes, etc. also want to see a minimal load from each user, and the best way to do that is to not have any requests to resend data. And when you re-read the data, the uncertainty can only be reduced, not eliminated. If you read some data four times and got four A's, then it's a good bet that the value is A. If you got three A's and one B, it's reasonable to conclude that it's an A and not a B. If you got two A's and two B's, then what do you do? You have to make a choice between A and B, or play the previous value again. Since there's doubt about the value, the safest strategy can be to play the previous value again; so we're back to square one. It is only when the number of errors exceeds a threshold that the error is considered fatal: the CD player will skip to the next track, or the image will get pixelated for a few moments, or the TV will drop to a lower resolution and switch back to high resolution when it can.

That's all kind of interesting, but it's not really what I'm talking about; I'm just talking about a USB DAC. Clearly the USB protocol, despite its jitter and other issues, can be worked "on top of" to support bit-perfect file transfers from a local computer to a USB-attached hard drive. Why can a DAC engineer not design a DAC this way?

Yes, at that point it's no longer a standard USB DAC that works with Windows' or the Mac's built-in USB audio support, but I'm sure it's possible to write drivers to handle it all.
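The two concealment strategies described in the quoted reply can be sketched like this (conceal_repeat and conceal_interpolate are invented names, and real players and TVs operate on decoded samples or pixels rather than strings; this only shows the logic):

def conceal_repeat(words, bad):
    """Red Book style: replay the previous good word when one fails its check."""
    out = []
    for i, w in enumerate(words):
        out.append(out[-1] if i in bad and out else w)
    return out

def conceal_interpolate(values, bad):
    """The 'modern TV' style: guess a bad value as the average of its neighbours."""
    out = list(values)
    for i in bad:
        if 0 < i < len(out) - 1:
            out[i] = (out[i - 1] + out[i + 1]) / 2
    return out

print(conceal_repeat(list("ABCD"), bad={2}))              # -> ['A', 'B', 'B', 'D']
print(conceal_interpolate([255, 255, None, 0], bad={2}))  # white, white, ?, black -> [255, 255, 127.5, 0]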
|
|
|
Post by yves on Jan 29, 2016 13:15:36 GMT -5
Like I already said earlier in the thread, your assertion that engineers haven't already designed USB DACs this way is completely false. Examples of companies that have pioneered this particular USB audio input technology include M2Tech, Wavelength, and dCS. The Emotiva Pro Stealth DC-1 uses this same type of USB input; it's called asynchronous USB.
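For readers wondering what "asynchronous" buys you, here is a toy model of the idea, not the real USB Audio Class protocol (class name, watermark and packet size are all made up): the DAC drains a buffer on its own low-jitter clock and decides for itself when to ask the host computer for more samples.

from collections import deque
import itertools

class AsyncUsbDacModel:
    """Toy model of an asynchronous USB DAC input: the DAC's local clock
    consumes samples from a buffer, and the DAC (not the computer)
    requests the next packet whenever the buffer runs low."""

    def __init__(self, host_fetch, low_watermark=512, packet_size=256):
        self.buffer = deque()
        self.host_fetch = host_fetch      # callable returning the next packet of samples from the host
        self.low_watermark = low_watermark
        self.packet_size = packet_size

    def next_sample(self):
        # Called once per tick of the DAC's own clock.
        if len(self.buffer) < self.low_watermark:
            self.buffer.extend(self.host_fetch(self.packet_size))   # feedback to the host: "send more"
        return self.buffer.popleft() if self.buffer else 0           # 0 = silence on a buffer underrun

source = itertools.count()
dac = AsyncUsbDacModel(lambda n: [next(source) for _ in range(n)])
print([dac.next_sample() for _ in range(5)])   # -> [0, 1, 2, 3, 4]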
|
|
|
Post by lionear on Jan 29, 2016 14:22:49 GMT -5
I think there are big differences in each interface. I think USB is not set up for fast access, or for random access, so it's hard for the receiver to request the data again. The alternative is to copy the data into something that allows fast random access: RAM or a hard disk. Loading a CD into RAM can definitely be done; RAM prices are nothing like what they were in the 1980s, when the CD standard hit the market. You can also copy the data onto a hard disk and read it locally, which is what the Sony HAP-Z1ES does. But of course, now you're putting a full-blown computer right in the same vicinity as the DAC, and you have to make sure that the DAC isn't affected by the computer.
|
|
|
Post by lionear on Jan 29, 2016 15:00:26 GMT -5
Fully agree - all this is fun, and in the end it's all about setting the tech stuff aside and enjoying the music. True, the digital data is encoded so that the exact voltage level doesn't matter as much as it would in an analog transmission; one can use a range of voltages to signify a 0 and another range to signify a 1. And when it comes to things like long-distance phone calls and cell phones, digital audio is a spectacular success. It's also great for movies - I think George Lucas was appalled by how much film degraded when it was repeatedly played in cinemas, and pushed the move to digital projectors.
|
|
|
Post by yves on Jan 29, 2016 16:07:42 GMT -5
Compared to eSATA and FireWire 800, the random access time of a typical hard drive over USB 2.0 is only about 0.3 milliseconds slower. It's not hard for the receiver to request data again, and it all happens in the blink of an eye anyway, so I'm not sure what you're getting at: the buffer memory in an asynchronous USB 2.0 input interface inside an external DAC still contains enough data to make perfectly sure no audio dropouts will result. The buffering causes some added latency, but for listening to music at home, people shouldn't need to obsess over the fact that they have to wait a quarter of a second or so before the music begins to play. If you are monitoring live performances in the control room of a recording studio, that's when increased latency does become important.
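For a sense of scale, a back-of-the-envelope calculation of what a quarter-second buffer holds (assuming CD-resolution audio; actual buffer sizes vary by design):

sample_rate = 44_100       # samples per second, per channel
channels = 2
bytes_per_sample = 2       # 16-bit
buffer_seconds = 0.25

buffer_bytes = int(sample_rate * channels * bytes_per_sample * buffer_seconds)
print(buffer_bytes)        # 44,100 bytes, roughly 43 KiB - a trivial amount of memory for a modern DAC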
|
|
|
Post by jmilton on Jan 29, 2016 16:32:08 GMT -5
|
|
|
Post by pedrocols on Jan 29, 2016 16:38:03 GMT -5
So should I spend my precious time reading this?
|
|
|
Post by jmilton on Jan 29, 2016 16:43:08 GMT -5
Well......no. But it is a media frenzy. AQ is going to be hard pressed to get their reputation back, whether the video was "faked" or not. Sort of like politics...the big winner here is Monoprice.
|
|
|
Post by garbulky on Jan 29, 2016 16:52:25 GMT -5
So what the owner of AudioQuest said was basically, "It wasn't me. And shame on whoever it was."

"My personality is such that I'm always crying 'foul' over unrealistic claims, about representations of video or photographic differences which are obviously false, impossible laundry detergent claims or whatever," said the owner of the cable company that sells a $6,800 power cable. www.amazon.com/AudioQuest-NRG-WEL-Signature-Series/dp/B0055OM9WS
|
|
|
Post by pedrocols on Jan 29, 2016 17:00:10 GMT -5
People with a lot of $$$ will still buy this product regardless...
|
|
|
Post by Loop 7 on Jan 29, 2016 17:43:25 GMT -5
"Cable-gate"
"Audioquest-gate"
|
|
|
Post by garbulky on Jan 29, 2016 17:44:02 GMT -5
There's been some wire-tapping going on.
|
|
|
Post by KeithL on Jan 29, 2016 18:03:26 GMT -5
While a lot of what you say is true, you have omitted a few small but important details. For example, the data on a CD is stored with extra information specifically added to allow correction of data errors. If the data being read from the surface of the disc does in fact contain errors (which is not at all unlikely), there are two stages of "perfect error correction" where the extra correction data will be used to repair or replace the missing or damaged data PERFECTLY. This process happens inside the CD player and is totally transparent to the listener or user of the data. Only if we have damage so serious (equating to a hole larger than 2.5mm in the surface of the disc) will the data coming from the drive and its electronics be less than absolutely perfect. On a computer, if a single error remains after this correction - a single incorrect bit - the disc will stop playing with an error. Most audio players include a third level of "error correction", intended only as a last-ditch option after the first two stages have failed, which will fill in the gap with interpolated (guessed) data. So, no, unless there's something terribly wrong, even though the data read from your CD may contain flaws, the data that the drive passes on to you will in fact be perfect. (And most of the better computer audio ripping programs verify this by comparing a checksum of the rip against a database, thus confirming that what you have is in fact perfect.)

Of course, even though we know that the data is perfect, there is still the possibility of timing errors - jitter. However, there is a simple way to avoid any ill effects from that as well: buffer the data, create a clock locally that you know is free from any significant amount of jitter, and then play the data using this new and near-perfect local clock. Most modern DACs do in fact use some variation on this idea. (And, yes, if the cable introduces jitter, and you've chosen a DAC that has no mechanism to avoid the problem and so is sensitive to that jitter, then there might actually be an audible difference.) With an asynchronous USB DAC, the clock itself is generated by the DAC, which requests data from the computer as needed; this ensures a near-perfect clock as long as the computer can keep up. (And, yes, if the computer fails to keep up, there may be data dropouts, which may affect the sound quality. Luckily, a high-speed USB connection is many times faster than necessary for sending flawless audio data. And, luckily, there are tools which can be used to spot-check your system and confirm that, at least when the test is run, the system is delivering perfect data. If your computer can deliver data for several minutes with zero errors, it's probably safe to assume that this is usually what's happening.)

Your main error is in your claim that "it's impractical for data to be resent". With modern network data transmission, the exact opposite is true: it is EXPECTED that a certain amount of data will become damaged, and EXPECTED that either that data will have to be re-sent, or that extra data that can be used to perfectly repair errors will have to be routinely sent as a precaution. (On an Ethernet network we can even calculate, based on network speed and the current amount of traffic, what percentage of the data will be garbled and how much of it will need to be retransmitted.)
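To illustrate the rip-verification step just mentioned, here is a minimal Python sketch of the shape of the check. A plain SHA-256 is used only for illustration; AccurateRip-style databases use their own track-level checksum algorithms, and the file path and reference value below are hypothetical.

import hashlib

def file_checksum(path, chunk_size=1 << 20):
    """SHA-256 of a ripped track, read in chunks so large files need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

KNOWN_GOOD = "0" * 64   # placeholder for a reference checksum fetched from a rip database

if file_checksum("track01.wav") == KNOWN_GOOD:      # hypothetical file name
    print("Rip matches the reference checksum: bit-perfect")
else:
    print("Checksum mismatch: this rip differs from the reference")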
And, with any CD that complies with the Red Book standard, a significant amount of extra data, to be used for error correction purposes if necessary, is part of the data stored on every disc. (You are correct that some low-bandwidth connections do in fact choose to allow or ignore errors. This generally happens because of the tradeoff between data bandwidth and quality; in other words, they've decided that the loss in quality due to uncorrected errors will be less noticeable than the loss in quality that would result from increasing the compression enough to make room for the correction data.)

With an HDMI cable, the simple reality is that "subtle degradation of the signal" isn't very likely to occur, if it's even possible. If the signal is less than perfect, then the picture simply stops. (The only "grey area" there is that, if the picture stops for a small fraction of a second, your TV might fill in the gap... or try to and fail... or try to recover and use the undamaged portion of the data. This is what happens when your cable picture suddenly becomes all blocky and then comes back. However, you should notice that this is obvious when it happens.) Makers of expensive cables would like to convince you that, even though a serious problem produces obvious picture dropouts and black screens, some far more subtle problem may be "degrading your signal" in some harder-to-detect manner. (It's a lot like those gasoline companies who would like you to believe that, even though your car runs equally well on any brand of gas, and your car always shows perfect health when it gets its tune-up, there is some intangible way in which their gas is still better.) Now, it is theoretically possible that some monitors may respond poorly to jitter on the HDMI signal, and may benefit from a signal with less jitter, but I haven't even seen proof of that. As someone said in another post, if someone wants you to pay extra money for an HDMI cable that's "better", then they should be able to show measurements demonstrating that it produces fewer dropped frames, or fewer bit errors, or sharper pixels. And, if they want to claim that it reduces jitter, and that this is a significant improvement, then they should be able to show both jitter measurements to back up that claim, and measurements or statistics showing which TV sets or pre/pros are affected by that improvement.

And, yes, I'm sure your anecdote about Intel's problems with PCB layout is entirely authentic... however, I am forced to note that my current computer runs at 3GHz (that's 3,000 megahertz), and it seems able to operate at that speed for hours, and even days, without a single error... so they seem to have gotten that problem pretty thoroughly licked. But, yes, it would certainly be a bad mistake to ignore this stuff when designing a computer or laying out a PCB... that's just not what we're talking about here. (And, yes, I do think those articles overstated their case... I wouldn't have gone further than to say that the differences between a cheap HDMI cable and an expensive one ALMOST NEVER make a SIGNIFICANT difference - unless the cheap cable is so badly made that it fails to comply with the current HDMI standard.) I'm also inclined to agree with you that, as far as the industry is concerned, the MOST IMPORTANT aspect of HDMI is its support of "strong copy protection" (and I haven't actually seen anyone in the industry denying that).
The added convenience and reduction in compatibility problems, while real, are most certainly "secondary benefits".
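Going back to the buffer-and-local-clock point above, a toy Python illustration of re-clocking (the timestamps are simulated numbers; a real DAC does this in hardware):

def reclock(samples, first_arrival, period):
    """Schedule samples that arrived with jittery timing for playback
    on an evenly spaced local clock; the data itself is untouched."""
    return [(first_arrival + i * period, s) for i, s in enumerate(samples)]

jittery_arrivals = [0.00, 1.04, 1.97, 3.02]     # milliseconds, wobbling around a 1 ms period
samples = [10, 20, 30, 40]
print(reclock(samples, jittery_arrivals[0], period=1.0))
# -> [(0.0, 10), (1.0, 20), (2.0, 30), (3.0, 40)]   same samples, jitter-free timing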
|
|
|
Post by Loop 7 on Jan 29, 2016 18:09:25 GMT -5
Awesome.
|
|
|
Post by monkumonku on Jan 29, 2016 18:10:43 GMT -5
While this purported fraudulent video demo was done by a third party, not under the direction of AQ, the owner was still aware of its existence, knew the results, and thought the results were not plausible. Yet he did nothing to investigate it. That's like watching a crime being committed (which it was) and looking the other way. As for AQ being "hard pressed" to get their reputation back, I dunno... I'm thinking that for many, what happened just confirms their reputation. And as for the believers in the cable, I doubt there will be much impact on them, especially since the video was not made by AQ itself.
|
|