Post by sahmen on Jun 3, 2019 10:13:38 GMT -5
sahmen - will you be selling your Sonore Ultrarendu once your Optical Rendu arrives?

Hi Loop: I'm taking a rather cautious "wait & see" approach to this entire upgrade scenario, beginning with the optical module (paired with the ultraRendu) before actually upgrading to the opticalRendu... That is because I need to find out how well the opticalRendu is received before taking the plunge, all the more so because it is being released with a set of accessories that still need to be tested, reviewed, and reported on by end-users. That "wait & see" approach is meant to let me test the waters somewhat before actually jumping on board. To be fair, early reviews of the opticalRendu are quite favorable so far, but we still do not know everything yet, since the full roll-out of related components only started yesterday (with the release of the optical module, the final link in the chain on the Sonore side of things). I am also waiting for the EtherRegen, which might turn out, when finally released, to be a better substitute for Sonore's own optical module as an FMC unit... So there is still a lot of experimentation to do and read about before I make any firm decisions about which new toys from this rollout to get, and which ones to ignore...

With that said, I already have a sort of tentative answer to your question... So far I have only ordered an optical module to try out with my ultraRendu, but I strongly suspect that I shall have at least one opticalRendu in my system before the end of the 3rd quarter of this year... Eventually, I intend to experiment to see how close in performance the pairing of an optical module or EtherRegen with the ultraRendu can bring it to the opticalRendu (also paired with an EtherRegen or optical module)... If the differences between the two set-ups only consist of small subtleties (as I suspect they might), I shall keep the ultraRendu for use in a 2nd system.
I am going to consider selling the ultraRendu only if the differences are very large, to the point where I feel the ultraRendu has been left far behind the curve by the opticalRendu (I have to say that I do not expect that to be the case, but I'm not sure)... That is what I am thinking now, although this feeling might change as I continue to read reviews of the new components and the efficacy of the new pairings one might experiment with...
KeithL
Administrator
Posts: 9,941
Post by KeithL on Jun 3, 2019 10:22:29 GMT -5
I'm inclined to agree there...
It seems like a really cool solution to a problem that luckily doesn't actually exist anyway. And I'm sure it will make perfect sense to audiophiles who don't know much about networking.
(The upside is that it is unlikely to actually hurt anything other than your wallet.)
Yes, IF you have a noise problem, then adding galvanic isolation at some point near the end of the signal chain should help.
However, in general, in order to minimize noise issues, you would want to isolate the network stuff from the audio stuff. And the best way to do this would be to put galvanic isolation either in or right before the DAC.
Putting an optical link between two pieces of network gear really doesn't seem to contribute significantly to this goal. Also, as LuisV mentioned already, most consumer switches don't have an optical port to connect this to.
(Now, if you actually have a switch with optical ports, I guess you might as well use them, and this may be the only game in town there.)
Just to be perfectly fair here..... I have no reason whatsoever to suspect that it won't work - and sound - just fine.
There is no downside to an optical network connection - except cost.
However, that doesn't answer the basic question.... of whether or not it's actually a useful improvement over anything else.
This is playing to a new neurosis within the CA community. They claim that using a fiber Ethernet cable improves audio performance. Not sure I would invest in this fad.
KeithL
Administrator
Posts: 9,941
Post by KeithL on Jun 3, 2019 10:34:20 GMT -5
It's been around for years... In the past, if you needed a really fast network or other data connection, you went with fiber...
It was used both for networking and for super-fast connections to drive arrays... And, nowadays, most Internet, cable TV, and telephone backbones are fiber... In terms of audio data it's equivalent to using a 12" stainless steel high pressure gas main to fill a shot glass... But it should work just fine - and, at the low end, it's gotten affordable... (And, yes, being optical, it has perfect galvanic electrical isolation...)
However, since things like gigabit Ethernet have also gotten so cheap so fast, you still don't see much fiber used locally...
They're simply giving you the option of using a standard fiber Ethernet connection instead of a copper one.
To take some liberties with paraphrasing a quote... "If you make it, and it sounds cool enough, some audiophiles will buy it."
It's an Optical/Fiber Ethernet connection to USB output... not an optical S/PDIF output. Thanks. I am not familiar with that methodology. Hmm.
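As a rough sanity check on Keith's "gas main to fill a shot glass" comparison, here's a back-of-the-envelope calculation. The 1 Gbps link speed and the two audio formats are just illustrative choices, not anything specific to the products discussed here:

```python
# Back-of-the-envelope: how much of a 1 Gbps link does uncompressed
# stereo PCM actually use? (Illustrative numbers only.)

def audio_bitrate(sample_rate_hz: int, bits_per_sample: int, channels: int = 2) -> int:
    """Raw PCM bitrate in bits per second."""
    return sample_rate_hz * bits_per_sample * channels

link_bps = 1_000_000_000  # gigabit Ethernet / entry-level fiber

for name, rate, bits in [("CD (44.1 kHz / 16-bit)", 44_100, 16),
                         ("Hi-res (192 kHz / 24-bit)", 192_000, 24)]:
    bps = audio_bitrate(rate, bits)
    print(f"{name}: {bps / 1e6:.2f} Mbps, {100 * bps / link_bps:.2f}% of the link")
```

Even 192 kHz / 24-bit stereo fills well under 1% of the pipe, which is the substance of the shot-glass remark.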
novisnick
EmoPhile
CEO Secret Monoblock Society
Posts: 27,223
Post by novisnick on Jun 3, 2019 13:09:41 GMT -5
I’ve been using soTm’s optical input on my sNH-10G Audio Switch Hub for a few months now. Just haven’t had time to do a review. Long story short, for the moment there has been a noticeable improvement in SQ of streamed music via Roon. A fuller-bodied sound.
Post by sahmen on Jun 5, 2019 9:09:41 GMT -5
[Quoting KeithL's post of Jun 3, above.]

KeithL: I know this might never happen, but I think you should really have a conversation with John Swenson about the sonic benefits of "isolation" in digital audio. Not to worry: in the unlikely event of that ever happening, I would simply request that you both permit me to be a fly on the wall, with a notepad. It will be my honor to learn from you both as you sort out your differences of opinion on this subject... For starters, here's what John has to say about the advantages of using an optical module (an FMC unit) with the ultraRendu and the opticalRendu:

Link: audiophilestyle.com/forums/topic/55217-sonore-opticalrendu/?do=findComment&comment=963599

"The understanding of "isolation" in digital audio has been my passion for at least 10 years. There is a LOT of misunderstanding on the subject floating around in audio circles. Here is a quick summary of my current understanding and how the current products fit in with this.

There seem to be TWO independent mechanisms involved: leakage current and clock phase noise. Various amounts of these two exist in any system. Different "isolation" technologies out there address one or the other, but very rarely both at the same time. Some technologies that attenuate one actually increase the other. Thus the massively confusing information out there.

Leakage current is a property of power supplies. It is the leakage of AC mains frequency (50/60 Hz) into the DC output. It is usually common mode (i.e. it exists on BOTH the + and - wires at the same time), which makes it a bit difficult to see. There seem to be two different types: one that comes from linear supplies and is fairly easy to block, and an additional type that comes from SMPS and is MUCH harder to block. An SMPS contains BOTH types. They are BOTH line frequency.
Unfortunately, in our modern times, where essentially all computer equipment is powered by SMPS, we have to deal with both leakage types coming down cables from our computer equipment. There are many devices on the market (I have designed some of them) for both USB and Ethernet; most can deal with the type from linear supplies, but only a few can deal with the type from SMPS. Optical connections (when the power supplies are completely isolated from each other) CAN completely block all forms of leakage; it is extremely effective. Optical takes care of leakage, but does not deal with the second mechanism.

Clock phase noise: Phase noise is a frequency measurement of "jitter" - yes, that term that is so completely mis-understood in audio circles that I'm not going to use it. Phase noise is a way to look at the frequency spectrum of jitter; the reason to use it is that there seems to be fairly decent correlation with sound quality. Note this has nothing to do with "pico seconds" or "femto seconds". Forget those terms; they do not directly have meaning in audio. What matters is the phase noise. Unfortunately, phase noise is shown on a graph, not a single number, so it is much harder to directly compare units. This subject is HUGE and I'm not going to go into any more detail here. Different oscillators (the infamous "clocks" that get talked about) can have radically different phase noise. The level of phase noise that is very good for digital audio is very difficult to achieve and costs money. The corollary is that the cheap clocks used in most computer equipment (including network equipment) produce phase noise that is very bad for digital audio.

The important thing to understand is that ALL digital signals carry the "fingerprint" of the clock used to produce them. When a signal coming from a box with cheap clocks comes into a box (via Ethernet or USB etc.) with a much better clock, the higher level of phase noise carried on the data signal can contaminate the phase noise of the "good" clock in the second box. Exactly how this happens is complicated; I've written about this in detail if you want to look it up and see what is going on. The contamination is not complete: every time the signal gets "reclocked" by a much better clock, the resulting signal carries an attenuated version of the first clock layered on top of the fingerprint of the second clock. The word "reclocked" just means the signal is regenerated by a circuit fed a different clock. It may be a better or a worse clock; reclocking doesn't always make things better!

As an example, if you start with an Ethernet signal coming out of a cheap switch, the clock fingerprint is going to be pretty bad. If this goes into a circuit with a VERY good clock, the signal coming out contains a reduced fingerprint from the first clock layered on top of the good clock. If you feed THIS signal into another circuit with a very good clock, the fingerprint from the original clock gets reduced even further. But if you feed this signal into a box with a bad clock, you are back to a signal with a bad fingerprint. The summary is that stringing together devices with GOOD clocking can dramatically attenuate the results of an upstream bad clock.

The latest devices from Sonore take on BOTH of these mechanisms that affect sound: optical for blocking leakage, and multiple reclocking with very good clocks. The optical part should be obvious. A side benefit of the optical circuit is that it completely regenerates the signal with a VERY low phase noise clock; this is a one-step reclocking. It attenuates effects from upstream circuits but does not completely get rid of them. This is where the opticalModule comes into play: if you put an opticalModule in the path to the opticalRendu, you are adding another reclocking with VERY good clocking. The result is a very large attenuation of upstream effects. It's not completely zero, but it is close. The fact that the opticalRendu is a one-stage reclocking (which leaves some effects from upstream circuits) is why changing switches etc. can still make a difference. Adding an opticalModule between the switch and opticalRendu reduces that down to vanishingly small differences.

So an opticalModule by itself adds both leakage elimination and significant clock-effects attenuation. TWO opticalModules in series give you the two-level reclocking. An opticalRendu still has some significant advantages over, say, an ultraRendu fed by a single opticalModule; the circuitry inside the opticalRendu has been improved significantly over the ultraRendu (it uses new parts that did not exist when the ultraRendu was designed). In addition, the opticalRendu has the reclocking taking place a couple of millimeters away from the processor, which cuts out the effects of a couple of connectors, transformers, and cable. The result is that the opticalRendu has some significant advantages. An opticalModule feeding an ultraRendu does significantly improve it, but not as much as an opticalRendu. So you can start with an opticalModule, then when you can afford it add an opticalRendu, also fed by the opticalModule, and get a BIG improvement.

I hope this gives a little clarity to the situation.

John S.
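John's "attenuated fingerprint" cascade can be caricatured in a few lines of arithmetic. This is a toy model with made-up numbers - the 40 dB per-stage attenuation and the residual floor are pure assumptions for illustration, not measurements of any Sonore product:

```python
# Toy model (not a measurement): each reclocking stage attenuates the
# upstream clock's phase-noise "fingerprint" by some factor, while the
# stage's own (much cleaner) clock sets a residual floor.
# All numbers below are invented purely for illustration.

def reclock(upstream_noise: float, attenuation_db: float, own_floor: float) -> float:
    """Residual noise after one reclocking stage (simple additive model)."""
    attenuated = upstream_noise * 10 ** (-attenuation_db / 20)
    return attenuated + own_floor

noise = 1.0  # arbitrary units: a "bad" switch clock
good_stage = dict(attenuation_db=40.0, own_floor=0.001)

for stage in range(1, 4):
    noise = reclock(noise, **good_stage)
    print(f"after stage {stage}: residual = {noise:.6f}")
```

In this model the residual converges toward the final stage's own floor: each extra good stage helps, but the last clock dominates - which is consistent both with John's "attenuated, not zero" claim and with Keith's point below about what a complete re-clock achieves.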
Post by Loop 7 on Jun 5, 2019 10:01:35 GMT -5
sahmen -- your recent post was EXTREMELY informative! Cleared up a few things about which I was wondering, specifically the fingerprint of the clock.
KeithL
Administrator
Posts: 9,941
Post by KeithL on Jun 5, 2019 14:59:34 GMT -5
To be quite candid, some of what he says makes good sense, but some of it only makes partial sense, and only in certain contexts. (Although, from the standpoint of "marketing to common audiophile mythology", it all plays very well.)

First off, his explanation about leakage current and leakage noise seems to be perfectly reasonable... Those things really exist, they can cause problems with downstream circuitry, and an optical connection will entirely eliminate them... This is a long-winded way of saying that "an optical connection provides perfect galvanic isolation". (Although I'm not quite convinced that this is as significant in most situations as he believes it is...)

However, the idea that DIGITAL DATA can "retain a fingerprint of a bad clock" rates right up there on the credibility scale with homeopathy. (To be perfectly fair, it's an interesting way to look at, and perhaps to explain, certain real things... but, as stated, the basic premise is simply not true. There is no "attenuated version of the original clock carried on the data" unless your particular circuitry is mixing things into the signal that don't belong there.) DIGITAL DATA IS A STRING OF ONES AND ZEROS. THAT DATA IS EITHER CORRECT OR NOT. IT IS ABSOLUTELY POSSIBLE THAT A BAD CLOCK COULD CAUSE YOU TO READ THOSE ONES AND ZEROS INCORRECTLY. IT CANNOT "STILL BE CORRECT BUT SOMEHOW BE SUBTLY ALTERED". THE CLOCK IS A SEPARATE ENTITY FROM THE DATA. (The only spot where the relationship between them is critical is at the DAC interface itself.)

The "fingerprint effect" he is talking about is what happens when you start out with a signal that is associated with a flawed clock... If you then INCOMPLETELY re-clock that signal, some of the flaws of the original clock may "leak through" into the new clock... (The so-called "fingerprint" is an illusion produced by this failure to completely eliminate the original flawed clock.) However, if you COMPLETELY RE-CLOCK the signal, then it will again be a new and perfect signal... To be fair, actually doing so can be difficult, and may in fact quite often NOT be done correctly. However, if a digital signal is completely and properly re-clocked, then there is no possibility that any trace of any flaws present in the original clock will remain. (Therefore, doing one "perfect re-clocking step" is exactly as good as doing it a million times.) Note that, in the past, many "re-clocking methods" were not perfect or complete. (And so, in that situation, doing it over and over again could actually produce a small incremental improvement each time.)

For example: Let's say we're talking about a standard digital audio signal... in PCM format... on a coaxial cable. It is quite true that the signal is likely to acquire some interesting clock jitter of assorted types. And, if you re-clock it with an old-style phase-locked loop, you will reduce but not entirely eliminate all of those flaws. Each PLL creates a "new clock", which is actually "locked" to the original clock, and so is affected by it to a degree. Therefore, each PLL stage acts as a sort of filter to "reduce the impurities in the clock as it passes through". Even if you use multiple staged PLLs, because each one uses a clock derived from the previous stage, some of the original error "leaks through". (Each PLL reduces the jitter by some number of dB, which is different for different frequencies, and for different types of jitter.) However, unfortunately, each PLL stage also introduces its own small quantity of new errors.

Now, let's look at a modern ASRC (asynchronous sample rate converter)... An ASRC does some very cool stuff, and the internal workings are remarkably complex, but underneath it all it basically works as a sort of super-duper PLL... The bottom line is that, while an ASRC reduces jitter a lot, and much better than a PLL, it doesn't entirely remove it. (The normal reduction, quoted by Analog Devices for the AD1896 we use, is something over 40 dB across the audio band. Paraphrased loosely, they claim that the difference between the output of the ASRC and an equivalent theoretical perfect signal is below about -130 dB.)

HOWEVER, the situation is rather different when you're talking about an Ethernet-to-USB bridge. In that sort of device, the actual data is received, stored in a buffer, then retrieved from the buffer and recreated - WITH AN ENTIRELY NEW CLOCK. (In theory, you could attach a hard drive to that interface, send the audio to it via Ethernet today, and play it back tomorrow, or next week.) The new clock should be entirely independent of the original Ethernet clock - so there should be no opportunity for any "fingerprint" to "leak through". (He is entirely correct that the clock used by Ethernet switches is far from "audio quality" - which is why it isn't used for the audio signal.) It is possible that noise, or even the Ethernet clock itself, could somehow affect the circuitry inside the converter. And, if so, it could cause flaws in the newly generated output data clock. (And, if so, it certainly would be an engineering problem, and require an engineering solution, which an optical connection might well be part of...) However, a more appropriate analogy would be a really loud printer in the next room causing you to make errors when re-typing the data.

[Quoting sahmen's post of Jun 5, above.]
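The buffer-and-new-clock behavior Keith describes for an Ethernet-to-USB bridge can be sketched as a toy timing model. Everything here - the 1 ms nominal packet spacing, the jitter range, the priming delay - is invented for illustration:

```python
# Sketch of the buffer-decoupling argument: packets arrive with messy
# network timing, sit in a buffer, and are played out on an entirely
# independent local clock. All numbers are illustrative only.

import random

random.seed(0)

# Arrival times: nominally every 1.0 ms, with +/-0.3 ms of network jitter.
arrivals = [i * 1.0 + random.uniform(-0.3, 0.3) for i in range(10)]

# Playback: once the buffer is primed, samples leave on a fixed local
# clock tick, regardless of when they arrived.
buffer_delay = 5.0  # priming delay, ms
playout = [buffer_delay + i * 1.0 for i in range(10)]

jitter_in = max(arrivals[i] - i * 1.0 for i in range(10))
jitter_out = max(playout[i] - (buffer_delay + i * 1.0) for i in range(10))
print(f"worst-case input timing error:  {jitter_in:+.3f} ms")
print(f"worst-case output timing error: {jitter_out:+.3f} ms")  # exactly 0 in this model
```

In this idealized model the output timing carries no trace of the arrival jitter at all; Keith's caveat is that real hardware can still let noise couple into the new clock, which is an engineering problem rather than a data problem.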
Post by sahmen on Jun 5, 2019 15:49:26 GMT -5
To be quite candid, some of what he says makes good sense, but some of it only makes partial sense, and only in certain contexts. (Although, from the standpoint of "marketing to common audiophile mythology", it all plays very well.) First off, his explanation about leakage current and leakage noise seems to be perfectly reasonable... Those things really exist, they can cause problems with downstream circuitry, and an optical connection will entirely eliminate them... This is a long winded way of saying that "an optical connection provides perfect galvanic isolation". (Although I'm not quite convinced that this is as significant in most situations as he believes it is...) However, the idea that DIGITAL DATA can "retain a fingerprint of a bad clock" rates right up there on the credibility scale with homeopathy. (To be perfectly fair, it's an interesting way to look at, and perhaps to explain certain real things... but as stated the basic premise is simply not true. There is no "attenuated version of the original clock carried on the data" unless your particular circuitry is mixing things into the signal that don't belong there.) DIGITAL DATA IS A STRING OF ONES AND ZEROS. THAT DATA IS EITHER CORRECT OR NOT. IT IS ABSOLUTELY POSSIBLE THAT A BAD CLOCK COULD CAUSE YOU TO READ THOS EONES AND ZEROS INCORRECTLY. IT CANNOT "STILL BE CORRECT BUT SOMEHOW BE SUBTLY ALTERED". THE CLOCK IS A SEPARATE ENTITY FROM THE DATA.(The only spot where the relationship between them is critical is at the DAC interface itself.) The "fingerprint effect" he is talking about is what happens when you start out with a signal that is associated with a flawed clock... If you then INCOMPLETELY re-clock that signal, some of the flaws of the original clock may "leak through" into the new clock... (The so-called "fingerprint" is an illusion produced by this failure to completely eliminate the original flawed clock.) 
However, if you COMPLETELY RE-CLOCK the signal, then it will again be a new and perfect signal... To be fair, actually doing so can be difficult, and may in fact quite often NOT be done correctly. However, if a digital signal is completely and properly re-clocked, then there is no possibility that any trace of any flaws present in the original clock will remain. (therefore, doing one "perfect re-clocking step" is exactly as good as doing it a million times.) Note that, in the past, many "re-clocking methods" were not perfect or complete. (And, so, in that situation, doing it over and over again could actually produce a small incremental improvement each time.) For example: Let's say we're talking about a standard digital audio signal... in PCM format... on a coaxial cable. It is quite true that signal is likely to acquire some interesting clock jitter of assorted types. And, if you re-clock it with an old-style phase-locked loop, you will reduce but not entirely eliminate all of those flaws. Each PLL creates a "new clock", which is actually "locked" to the original clock, and so is affected by it to a degree. Therefore, each PLL stage acts as a sort of filter to "reduce the impurities in the clock as it passes through". Even if you use multiple staged PLLs, because each one uses a clock derived from the previous stage, some of the original error "leaks through". (Each PLL reduces the jitter by some number of dB, which is different for different frequencies, and for different types of jitter.) However, unfortunately, each PLL stage also introduces its own small quantity of new errors. Now, let's look at a modern ASRC (asynchronous sample rate converter)... An ASRC does some very cool stuff, and the internal workings are remarkably complex, but underneath it all, it basically works as a sort of super-duper-PLL... The bottom line is that, while an ARCS reduces jitter a lot, and much better than a PLL, it doesn't entirely remove it. 
(The normal reduction, quoted by Analog Devices for the AD1896 we use, is something over 40 dB of reduction across the audio band. Paraphrased loosely, they claim that the difference between the output of the ASRC and an equivalent theoretical perfect signal is below about -130 dB.)

HOWEVER, the situation is rather different when you're talking about an Ethernet-to-USB bridge. In that sort of device, the actual data is received, stored in a buffer, then retrieved from the buffer, and recreated - WITH AN ENTIRELY NEW CLOCK. (In theory, you could attach a hard drive to that interface, send the audio to it via Ethernet today, and play it back tomorrow, or next week.) The new clock should be entirely independent of the original Ethernet clock - so there should be no opportunity for any "fingerprint" to "leak through". (He is entirely correct. The clock used by Ethernet switches is far from "audio quality" - which is why it isn't used for the audio signal.)

It is possible that noise, or even the Ethernet clock itself, could somehow affect the circuitry inside the converter. And, if so, it could cause flaws in the newly generated output data clock. (And, if so, it certainly would be an engineering problem, and require an engineering solution, which an optical connection might well be part of...) However, a more appropriate analogy would be a really loud printer in the next room causing you to make errors when re-typing the data.

KeithL : Thanks for your patient and carefully worded response. Personally, I do not have enough expertise to adjudicate in this debate, but I do see a lot of convergence between what you have said and the positions John S. is developing. Still, it seems to me that you have a little bit more confidence in the corrective work of "good clocking" than John does, although there might yet be more points of disagreement between you, some of which escape me at the moment.
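As a quick sanity check on the attenuation figures quoted above (back-of-the-envelope arithmetic only, not a claim about any specific device): 40 dB, expressed as an amplitude ratio, is a factor of 100.

```python
# 40 dB of attenuation expressed as an amplitude ratio (20*log10 convention):
ratio = 10 ** (-40 / 20)
print(ratio)         # 0.01, i.e. a 100x reduction

# So, purely illustratively, 1 ns of clock jitter attenuated by 40 dB
# comes out at roughly 10 ps:
print(1.0 * ratio)   # 0.01 ns = 10 ps
```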
Now regarding this particular issue of clocking, this other post of John S. might be interesting, since it seems to address some of the objections you have raised, if only partially... At any rate, I still think a direct conversation between you and John S. about this subject would be productive and fascinating. I for one would learn quite a bit from it, even though it still strikes me as something that is unlikely, and I do not know exactly why. With that said, here is the post I was just referring to: audiophilestyle.com/forums/topic/55217-sonore-opticalrendu/?do=findComment&comment=963768

All the optical does is block leakage; it doesn't get rid of clocking issues at all (it can actually make them worse). The fact that it is optical does not automatically apply some universal quantum time scheme that mystically aligns edges perfectly. If you send in a pulse, then another that is 50ns apart, then another at 51ns, then another at 49, that difference gets preserved at the receiver; the optical does not magically force all of them to be exactly 50ns. The raw data coming out of the optical receiver goes into a chip that rebuilds the Ethernet signal using its own local clock. That is done with flip flops inside the chip, and these flip flops behave just like any other flip flops - again, no magic here. I was trying to avoid re-iterating what I have said before on this, but it looks like I'm going to have to do it anyway.

So how come this reclocking with a new clock is not perfect? As edges from the input stream go into a circuit, each and every one of those edges creates a current pulse on the power and ground network inside the chip and on the board. The timing of that pulse is exactly related to the timing of the input data. The timing of the input data is directly related to the jitter on the clock producing the stream. This noise on the PG network changes the threshold voltage of anything receiving data inside the chip, especially the local clock going into the chip.
This means the phase noise spectrum of the data coming in gets overlaid on top of the phase noise spectrum of the local clock. It's attenuated from what it is in the source box, but it is definitely still there. THAT is how phase noise gets from one device to the next, EVEN over optical connections. If you look at this in a system containing all uniformly bad clocks, you don't particularly see this, since they are all bad to begin with. BUT when you go from a bad to a very good clock you can definitely see this contamination of the really good clock by the overlaying of the bad clock. This is really hard to directly measure because most of the effect is happening inside the flip flop chip itself. You CAN see the effect on the data coming out of the flip flop. This process happens all the way down the chain - Ethernet to USB, USB into DAC box, and inside the DAC chips themselves - finally winding up on the analog out.

Wherever reclocking is happening, how strong this overlay is depends primarily on the impedance of the power and ground network, both on boards and inside chips. A lower impedance PG network produces lower clock overlay; a higher PG impedance gives stronger overlay. This is something that is difficult to find out about a particular chip; the impedance of the PG network is NEVER listed in the data sheets! I have somewhat of an advantage here: having spent 33 years in the semiconductor industry, and a lot of that time designing PG networks in chips, I have some insight into which chips look like good candidates for low impedance PG networks.

On a side note, because Ethernet and USB are packet systems, the receiving circuit CAN use a completely separate clock; the frequency just has to be close enough to handle the small number of bits in the packet. If it is a little too slow or too fast, the difference is made up in the dead time between packets. To reiterate: none of this has ANYTHING to do with accurately reading bits; this is assumed.
It IS all about high jitter on network clocks working its way down through reclockings to the DAC chips, and hence to the audio outs. All the work done on DACs in recent years has cleaned up the signals so dramatically that these effects are getting to be audible in many systems. John S.
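John S.'s "overlay" mechanism can be caricatured in a few lines. This is strictly a toy model (the coupling coefficient, noise levels, and variable names are invented for illustration; the real path is analog, chip-specific, and much messier): an attenuated copy of the source clock's phase noise is added to an otherwise clean local clock, and the local clock's measured phase noise floor rises accordingly.

```python
import math
import random

rng = random.Random(7)
N = 4096

# Source clock: large, low-frequency phase wander (a "bad" network clock).
src_phase = [0.01 * math.sin(2 * math.pi * 3 * i / N) for i in range(N)]

# Local clock: intrinsically very clean, with only tiny random phase noise.
local_phase = [rng.gauss(0.0, 1e-5) for _ in range(N)]

# Coupling through the power/ground network: a small, attenuated copy of
# the source phase noise rides on top of the local clock's own noise.
coupling_db = -40                    # illustrative attenuation, not measured
k = 10 ** (coupling_db / 20)
out_phase = [lp + k * sp for lp, sp in zip(local_phase, src_phase)]

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))

# The re-clocked output is noisier than the local clock alone, though far
# cleaner than the source:
print(rms(local_phase) < rms(out_phase) < rms(src_phase))   # True
```

Whether a residue like this is audible after several more stages of attenuation is exactly the point the two sides of this thread disagree about; the model only shows that the mechanism is arithmetically coherent.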
|
|
KeithL
Administrator
Posts: 9,941
|
Post by KeithL on Jun 5, 2019 18:59:15 GMT -5
From those later quotes it seems like we're in relatively close agreement. Sometimes the technical subtleties can become lost in the wording, and in attempts to phrase things in understandable terms. There is also a significant gap between theory and practice. A theoretically perfect audio signal would contain data and a clock - with no added noise and no clock jitter. A theoretically perfect DAC would be totally immune to the effects of any noise or jitter that might be present on the input signal. And a perfect data re-clocker would deliver perfect data at its output - with absolutely no trace of any flaws present at its input. Unfortunately, in the real world, we have neither perfect data nor perfect electronic devices.

I would, however, take one thing he said a bit further. Yes, the incoming electrical signal itself has the ability to cause perturbations in the circuitry, which may in turn affect the jitter on the output signal. HOWEVER, because the timing of the Ethernet packets is essentially unrelated to the timing of the digital audio data itself, the electrical noise generated by the incoming Ethernet packets being sunk by the receiver has the potential to cause interference even if it is perfectly jitter free. Since the incoming packets are not time-related to the output signal, it is their very presence that has the potential to cause the problem. I'm not necessarily convinced that Ethernet packets with a lot of jitter would cause significantly more of a problem than packets with lower or no jitter. However, this detail is moot; either way, it suggests that great care must be taken to prevent the incoming Ethernet signal from affecting the output signal.

There are, however, two points I would stress whenever considering any sort of device intended to improve signal quality. (That includes a device like the Rendu, as well as many of the "data reclockers" and "USB cleaners" out there...)
The first is that, in almost any signal chain, there are one or more weak points which limit the overall performance. For example, if your input source has more jitter than the input circuitry on your DAC, then reducing that jitter has the potential to improve sound quality. But, if your source already has far less jitter than the input circuitry on your DAC, then reducing it further is unlikely to make any difference. (And doing so is just throwing away money that could be better spent elsewhere...)

The other point, somewhat related to the first, is that DACs vary widely, both in terms of their own internal performance limitations, and in their tolerance to flaws in the source signal. At this point a few folks might wish to chime in and complain that Emotiva doesn't specify these sorts of measurements on our equipment. The reason for this, which is the same reason why these factors are so complex, is itself relatively simple: these sorts of characteristics are incredibly difficult to measure... And, even if we did measure them, there is no standard for comparing them between different products... And, even beyond that, what the measurements actually mean, in purely practical terms, is quite complex... For example, if you read certain other audiophile forums, you'll find wide disagreement about how much jitter is audible...

However, what I really want you to take away from that second point is this: whether a device like this produces an audible improvement or not is going to depend on a lot of factors... So DO NOT assume that the results someone else experiences, or fails to experience, will or will not apply to you and your system. (None of these devices will "universally make whatever you connect them to sound better"... but some certainly will make some systems sound better.)

To be quite candid, some of what he says makes good sense, but some of it only makes partial sense, and only in certain contexts.
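To make the buffering argument concrete, the Ethernet-to-USB bridge behaviour described earlier in this thread (receive packets, store them in a buffer, replay them with an entirely new clock) can be modelled as a toy producer/consumer pair. Everything here is hypothetical scaffolding for illustration; the packet sizes, timings, and names do not correspond to any real driver.

```python
from collections import deque
import random

rng = random.Random(42)

# Producer: samples arrive over "Ethernet" in bursts at irregular times.
buffer = deque()
arrival_times = []
t = 0.0
sample_id = 0
while sample_id < 1000:
    t += rng.uniform(0.5, 5.0)      # irregular gap before the next packet
    arrival_times.append(t)
    for _ in range(100):            # 100 samples per packet
        buffer.append(sample_id)
        sample_id += 1

# Consumer: samples leave the buffer on a uniform local clock. Note that
# the playout timestamps are computed from the local clock alone; the
# arrival times are never consulted.
period = 0.02
playout = [(i * period, buffer.popleft()) for i in range(1000)]

print(playout[0])                                                   # (0.0, 0)
print(all(ts == i * period for i, (ts, _) in enumerate(playout)))   # True
```

A real bridge additionally has to keep the buffer from underflowing or overflowing (flow control, plus the inter-packet slack John S. mentions), but the timing decoupling - output timing derived entirely from the new clock - is the same idea.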
|
|
|
Post by vortecjr on Jun 8, 2019 6:05:26 GMT -5
I love how we go from "unlikely to solve a problem" to "I can see what John is talking about" once he explains it. At least that shows you are open-minded. :) I have the pleasure of talking to John on a regular basis and he is a super guy. John has designed for large companies, he has done a ton of research, and he has conducted a ton of experiments. John's resume is very impressive, and you might want to listen to him when he takes the time to explain things. Understand that John will not waste his time on a fiber media converter, or even a Rendu, unless he feels there is a problem to solve.
|
|
klinemj
Emo VIPs
Honorary Emofest Scribe
Posts: 14,746
|
Post by klinemj on Jun 8, 2019 12:47:52 GMT -5
And of course we know from history on topics like this that Keith would rather debate the technical details than give the device in question a listen...
Mark
|
|
novisnick
EmoPhile
CEO Secret Monoblock Society
Posts: 27,223
|
Post by novisnick on Jun 8, 2019 14:38:10 GMT -5
And of course we know from history on topics like this that Keith would rather debate the technical details than give the device in question a listen... Mark
|
|
KeithL
Administrator
Posts: 9,941
|
Post by KeithL on Jun 8, 2019 22:04:23 GMT -5
To be honest, I have a lot of things going on... So I really have little time to listen to products that are designed to solve problems I don't have. If I used an Ethernet-to-USB bridge, and was having problems with it that seemed related to jitter or noise, I might seriously consider auditioning a Rendu. Likewise, if I was shopping for a purpose-built Ethernet-to-USB bridge in its price range, I would seriously consider it.

However, as it so happens, I don't agree with the designer's philosophy of using "multiple steps of isolation" as the best possible solution. It makes a lot of sense to provide the best possible isolation between the USB input and the DAC chip itself... because THAT essentially establishes a sort of line of demarcation between "network traffic" and "audio". Therefore, if I were designing this sort of system, I would expend all my effort on getting that right, and ignore all the network stuff before that. (Strictly speaking, the PCM data that enters the DAC chip itself is still contained in data frames, but there's nothing you can do about that.) And, yes, if I were having that sort of problem in my system, that is the sort of solution I would look for to solve it. (A high quality USB-to-S/PDIF converter with good isolation - which isolates the digital traffic going into the DAC from everything before it.)

This in no way suggests that the Rendu doesn't work just fine, or that the solution the designer chose is not going to produce an excellent result. It's just not how I would do it. (In engineering, there are often many different, and sometimes equally effective, solutions to a given problem.)

However, just to respond to klinemj... There is a lot of equipment out there that I haven't had the time to listen to. (I could provide you with a very long list of digital audio products which I haven't had the time or opportunity to audition.)
From everything I've heard so far, it seems likely that the Rendu is quite capable of delivering a proper digital audio signal. However, taking that as a given, and given that I've already heard a proper digital audio signal delivered by other gear, I see little room for a significant difference. (Unlike a DAC, the Rendu doesn't "touch" the analog audio at all.) I wouldn't rule out the possibility that there might be minute differences... but I can't imagine they would rise to what I would consider significant. Now, obviously, YMMV there.

And, yes, writing this post required a LOT less time and effort than hooking up a new piece of gear... then trying to decide whether tiny differences I think I hear really are or are not there... or whether I'm imagining them because of the mood I happen to be in... (And I'm most surely too lazy to set up a proper double-blind listening test.)

And of course we know from history on topics like this that Keith would rather debate the technical details than give the device in question a listen... Mark
|
|
klinemj
Emo VIPs
Honorary Emofest Scribe
Posts: 14,746
|
Post by klinemj on Jun 9, 2019 6:15:17 GMT -5
To be honest I have a lot of things going on... So I really have little time to listen to products that are designed to solve problems I don't have. ... And, yes, writing this post required a LOT less time and effort than hooking up a new piece of gear... Then trying to decide whether tiny differences...

On the topic of the Rendu/SOtM products, you have spent considerable time telling us all the reasons they won't make our systems sound better. My guess is that the total of your posts on the topic would exceed 5 pages on their own. It took me about 15 minutes to hook up a microRendu and say "wow, that sounds a LOT better". Consider this: I had an in-going bias that it would not sound better than my LH Labs Geek Pulse X Infinity fed via the same USB. I didn't think I had anything "wrong" with my system that needed fixing. The Geek had already soundly beaten the best Emotiva makes/made: the DC-1, or the USB input to the XMC-1's DAC. I was immediately surprised to hear a big difference. By your own words - if I take them quite literally - that would have to mean something is wrong with the DC-1 or the XMC-1 (in the way it handles the digital signal from input to the DAC chips). Others have similar stories here on the lounge.

So, you can take all the time you want posting thesis after thesis on why they should not sound better. Or, you could give them a listen and say, "uh, hey... Dan and Lonnie, check this out... can we figure out how to make something sound this good?" Your choice.

Mark
|
|
|
Post by vortecjr on Jun 10, 2019 21:04:19 GMT -5
On the topic of the rendu/SoTM products, you have spent considerable time telling us all the reasons they won't make our system sound better. My guess is that the total of your posts on the topic would exceed 5 pages on their own. It took me about 15 minutes to hook up a microRendu and say "wow that sounds a LOT better". Consider this: I had an in-going bias that it would not sound better than my LH Labs Geek Pulse X Infinity fed via the same USB. I didn't think I had anything "Wrong" with my system that needed fixed. The Geek has already soundly beat the best Emotiva makes/made: DC-1 or USB input to XMC-1's DAC. I was immediately surprised to hear a big difference. By your own words - if I take them quite literally, that would have to mean something is wrong with the DC-1 or the XMC-1 (in the way it handles the digital signal from input to the DAC chips). Others have similar stories here on the lounge. So, you can take all the time you want posting thesis after thesis on why they should not sound better. Or, you could give them a listen and say, "uh, hey...Dan and Lonnie, check this out...can we figure out how to make something sound this good?" Your choice. Mark

Keith previously stated, "As far as I know, neither the XMC-1 nor the RMC-1 has much in the way of galvanic isolation - and not on the USB input." That is a pretty significant problem to solve, IMHO. So if you can feed an XMC-1 or RMC-1 with an opticalRendu - which is designed with low noise USB power, low noise on-board power regulation, low noise regulators, and 100% galvanic isolation - I think you will be thrilled with the results. :)
|
|