Jitter is one of those subjects that is widely misunderstood.
Jitter is simply a term that describes variations in clock timing (it applies to lots of stuff besides digital audio).
First off, digital audio
FILES cannot have jitter, just as lyrics printed on a sheet of paper can't be "off key".
The possibility of jitter only appears when you play those files.... which involves reading the samples at the proper times, and so requires a clock.
(The reason we want to avoid jitter is that, if we convert the correct incoming data, but not at exactly the proper times, we will end up with distortion in our audio output.)
USB is a packet-based system.... which means that the data appears in little chunks.... packed in a series of packets.... at which point there is no clock for the audio data itself.
With the original way of doing USB audio, the sending device was assumed to be feeding the audio at about the right "speed".
So, to avoid running out of data if you got ahead of the feed, or having packets pile up if the feed got ahead of you, the clock was "reconstructed" at the receiver based on the rate at which the packets are received. Think of an old movie projector - the kind that employs film.
Each frame is moved into position, and stopped, the shutter opens to display it, the shutter closes, and the next frame moves into position.
However, the film itself moves at a constant rate.... which is both mechanically practical, and is required for the audio track.
There's a little loop of loose film in between the stages of the projector that allows the film to move smoothly going in and coming out, while still moving in small steps at certain points along the way.
The PLL is the part that makes sure that the film moving smoothly into and out of the projector matches the speed at which the little step-feeder moves the frames along in the middle.
With the old system, the sending device controlled the overall "speed of the feed"..... so the clock you used to play the audio data had to be "locked" to that feed.
A PLL (a phase-locked-loop) is the electronic equivalent of a flywheel.... sort of.
You can use a PLL to generate a clock, and "lock" that clock to some incoming signal.... even one of a very different frequency.
This was important because you had to make a new clock that ticked
FASTER than the rate of the packets....
For example, you needed a nice steady clock that "ticked" 20 times for each packet that came in....
Imagine the PLL, like a little motor, spinning rapidly, then comparing 1/20 of its speed to the incoming speed, and making moment to moment corrections to make sure they stay matched.
Since the PLL makes sure that 1/20 of its speed matches the speed of the incoming signal, its output speed will always be exactly 20x the speed of the incoming signal... which is what we're looking for.
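That 20x multiplication can be sketched in a few lines of code. This is only a toy model, not how real silicon works: the function name, gain value, and packet rate are all invented, a real PLL compares phase rather than raw frequency, and it uses a proper loop filter instead of this simple correction.

```python
# Toy model of a PLL multiplying an incoming packet rate by 20.
# The loop compares 1/20 of its own output rate to the input rate
# and nudges its oscillator until the two match.

def pll_multiply(f_in_hz, n=20, gain=0.1, steps=200):
    f_out = 0.0                      # oscillator starts unlocked
    for _ in range(steps):
        error = f_in_hz - f_out / n  # divided-down output vs. input
        f_out += gain * n * error    # nudge the oscillator toward lock
    return f_out

packet_rate = 1000.0                     # e.g. one USB packet every millisecond
print(round(pll_multiply(packet_rate)))  # settles at 20000: exactly 20x
```

Once the divided-down output matches the input, the undivided output is, by construction, exactly 20x the input rate.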
The catch with PLLs is that you trade off accuracy for locking speed.
You can have a PLL that locks really well, but takes several
MINUTES to lock; or you can have one that locks very quickly, but doesn't lock very well. It's always a tradeoff... although there are fancy ways to partially work around it.
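That tradeoff shows up even in a toy simulation (all numbers here are invented for illustration): a first-order loop with a large correction gain locks almost instantly but passes the input's wobble straight through to its output, while a small-gain loop takes far longer to lock but smooths the wobble out.

```python
import random

# Toy first-order loop tracking a wobbly 1000 Hz input.
# Returns (steps until first locked within 1 Hz, output wobble after lock).

def simulate(gain, steps=5000, jitter=5.0, seed=42):
    random.seed(seed)
    f_out = 0.0
    lock_step = None
    tail = []
    for k in range(steps):
        f_in = 1000.0 + random.uniform(-jitter, jitter)  # jittery input
        f_out += gain * (f_in - f_out)                   # simple loop filter
        if lock_step is None and abs(f_out - 1000.0) < 1.0:
            lock_step = k                                # first time near lock
        if k >= steps - 1000:
            tail.append(f_out)                           # post-lock output
    return lock_step, max(tail) - min(tail)

fast_lock, fast_wobble = simulate(0.5)    # big gain: locks fast, wobbles a lot
slow_lock, slow_wobble = simulate(0.005)  # small gain: locks slowly, barely wobbles
```

Run it and the fast loop locks within a handful of steps but its output wanders by several hertz, while the slow loop takes over a thousand steps to lock but then barely moves.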
A much better system would be to generate a brand new perfect clock at the DAC itself.
Instead of accepting the input signal, and locking to it, you would just create a really good clock all on your own, and then, to avoid any gaps,
TELL the sending device when you need more data.
(Basically you're treating your USB sending device as if it was simply delivering a file.... and playing that file with
YOUR clock.)
This is how asynchronous USB works.
(This is a slight oversimplification, and there are limitations, but it pretty much really works this way.)
Note that PLLs, which are used to enable you to lock onto a clock created by the sending device, are
NOT actually part of this process.
(That's also an oversimplification... because you might use a PLL to lock onto the packets so you can get the data out of them... before throwing away the clock generated by the PLL, and replacing it with your own clock for playback purposes.)
Note that this all
DEPENDS on being able to
TELL the sending device when to send more data and when to hold off for a while.
The receiver has a small buffer, which holds a small amount of data, to cover minor variations, but only minor ones.
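A minimal sketch of that feedback-and-buffer arrangement, with invented names and watermark numbers (real asynchronous USB audio signals its rate through a feedback endpoint defined in the USB audio spec, not literal text messages):

```python
from collections import deque

# Toy asynchronous receiver: it plays samples on its OWN clock and
# tells the sender to speed up or slow down based on buffer fill.

class AsyncReceiver:
    def __init__(self, low=20, high=60):
        self.buffer = deque()
        self.low, self.high = low, high  # watermarks on the small buffer

    def request(self):
        # Pacing hint for the sender, based on how full the buffer is.
        if len(self.buffer) < self.low:
            return "send more"
        if len(self.buffer) > self.high:
            return "hold off"
        return "steady"

    def receive(self, packet):
        self.buffer.extend(packet)       # a packet arrives whenever it arrives

    def play_one(self):
        # Called once per tick of the receiver's own fixed clock.
        return self.buffer.popleft() if self.buffer else None
```

The key point is the direction of control: the sender paces itself to the receiver's clock, not the other way around, so no PLL is needed to recover a playback clock.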
While no system is perfect, with this sort of system, the quality of the clock is almost entirely determined by the quality of the new clock created by the receiving device (the clock in the asynch USB interface).
Of course, being a type of USB interface, this
ONLY works for USB.
An ASRC (asynchronous sample rate converter) is an interesting device.
They were designed for communications applications... and the way they're used for audio is more of a "side effect" of their operation than the intended goal.
I'm going to provide a
VERY simplified description here.....
You provide the ASRC with a high quality clock at the sample rate you wish it to
OUTPUT.
You feed an input signal into the ASRC at pretty much any sample rate you like (over a certain range).
The ASRC "locks onto" both the new clock you provided and the incoming signal.
It does this internally using the equivalent of a pair of super-fast super-powerful PLLs - implemented in a DSP - with a massive amount of computing power.
What the ASRC basically does is to figure out what the incoming signal "should be if it had a perfect clock".... (which is sort of another way of saying that it
IGNORES any jitter that might be present).
It then calculates how to convert the incoming signal to one that is equivalent - but at the sample rate of the output clock.
Conceptually, the ASRC takes the incoming signal, "converts it into analog", then "converts it back into digital at a new sample rate"...... in practice it's all done with math... lots of math.
And, yes, part of that math is the mathematical equivalent of a "three-stage dynamically adaptive PLL type locking circuit" (I made that up but it's a fair description).
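To make the "convert to analog and back, in math" idea concrete, here is a toy resampler. A real ASRC uses long polyphase/sinc interpolation filters and the adaptive rate tracking described above; plain linear interpolation on an assumed-perfect input grid stands in for all of that here, and the names and rates are just examples.

```python
import math

# Toy ASRC: treat the input samples as if they sat on a perfect grid
# at rate_in, then compute the waveform's value at each tick of the
# new output clock (rate_out) by interpolating between neighbors.

def asrc(samples, rate_in, rate_out):
    out = []
    n_out = int(len(samples) * rate_out / rate_in)
    for k in range(n_out):
        t = k * rate_in / rate_out           # output tick on the input grid
        i = int(t)
        if i + 1 >= len(samples):
            break                            # ran off the end of the input
        frac = t - i
        out.append((1 - frac) * samples[i] + frac * samples[i + 1])
    return out

# A 1 kHz sine recorded at 44.1 kHz, re-expressed at 48 kHz:
sine = [math.sin(2 * math.pi * 1000 * n / 44100) for n in range(441)]
resampled = asrc(sine, 44100, 48000)
```

Because the output samples are computed against an idealized input grid, any jitter on the original sample timing simply never makes it into the output of this toy version, which mirrors the "ignores any jitter" behavior described above.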
The "good part" of all this is that, as part of the process, the ASRC "digitally filters out all the effects of jitter".
And, yes, it does this all
REALLY WELL...... by spec, on the one we use in the DC-1, the "errors between what I just described and reality" are somewhere below -130 dB.
While, as someone noted, nothing is absolute, a typical ASRC reduces the audible effects of jitter by at least 40 dB... ranging much higher at certain frequencies.
(There are no more precise specs for that sort of thing because the details of what it's doing, and how to interpret them, are really complex... Analog Devices "simplified conceptual description" is several pages long.)
The main huge benefit of an ASRC is that it works for
ANY digital audio data stream - and not just USB.
And, to answer someone else's question, using an ASRC along with an asynch USB input is sort of redundant - because either, by itself, will typically reduce jitter far below audible levels.
However, technically speaking, using both would give you a tiny bit more reduction than either individually.
Let me summarize all this without the heavy technical details.....
1) A well-designed asynch USB input should be almost totally immune to jitter from the source device.
2) Since virtually all modern DACs have asynch USB inputs, which have so many advantages over the older types (synchronous and adaptive mode), there is no reason
NOT to have an asynch USB input.
(Other than a few people who have odd audiophile ideas, and some really low-end DACs, pretty much all modern DACs have asynch USB inputs.
And, to be totally honest, any high-end DAC that's old enough to have one of the older non-asynch USB inputs with a PLL is probably in serious need of an upgrade for several reasons.)
3) An ASRC will work on ALL inputs... not just USB.
An ASRC may or may not make a lot of difference - depending on your source.
An ASRC should be largely redundant with the asynch USB input (but will also work for all the other inputs).
4) Either one, when properly implemented, should reduce the audible effects of jitter enough that, if you hear differences, they're probably due to something else.
I should mention specifically that the audible effects of jitter are quite subtle.
Even the effects of rather large amounts of jitter aren't at all obvious, and the effects of small amounts of jitter are
REALLY subtle.
The audible effects of jitter are most often described as "a slight blurring or softening of the sound stage".
If you hear something dramatically bad... odds are that it
ISN'T jitter you're hearing.
High levels of jitter result in noise sidebands that will show up as a high noise floor and high THD.
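You can put rough numbers on that with a quick simulation: sample a sine once with a perfect clock and once with a clock whose edges are randomly displaced, then compare the error to the signal. The function and figures here are mine, for illustration only.

```python
import math, random

# Error-to-signal ratio (in dB) caused by random clock jitter when
# sampling a 10 kHz sine at 48 kHz. jitter_rms_s is the RMS timing
# error of each sample instant, in seconds.

def jitter_noise_db(jitter_rms_s, f=10_000.0, fs=48_000.0, n=48_000, seed=0):
    random.seed(seed)
    err_power = sig_power = 0.0
    for k in range(n):
        t = k / fs
        ideal = math.sin(2 * math.pi * f * t)
        actual = math.sin(2 * math.pi * f * (t + random.gauss(0.0, jitter_rms_s)))
        err_power += (actual - ideal) ** 2
        sig_power += ideal ** 2
    return 10 * math.log10(err_power / sig_power)

print(jitter_noise_db(2e-9))     # ~2 ns of jitter: roughly -78 dB
print(jitter_noise_db(100e-12))  # 100 ps: roughly -104 dB, well below audibility
```

Note how the resulting noise scales with both the amount of jitter and the signal frequency, which is why jitter sidebands get worse toward the top of the audio band.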
Also note that "times change".
Back when PLLs were common, a device with 2 ns of jitter was considered to be "good" (Behringer's hardware sample rate converter bragged about having only 2 ns of jitter).
2 ns is 2000 ps.... and, by today's standards, even most mediocre equipment is at least 10x better than that.
(So jitter really isn't a major issue
TODAY.)
The USB inputs on almost all of our gear, including the RMC-1, are asynch USB.
The XMC-1 and RMC-1 also have an ASRC between their audio DSP stages - which reduces the jitter significantly on all digital audio (which is probably one reason why they both sound so good).
However, in something as complex as a pre/pro, there's a
LOT of other stuff going on, so I wouldn't even venture a guess as to how much the ASRC is contributing to the overall sound.
At a very high level of abstraction.... all that really counts is that you end up with an accurate analog signal, low in all types of distortion, including those that may be caused by excessive jitter.
If you have a lot of distortion, then the cause of it becomes important; however, if you
DON'T have high levels of distortion, then the details become unimportant.
So, for example, if a DAC has a very low noise floor, and very low distortion, then
IT CAN'T HAVE SIGNIFICANT JITTER PROBLEMS (if it did, the problems would show up in the measurements).
We may not know whether it has remarkably low levels of jitter, or whether it has lots of jitter, but features a design that isn't especially sensitive to jitter.
All that matters is that we are
NOT hearing any of the problems that an excessive amount of jitter
MIGHT cause... so it's all good.
It is also important to note that, while jitter itself can accumulate along the signal path, in the end there is only one clock at any given point.
Furthermore, with a DAC, all that really counts is the amount of jitter present at the point where the digital audio is actually converted into analog.
So, to address your comment.... yes, it
REALLY IS "as simple as putting an accurate clock before the DAC".
The
ONLY way jitter at any step in the process before that could matter
at all would be if it was so extreme that it caused the data to become unreadable - and so allowed data errors to occur.
(And this is simply not something that happens very often.)
Actually, short of digital data simply not arriving on time, asynchronous re-clocking can completely eliminate jitter. Look at USB as an example: the data arrives in packets, representing multiple samples in whatever format you're sending. That digital data needs to get put into a memory buffer and then transferred to the DAC based on a completely new clock.
Casey
Yes; came across a 2-stage implementation using PLL and PWM reclocking to reject accumulated asynchronous jitter, so it's not as simple as just putting an accurate clock before the DAC apparently.
www.freepatentsonline.com/8742841.html
Also curious as to Emo's stance on asynchronous reclocking in conjunction with asynchronous sample rate conversion for the RMC-1 32-bit DACs, if Keith hasn't opined on it yet?
hifiduino.blogspot.com.au/2009/06/asynchronous-re-clocker-vs-asynchronous.html