Jitter is like the impurities in water........
If there's a lot of jitter it can be clearly audible.
If there's very little, then it will either be barely audible, or not at all audible.
Also, like impurities, there can be different types of jitter (and, if they're all small enough, you still shouldn't hear any of them).
And like impurities in water, some people seem to notice jitter more than others, and we can assume that some people notice particular types of jitter more than others.
When you filter water, you can NEVER have "100.0% pure water" - but you can get 99.9% pure water pretty easily, and you can pay more for filters that buy you "more nines".
Likewise, you can always design for less jitter, but there's no such thing as none.
In the old days, pretty high levels of jitter were common..... for example, one piece of professional equipment was proud that it had only "2 ns jitter" - which is 2,000 ps.
(Jitter is measured in PICO-seconds..... 1 nanosecond is 1000 picoseconds.)
For comparison, a modern S/PDIF input section will typically introduce between 50 and 100 ps of jitter (that will be the minimum you'll see on any signal passing through it).
The amounts of jitter in most modern equipment are very low..... so low that they are very difficult to measure.
And, if you were to buy a test tube of 99.999999% pure water, simply opening the cap would ruin it - because there are worse impurities in the air.
Likewise, if you buy a super-expensive super-low-jitter clock chip, then run the output through a half inch of wire, the jitter will be a lot worse at the other end of the wire.
This is why the overall jitter in something like a DAC depends as much on things like component layout as it does on the clock chip you use.
In general, very low amounts of jitter are also almost impossible to measure directly, and so they are almost always measured indirectly.
We look at the spectrum of a test tone..... and, because jitter will cause extra frequency components to appear, we can tell from the extra junk we see how much jitter must have caused them.
(If we see extra junk, and it was caused by jitter, it will show up as a particular sort of distortion; so, if we don't see that distortion, then there must not have been much jitter.)
Of course, if the distortion is extremely low, then it sort of doesn't matter what DIDN'T cause it, right.... the point is that the signal is very clean.
So the conclusion works sort of in reverse.....
If we see a lot of distortion, and it seems to be the sort usually caused by jitter, then we have a problem, and conclude that jitter is at least partly to blame.
And, if the distortion is very low, we conclude both that we don't have any problems, and that the jitter must be very low.
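
To make that inference concrete, here's a minimal sketch (Python with NumPy - my choice of tool, not anything the measurement gear actually runs) of the indirect approach: "sample" a pure test tone once with a perfect clock and once with randomly jittered sample times, and watch the spectral floor rise. The sample rate, tone frequency, and jitter amounts are purely illustrative numbers, not measurements of any real product.

    # Minimal sketch: measuring jitter indirectly, via the spectrum of a test tone.
    import numpy as np

    fs = 96_000          # sample rate (Hz) - illustrative
    f0 = 10_000          # test-tone frequency (Hz) - illustrative
    n  = 1 << 16         # number of samples
    rng = np.random.default_rng(0)

    t_ideal = np.arange(n) / fs                       # a perfect clock
    window  = np.hanning(n)
    for jitter_rms in (0.0, 2e-9, 100e-12):           # none, 2 ns, 100 ps RMS
        t = t_ideal + rng.normal(0.0, jitter_rms, n)  # jittered sample instants
        x = np.sin(2 * np.pi * f0 * t)
        spec = np.abs(np.fft.rfft(x * window))
        spec /= spec.max()                            # normalize to the tone peak
        floor_db = 20 * np.log10(np.median(spec) + 1e-16)
        print(f"{jitter_rms * 1e12:6.0f} ps RMS jitter -> spectral floor ~ {floor_db:7.1f} dB")

The 2 ns run (old-school jitter) shows a much higher floor than the 100 ps run - which is the reverse inference above in action: very little extra junk means there can't have been much jitter.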
In the old days, data was generally created with its own clock, and the same clock was used by everything along the way.
Now, jitter is basically variations in the clock timing.
When a signal was fed into an old-style DAC, the clock was "filtered" using things like phase-locked loops.
These act very much as a flywheel does to a motor.... they allow the clock to pass through, while applying a sort of "electronic inertia", to smooth the motion.
When the data reached the DAC chip, at the point where it was converted, the amount of distortion would depend on how perfect the timing of the clock was.
The more "un-smooth-ness", the more distortion; the better job the filter did of removing the variations, then better the output was.
Obviously, how smooth the signal ends up will depend on both how smooth it was to begin with, and how effective your filters are.
And, in those days, adding more filters, or stacking up more stages of filtering, usually improved matters (most high-end DACs had two stages of PLL).
In those days, USB was especially bad, because the timing of the USB packets was especially "rough", and so was especially difficult to filter well.
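
If a sketch helps, here's the flywheel idea in the same vein (Python/NumPy, with made-up numbers): model the incoming clock's timing error as random noise, and model the PLL as a simple one-pole low-pass filter on that error - a crude stand-in for a real loop filter, not any particular design. It also shows why stacking a second stage used to help.

    # Minimal sketch: a PLL acting as a "flywheel" on clock timing.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    incoming = rng.normal(0.0, 2e-9, n)   # incoming timing error, ~2 ns RMS (illustrative)

    def pll_smooth(err, alpha):
        # One-pole low-pass: the output mostly keeps its own history
        # ("electronic inertia") and only leans a little on the newest error.
        out = np.empty_like(err)
        acc = 0.0
        for i, e in enumerate(err):
            acc += alpha * (e - acc)
            out[i] = acc
        return out

    for alpha in (0.1, 0.01):             # smaller alpha = a heavier flywheel
        cleaned = pll_smooth(incoming, alpha)
        print(f"alpha={alpha}: {incoming.std()*1e12:5.0f} ps in -> {cleaned.std()*1e12:4.0f} ps out")

    # Stacking a second stage (like the old two-PLL designs) smooths it further:
    twice = pll_smooth(pll_smooth(incoming, 0.01), 0.01)
    print(f"two stages: {twice.std()*1e12:4.0f} ps out")

Note that the output is smoother but never perfectly clean - which is exactly why the modern approach below gives up on filtering the incoming clock at all.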
Most modern equipment works rather differently.
Instead of taking the original clock and doing our best to smooth it with a filter, we throw away the original clock entirely and replace it with our own.
This is what an asynchronous USB input does....
And it's what an ASRC does....
(S/PDIF inputs by themselves do NOT re-clock the data - and so depend on the clock the data came in with.)
What this means is that, since each of those devices replaces the clock entirely, stacking them one after the other no longer makes sense (because the only clock that counts is the LAST ONE in the chain).
And, yes, both the USB input section and the ASRC each have their own clocks.
(And we don't bother to use terms like "precision clock" because those terms are so overused as to be meaningless... all modern clocks are pretty precise.)
For example, if you use the USB input on the DC-1, with the ASRC DISABLED, the clock you'll be using is the one in the DC-1's USB input circuitry.
And, if you enable the ASRC, when it processes the signal, it will throw that one away and replace it with its own clock, so the DAC will be using the ASRC's clock.
And, if you buy an Eitr, it will use its clock instead of the one in our USB input section at that step in the signal path.
And, if the ASRC is enabled, when the signal gets to it, it will throw away that clock and use its own anyway.
But, if you used an Eitr, and the ASRC is disabled, then the Eitr's clock will be the one that's sent on to the DAC.
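
Since that chain logic is easy to trip over, here's a tiny model of the "last clock wins" rule (Python; the jitter numbers are hypothetical placeholders, NOT published specs for the DC-1, the Eitr, or anything else):

    def effective_jitter_ps(chain):
        # chain = list of (name, reclocks, own_jitter_ps) in signal order.
        # Returns the jitter of the LAST stage that re-clocked the signal.
        jitter = None
        for name, reclocks, own_jitter in chain:
            if reclocks:
                jitter = own_jitter   # whatever clock came before is thrown away
        return jitter

    usb_input = ("USB input",     True,  80)   # hypothetical 80 ps
    eitr      = ("Eitr",          True,  40)   # hypothetical 40 ps
    asrc_on   = ("ASRC enabled",  True,  30)   # hypothetical 30 ps
    asrc_off  = ("ASRC disabled", False, None)
    spdif_in  = ("S/PDIF input",  False, None) # S/PDIF inputs do NOT re-clock

    print(effective_jitter_ps([usb_input, asrc_on]))        # 30 - the ASRC's clock wins
    print(effective_jitter_ps([eitr, spdif_in, asrc_off]))  # 40 - the Eitr's clock survives
    print(effective_jitter_ps([usb_input, asrc_off]))       # 80 - the USB input's clock

Stacking re-clockers buys you nothing: only the last stage that re-clocks matters.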
Neither the jitter specs of the Gungnir's USB input nor those of the Eitr are published... so there's no way to say for sure where the improvement lies.
Likewise, while it's fair to say things like "we filter the USB ground and power", the actual specs for how well that's done are complex, and probably wouldn't help you figure out what to expect anyway.
Although, to be honest, if there's such an obvious improvement, then that suggests that there was something wrong before that needed fixing....
(Remember that there is no "endless path to improvement"; there is perfection, and all you can do is get closer to it; and, if an upgrade gets you closer, then what you had before must have been further away, right?)
Note that we don't bother to try and take those really hard measurements either....
(If we wanted to buy the really expensive equipment to measure it directly, we'd have to raise the price of your DC-1 by $100 to pay for it.)
We apply the same logic as everyone else....
If there was a problem, it would be audible, and it would show up in the distortion specs....
Specifically, it shows up in the distortion spectrum graph.... the one with the big peak in the middle and the "grass" around it.
If there was significant jitter, the grass around the central peak would be higher, and there would be more of it.
(And you can tell all sorts of interesting details by analyzing "where the high grass is".)
Likewise, a high noise floor can be caused by all sorts of issues, which can include USB power noise.
(I think I can honestly say that I've never actually heard the noise floor on a DC-1... so I can't imagine noticing if it were any lower.)
However, to answer your question......
- A lower noise floor could be due to better isolation from USB power or ground noise, if that was a problem before
- "Better clarity"
could be due to lower jitter, which could result from using better clocks, or from better component and board layout
(I believe they still use the same C-Media USB interface chip as before - but there are all sorts of potential variations in the circuitry you put around it)
- Since bits really are just bits.... and we assume they were all correct to begin with... there's no way they could have improved the data itself
- Since the bits are the same, and the bits aren't being changed, there's no way the frequency response or dynamic range could actually be different (although the lower noise floor could make the dynamic range or frequency response seem better)
I don't normally post effusive accolades about audio gear, but when I do it's because it is to me something really special. I use Schiit DACs and in my headphone system, it's a Gungnir Multibit DAC driving a Mjolnir 2 amplifier to my Mr. Speakers Ether C headphones. I really like the sound of this rig, driven via USB directly from my music server. Recently Schiit released what they are calling their Gen 5 USB board, claiming that it "solves" all audio issues that plague USB audio. My Gungnir is upgradable to Gen 5, but I was both skeptical and unwilling to part with it for a couple weeks for the upgrade. Then Schiit released a product called Eitr, which is a USB to S/PDIF converter using their Gen 5 technology. OK, I thought, I'll try this.
This is perhaps the biggest upgrade I've ever experienced in a digital USB-based audio system. The difference in running USB from my server to Eitr then S/PDIF to Gungnir was one of those "holy crap" moments. Much lower noise floor, much better clarity and apparent frequency response, better dynamic range, better everything. It sounds like I bought a new set of headphones. Either the USB input on the Gungnir sucks, or the Eitr is worth 100 times its $179 price point.
Highly recommended if you use USB audio and your DAC can accept S/PDIF input. I am bowled over.
www.schiit.com/products/eitr

That's great! I've been interested in the Eitr as well.
I'm not saying you didn't hear a difference. The USB solutions I've heard - namely on Emotiva DACs - have been lackluster to me, and I always preferred the non-USB solution. People roll their eyes when I say it. Bits are bits, jitter is too low to be audible, and all that.
But since you are an engineer, I wonder what measurable component could have changed that would have yielded this difference? I believe the Schiit DAC was already asynchronous before this upgrade, so the jitter must have been immeasurably low. Have you tried comparing it with S/PDIF? Do you notice a difference?