|
Post by tatentoby on Aug 3, 2011 23:59:03 GMT -5
First of all I want to say that I love this section of the Lounge. It helps greatly. My question is: which is the better of the two technologies, up-sampling or oversampling? I read an article recently that was in favor of up-sampling. Thanks again.
|
|
|
Post by Nemesis.ie on Aug 4, 2011 4:37:25 GMT -5
Perhaps you can define exactly what the two mean in this context? Are these what you mean:
en.wikipedia.org/wiki/Oversampling
en.wikipedia.org/wiki/Upsampling
I think there are other factors at play. In the case of oversampling, what oversampling factor is used? For example, if you take an analogue source with a highest frequency of 20kHz and oversample it by a 10x factor, you would be sampling at 200kHz (or 192kHz) and have a very nice, accurate digital reproduction. If you took a standard CD sample of 44.1kHz of the same source and then upsampled it to 200kHz (192kHz being the closest standard number), you would have a file the same size, but I would think the oversampled one would be much more true to the original. I think a little context is needed; they can likely be used for different things, and the one used may depend on what you are trying to achieve. Or have I missed something?
|
|
DYohn
Emo VIPs
Posts: 18,485
|
Post by DYohn on Aug 4, 2011 9:21:11 GMT -5
Oversampling is a technique used during an A-D process that helps reduce errors because more data is captured during the quantization (digital conversion) process. Up-sampling means you start with a signal that is digital already and add samples to it in order to convert it to a higher sample rate. In general, oversampling has a direct impact on the quality of the digitized signal while up-sampling does not; you end up with the same signal you started with, only encoded at a different sample rate.
|
|
|
Post by Nemesis.ie on Aug 4, 2011 12:26:16 GMT -5
Which is another way of putting what I said I think. i.e. oversampling the original analogue is likely to produce the best result.
|
|
Chris
Minor Hero
Posts: 94
|
Post by Chris on Sept 22, 2011 16:58:29 GMT -5
Nemesis.ie is correct; oversampling will always provide the best results. If you sample at a lower rate, which means spacing out your samples in time, you lose some information, and you can NEVER get that information back. So if you then upsample, you're doing nothing more than adding more bits to the already down-sampled source, which will not recover the lost samples.

We'll call this our oversampled version, sampled at 1-second intervals (voltage amplitudes):

|..........*....*
|........*........*..............*
|......*...........*........*....*
|....*...............*......*...........*
|..*........................................*
|*_______________________.........*________________
 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14  (time in seconds)

Then you sample the same song at 2-second intervals (our under-sampled version, which we want to "upsample" later):

|...............*
|...........*
|.......................*.....*......*
|.....*
|...........................................*
|*_________________________....*________________
 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14  (time in seconds)

See how it's sort of the same picture but missing samples (information)? It's not quite as smooth in its transitions. If you attempt to upsample the second graph, you must choose an algorithm to fill in the missing samples. So you may do something like: look at a sample, then its succeeding sample; if it's more, split the difference and add a sample in; if it's less, split the difference and add a sample. This is a simple example and may provide satisfactory results, but as the amplitudes switch faster, or as the target sampling rate increases, reconstructing the recording by upsampling becomes more and more difficult.

Also, the term "oversampled" is tricky in its meaning. There is no actual way to sample too much, per se. There are rules of thumb, i.e. the Nyquist rate, which says that if you sample at twice the bandwidth of a signal you should be able to accurately reconstruct that signal. So if you sampled at, say, 3x the bandwidth, you have effectively "oversampled".

Wanted to add on to the above: everything is oversampled in order to make the low-pass filter required to reconstruct the signal more realistic and cheaper. For example, the telephone line has a bandwidth of 3.2 kHz and they sample at 8 kHz to give themselves some margin. The typical rule of thumb is 10%-20% more.

Sorry for the long-winded response; my engineer side takes over from time to time. Funny thing: my digital communications class was all about sampling and rates last night. If you want to know more detail or see the notes from that lecture I would be happy to scan them and email them over to you. -Chris
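Chris's "split the difference" idea can be sketched in a few lines of Python. This is a hypothetical 2x linear-interpolation upsampler for illustration only, not any particular DAC's algorithm; note that the inserted midpoints are guesses, not recovered information:

```python
def upsample_2x(samples):
    """Double the sample rate by inserting the midpoint between
    each pair of neighboring samples (linear interpolation)."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2)  # interpolated sample: a guess, not recovered data
    out.append(samples[-1])
    return out

# A made-up "under-sampled" amplitude list, like the 2-second graph above:
coarse = [0.0, 2.0, 4.0, 5.0, 5.0, 5.0, 1.0, 0.0]
print(upsample_2x(coarse))
```

If the original signal wiggled between those coarse samples, the midpoints will simply be wrong, which is the point of the graphs above.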
|
|
|
Post by tatentoby on Nov 29, 2011 18:49:34 GMT -5
I have been meaning to get back to this thread and thank you for your reply; it helped a lot. It was the best explanation that I have heard.
|
|
KeithL
Administrator
Posts: 10,256
|
Post by KeithL on Dec 18, 2012 15:29:31 GMT -5
I was wandering past, and sensed the need for some serious clarification here...
Oversampling and upsampling are BOTH ways of changing the sample rate of a digital audio file (or stream) to a higher sample rate. When this is done inside the DAC itself (and usually in even multiples; i.e., 8x) it is usually called oversampling. When it's done somewhere else before the DAC (could be in a player, or a computer, or a separate box), it's usually called upsampling. Usually upsampling is done to some sample rate that is not an even multiple, but is instead a standard value (96k or 192k). Neither process is more accurate, or "better", and often both are used in one device.
Neither process, no matter how well it is done, can "create information", so the resulting digital audio CANNOT be more accurate than the original. The extra samples are interpolated from the original data, and, if the math is done well, they will not adversely affect the accuracy, but they cannot improve it. Oversampling does NOT mean "upsampling too much", and either process, if done correctly, is equally accurate. (Oversampling, to an even multiple, uses easier math, so is easier to do.)
So, then, why bother to do it?
The answer is simple. The highest frequency that a particular digital signal can contain is limited by the sample frequency (specifically, the limit is 1/2 the sample frequency); this is called the Nyquist frequency. So, for a CD, with a sample rate of 44,100 Hz, the highest frequency it can contain is 22,050 Hz (actually slightly lower). But, even more importantly, the conversion process results in all sorts of nasty noise and "byproducts" at frequencies above that 22,050 Hz. Without going into a lot of math.... you MUST use a high-cut filter to filter out EVERYTHING above 22,050 Hz in order to get back your original signal (and to prevent a lot of nasty noise and distortion).
Unfortunately, designing and building this filter can be a real problem. Audio extends up to 20 kHz, so we need a filter that passes everything up to 20 kHz without messing it up, but cuts off EVERYTHING above 20 kHz. Ideally, it should be down about 100 dB at 22 kHz. This is referred to as "a brick wall filter", and is impossible to actually make. In real life, you're stuck with a compromise that cuts off most of the stuff past 20 kHz, yet doesn't do too much damage to the audio band. [These filter compromises were why the early CD players often didn't sound very good.]
Now, let's try upsampling our signal to 192k. By upsampling, we have "magically" changed our filter requirement to one that is easy to implement. The audio information stays the same but, because we have increased the sample rate, the Nyquist frequency is much higher. Instead of needing a brick wall filter, now all we need is a filter that passes everything up to 20 kHz without messing it up (that part doesn't change), yet is down a lot at our NEW Nyquist frequency (96 kHz). This filter is a lot easier to design (it's actually possible and practical), and we can even build it with cheaper components and still get excellent results.
There you have it.....
The short answer is that upsampling and oversampling don't do anything to improve the audio quality; what they do is make it possible to design the (required) filter circuitry in such a way that it doesn't make a mess of the converted audio.... What they do is to make it possible for the DAC to do its job properly (which is virtually impossible to do without upsampling). And, finally, since most modern DACs do oversampling internally (it's referred to as an oversampling filter), upsampling outside the DAC as well is really more or less redundant.
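A rough way to put numbers on the brick-wall-versus-relaxed filter tradeoff described above. This sketch assumes a simple dB-per-octave framing and a 100 dB attenuation target purely for illustration; real filter design is considerably more involved:

```python
import math

def filter_slope(pass_edge_hz, sample_rate_hz, atten_db=100.0):
    """Roll-off steepness (dB per octave) a reconstruction filter would
    need in order to fall atten_db between the pass-band edge and the
    Nyquist frequency for the given sample rate."""
    nyquist = sample_rate_hz / 2
    octaves = math.log2(nyquist / pass_edge_hz)
    return atten_db / octaves

print(filter_slope(20_000, 44_100))   # brick wall: hundreds of dB per octave
print(filter_slope(20_000, 192_000))  # after upsampling: far gentler slope
```

At 44.1k the filter has barely a seventh of an octave between 20 kHz and Nyquist; after upsampling to 192k it has well over two octaves, which is why the relaxed filter is cheap and easy.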
|
|
|
Post by paradigmE on Dec 18, 2012 17:15:02 GMT -5
Keith's answer should be a "sticky" on the Internet as far as this question goes.
|
|
|
Post by donh50 on Jun 27, 2018 18:04:30 GMT -5
This question came up recently on another forum. Below are my answers. Oversampling does NOT automatically improve things; it should reduce in-band quantization noise, but the higher sampling rate often means linearity (distortion) is worse. You can use either technique with any data converter (ADC or DAC) but most people think of delta-sigma DACs (and ADCs) since they use oversampling by design.
As noted, it is a pretty hand-waving, high-level description and (as usual) the devil's in the details.
HTH - Don (yes, I have designed data converters for a living, but at much higher rates than audio)
Handwaving follows.
The Nyquist criterion says you must sample at more than 2x the highest signal frequency (this minimum rate is the Nyquist rate) to be able to reconstruct the signal. Oversampling is sampling at more than that, typically by a factor of two or more. For example, if we assume the highest signal frequency is 20 kHz, then the CD sampling rate of 44.1 kS/s meets the Nyquist criterion and allows capture of signal up to (but not including) 22.05 kHz. 88.2 kS/s is oversampled by a factor of two, and so forth.
Oversampling provides margin for the filters needed to band-limit the signal and you can improve the signal-to-noise ratio (SNR). By doubling (or more) the sampling rate, quantization noise (the noise generated when you convert from analog to digital samples) is spread over a larger frequency range. The noise is determined by the number of conversion bits, so if you keep the number of bits and the frequency bandwidth the same, you gain 3 dB in SNR by filtering out half the noise (that is, the noise above Nyquist, say above 20 kHz).
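Don's 3 dB figure can be sketched with the standard ideal-quantizer formula (6.02N + 1.76 dB) plus 10*log10 of the oversampling ratio. The helper below is a hypothetical illustration assuming an ideal converter and a perfect post-filter, not a model of any real part:

```python
import math

def ideal_snr_db(bits, oversampling_ratio=1.0):
    """Ideal quantization-limited SNR of an N-bit converter, plus the
    gain from filtering out the noise spread above the signal band."""
    return 6.02 * bits + 1.76 + 10 * math.log10(oversampling_ratio)

print(ideal_snr_db(16))     # ~98.1 dB: a 16-bit converter at the Nyquist rate
print(ideal_snr_db(16, 2))  # ~101.1 dB: 2x oversampling buys ~3 dB
```

Each doubling of the sample rate spreads the same total quantization noise over twice the bandwidth, so filtering off the half above the signal band gains about 3 dB, exactly as described above.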
Delta-sigma and other data converters take advantage of oversampling by using high oversampling ratios, noise shaping that "pushes" the conversion noise past (higher than) the signal band, and then using high-order filters to reduce the noise to achieve much higher in-band SNR.
Upsampling takes data sampled at one rate and samples it (the same data) again (resamples) at a higher rate. You can theoretically gain SNR as in oversampling, but you must somehow "fill-in" or generate new signal samples between the actual samples. If the samples you have are 1 and 3, then if you upsample by two an interpolation algorithm can generate a new intermediate sample of 2. The catch is the algorithm cannot know exactly what the original signal was like before it was sampled, so the prediction (interpolated sample) may be wrong. How to design an optimal interpolation filter is the topic of many classes, texts, and proprietary algorithms.
Interpolation between two known samples when there is no higher-frequency content possible (oversampling) is not in general the same as predictive interpolation applied when the sampling rate is raised (upsampling). Some use the term "extrapolation" when upsampling to indicate it is potentially adding signals that do not lie between the two original samples. (Two is to make it easier to see; it is generally a number of samples before and after the current sample that are used to determine the new sample value.) When you oversample, the input signal bandwidth does not change. When you upsample, you open the door to adding frequency (and amplitude) content beyond what was in the original signal. That can lead to things like intersample clipping that has been discussed here (and elsewhere).
Upsampling can be performed without increasing the output bandwidth, of course.
Upsampling happens whenever you play a CD at higher than CD rate and resolution. Play it back at 24/96 and the algorithm may just zero out the lower bits, or may try to fill them in based on what it thinks the signal would have been, and ditto for frequency content. Since Nyquist is 48 kHz instead of 22.05 kHz, the algorithm may try to "add back" high-frequency content it predicts was lost in the original recording. You could (as you say) prevent (constrain) the algorithm, or add a filter to roll off the extra HF content, but that is not the general case IME/IMO/etc. Certainly I have read plenty of marketing talk about the advantages of upsampling your CDs into the latest greatest hi-rez format.
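The intersample-clipping point mentioned above is easy to demonstrate: sample a full-scale sine at one quarter of the sample rate with a 45-degree phase offset, and every stored sample lands at about 0.707 of full scale even though the underlying waveform peaks at 1.0. An interpolating upsampler that faithfully reconstructs that waveform must therefore produce values above the largest stored sample. A small sketch (illustrative numbers only):

```python
import math

fs = 44_100                # CD sample rate
f = fs / 4                 # tone at one quarter of the sample rate
phase = math.pi / 4        # sampling instants fall between the waveform peaks

samples = [math.sin(2 * math.pi * f * n / fs + phase) for n in range(8)]
peak_sample = max(abs(s) for s in samples)

# Every stored sample is ~0.707, yet the analog waveform peaks at 1.0,
# so faithful reconstruction overshoots the largest sample by ~3 dB.
print(round(peak_sample, 3))
```

If the stored samples already sit at digital full scale, a reconstructed peak above them has nowhere to go, which is where intersample overs come from.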
|
|
|
Post by audiobill on Jun 28, 2018 17:44:46 GMT -5
No up or over sampling at all needed with well-engineered interstage transformer coupling.
Much more natural.
|
|
|
Post by donh50 on Jun 28, 2018 19:21:43 GMT -5
Uh, I do not see what a transformer (which has its own pros and cons) has to do with sampling rates... If you have a delta-sigma DAC, like the vast majority of audio DACs these days, it is doing oversampling internally anyway (there are a few esoteric architectures that are delta-sigma and do not oversample, but I have never seen them in an audio/LF DAC).
|
|
|
Post by audiobill on Jun 28, 2018 19:58:38 GMT -5
Oh, and trash that delta-sigma in favor of a good multibit......
How to get truly great sound quality from a digital source: in order to do really good digital you need to do really good analog! I don't care how many bits are used or samples per second - most manufacturers are obsessed with digital specs and don't put much effort into all the analog circuitry that is involved in a top-level DAC. Then again, it's the analog that our ears are listening to - everything in the digital domain must be converted to analog for our ears and brains to understand, and this is a BIG part of the DAC. In order to enjoy digital music on a single-ended 300B system, for example, there are a number of factors in the architecture that we consider of utmost importance. Let's look at our own DAC 4.1 to start.
First, in our opinion, this needs to be a Non-Oversampling Resistor Ladder Architecture (R-2R) in order to be true to the digital information residing on your disc. Second, the digital-to-analog conversion section needs a superb power supply to provide exact DC voltages. We do this with our on-board DAC power supply and regulation board.
The small analog signal that is created on our DAC board uses a current output from the DAC chip along with a high-quality Audio Note tantalum resistor to create the output voltage. This signal is then fed into a nickel-core 1:1 transformer (I/V transformer) that allows this signal to be replicated on the analog line board. The analogue board is a tube line stage with transformer coupling associated with it. Our M2 power supply (which is both tube rectified and tube regulated) provides the HT voltage for this board.
The design of the output transformers using C-cores is also critical to replicate all the frequencies required in the analog signal and be able to drive this signal to the next device in the chain - either an integrated amplifier or a pre-amplifier. This overall Audio Note design philosophy has made our DACs very popular amongst demanding audiophiles who want to hear the ultimate in digital reproduction with no fatigue! Check out the DAC 4.1 and be prepared to enjoy your CDs & digital music in an entirely new way.
|
|
|
Post by leonski on Oct 25, 2021 21:23:30 GMT -5
[quoting KeithL's post of Dec 18, 2012, above]
My original Philips... sold in the USA as Magnavox... was a 4x oversampling player, so it ran at 176.4 kHz, and had a very good sound for the day. If I could find a laser, I could PROVE it....
It used an early version of the TDA1540 chipset of ONLY 14 bits... not 16, which became the norm on gen-II players. It is a toploader, and I think an FD-1000 model. Stereophile was gobsmacked when they first reviewed this player....
|
|
|
Post by leonski on Oct 25, 2021 21:26:19 GMT -5
Lot of chit-chat about the Nyquist frequency:
Test question: what happens IF you exceed the magic '1/2 the sampling frequency'? (Ignore guard band for now.)
|
|
KeithL
Administrator
Posts: 10,256
|
Post by KeithL on Oct 26, 2021 15:48:37 GMT -5
At this point we're talking about the process of turning an analog input signal into digital...
Basically, if you allow any content that extends above the Nyquist frequency to enter the A/D converter you get a phenomenon called aliasing...
In this case, this aliasing is usually described as "energy at frequencies above the Nyquist frequency is folded down around the Nyquist frequency into the audible band". For example, the sample rate for audio CDs is 44k - so the Nyquist frequency is 22k... Let's say you have a tone at 27 kHz (5 kHz above the Nyquist frequency) that you fail to filter out... When it is passed to the ADC, the result when the digital output of that ADC is converted back into analog will include an aliased tone 5 kHz BELOW the Nyquist frequency (17 kHz). The further your unwanted "leakage" extends above the Nyquist frequency, the further the resulting aliased junk will extend below it. Music is not generally comprised of steady tones... and most of the energy in music is well below 20 kHz.
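The fold-down arithmetic in that example generalizes to a small helper. This is a sketch using the rounded 44 kHz figure from the example; it handles tones at any frequency by folding around multiples of the sample rate:

```python
def alias_frequency(f_in_hz, sample_rate_hz):
    """Apparent frequency of an unfiltered input tone after sampling:
    content above Nyquist folds back down around the Nyquist frequency."""
    f = f_in_hz % sample_rate_hz
    return sample_rate_hz - f if f > sample_rate_hz / 2 else f

print(alias_frequency(27_000, 44_000))  # the 27 kHz tone lands at 17,000 Hz
```

A tone 5 kHz above the 22 kHz Nyquist frequency reappears 5 kHz below it, exactly as described.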
With instruments that produce a chain of harmonics the level of harmonics at very high frequencies is pretty low... However, certain instruments like cymbals produce what is essentially a burst of noise, which may extend to very high frequencies... In addition to that, depending on other factors, there may be significant noise present, even at very high frequencies... Depending on what is present, you may hear occasional "birdies", or an elevated and odd sounding noise floor, or both... And instruments like cymbals may sound odd or rough because the audio content that belongs there is being summed with unwanted stuff... (And usually this will somewhat "follow the music", which is considered to be "correlated noise", and which tends to be more audible and annoying than random noise.)
This is why you absolutely do want to apply the proper filtering to analog signals before converting them into digital. When you convert the digital audio signal back into analog a lot of extra information is created as part of the sampling and conversion process...
And this extra energy must be removed by a reconstruction filter. (Failure to do so may result in significant amounts of ultrasonic noise which can cause other equipment to distort and potentially damage your tweeters.) The "catch" is that this must be a low pass filter that has a lot of attenuation above the Nyquist frequency... but it also must avoid causing audible issues in the audible frequency range.
Without oversampling, for a CD at 44kHz, this filter would have to have a lot of attenuation above 22 kHz, while still not causing audible issues in the band below 20 kHz. This type of filter is difficult to design, expensive to build, and tends to cause other problems. However, by oversampling that digital audio signal to 96k, we raise the Nyquist frequency to 48k. So now we only need to design a filter that has lots of attenuation above 48k, but is flat to 20 kHz, which is a lot easier to do. (We usually oversample to even higher frequencies... allowing us to use an even simpler filter that is even less intrusive.)
So, even though we haven't made the signal "better", we have made it easier to design a filter that can handle it properly, and won't negatively impact the audio quality.
( Yes, I rounded the CD sample rate to 44k because I'm lazy... it's actually 44.1k with a Nyquist frequency of 22,050 Hz. )
|
|
|
Post by leonski on Oct 26, 2021 17:38:27 GMT -5
I should have Excluded you, Keith......
|
|
|
Post by routlaw on Oct 26, 2021 19:22:26 GMT -5
[quoting audiobill's post of Jun 28, 2018, above]
And yet you don't own a piece of Audio Note hifi gear, DAC or otherwise. However, I do remember sometime ago you owned one but sold it for another source that I am fairly certain was NOT an R2R ladder DAC. Not trying to pick a fight here, but there seems to be some inconsistency with your philosophy. However, I don't doubt for a second that the Audio Note gear sounds really good.
|
|
|
Post by audiobill on Oct 26, 2021 19:27:07 GMT -5
It was great, just as the Directstream, Grace, McIntosh and many others I’ve sampled!!
And I didn't write that quote from Audio Note.
|
|
|
Post by routlaw on Oct 26, 2021 19:33:53 GMT -5
audiobill, I realize you did not write that; it was obviously direct from the AN website. But again, I still don't understand your POV given that you don't own any of this, other than just to be contrarian perhaps. Regardless, I wish you well with whatever you end up listening to, or should that be "with".
|
|
KeithL
Administrator
Posts: 10,256
|
Post by KeithL on Oct 27, 2021 10:40:51 GMT -5
Since this seems to have floated to the top again I'm going to add a few comments....

1) I absolutely agree with their statement that "in order to do really good digital you need to do really good analog". In virtually all types of audio signal flow the information going in is analog - and what you end up listening to is also analog - so poor analog performance can certainly limit overall performance.

2) I have a bit of a problem with their next assertion. Since a single-ended triode has rather high levels of distortion I just cannot classify it as "high quality audio" - so, to me, what it delivers is "poor analog performance". (It may "sound nice" to some people - but it is most certainly going to add coloration.)

3) We now get to THEIR OPINION (their words) that non-oversampling R2R DACs do a better job of extracting the information that exists in your digital audio source. The reality is that, in a relatively ideal world, both R2R DACs and Delta-Sigma DACs would have their various advantages and disadvantages. However, in the real world, the practical design issues we face with R2R DACs virtually ensure that they will perform less well. It is difficult and expensive to design an R2R DAC that performs as well as a moderately well designed D-S DAC... It is virtually impossible, or at least incredibly expensive, to design an R2R DAC with really high performance... (If you doubt this then try to find a price on an R2R DAC with true measured 32 bit performance.)

Non-oversampling is another issue altogether (there are both oversampling and non-oversampling R2R DACs). The purpose of oversampling is to enable practical reconstruction filters to be used at the sample rates used in many common sources - including CDs. Similar to the above, with a CD as a source, it is incredibly expensive to design and manufacture a reconstruction filter that performs well. (You need a filter that combines being virtually flat to 20 kHz, but down at least 70-80 dB at 24 kHz, while having low harmonic and phase distortion... These numbers are defined by the sample rate of the data on a CD... the sample rate of a CD is 44.1 kHz, with a Nyquist frequency of 22,050 Hz. A filter that meets these requirements is really difficult to design with analog components... and parts with the accuracy required to actually build it are expensive and difficult to source.)

Oversampling artificially raises the Nyquist frequency... This DOES NOT add any information or improve the quality of the signal. What it does is enable us to get really excellent performance from a simpler filter - which is both simpler to design and costs less to manufacture. (If we oversample the data from that CD to 96k, the requirements for our oversampling filter are that it is flat and accurate to 20 kHz, and down 70-80 dB at 48 kHz, which is a much less difficult design requirement.)

4) Using a transformer as an I/V conversion stage has a few real benefits... including very low electronic noise. HOWEVER, all transformers distort, and many are also sensitive to picking up external magnetic noise (often hum). To be fair, the grid on a tube provides a very high impedance load for the I/V transformer, which makes it an excellent choice in that application (although you could do the same with an FET-input op-amp). Of course, since op-amps are inherently I/V devices, the op-amp will do a more accurate job as an I/V conversion stage all by itself.

5) I'm sorry, but following all this with an additional tube output stage, and another transformer, is just an affectation. Doing so does nothing to improve performance... other than to add tube coloration... if you consider that to be an improvement.
(And, likewise, a tube rectifier simply offers poorer performance than a well designed solid state rectifier... although it is a bit more difficult to screw up.)

[quoting audiobill's and routlaw's posts above]
|
|