First of all, I want to say that I love this section of the Lounge; it helps greatly. My question is: which is the better of the two technologies, upsampling or oversampling? I read an article recently that was in favor of upsampling. Thanks again.
I think there are other factors at play. In the case of oversampling, what oversampling factor is used?
For example, if you take an analogue source with a highest frequency of 20 kHz and oversample it by a factor of 10, you would be sampling at 200 kHz (or 192 kHz) and have a very nice, accurate digital reproduction.
If you took a standard 44.1 kHz CD sampling of the same source and then upsampled it to 200 kHz (192 kHz being the closest standard rate), you would have a file of the same size, but I would think the oversampled one would be much truer to the original.
I think a little context is needed: they can likely be used for different things, and the one used may depend on what you are trying to achieve.
Or have I missed something?
1 x Yamaha CX-A5100 (pre/pro) 7 x UPA-1 1 x XPA-5
9 x ERM-6.3 (Main channels + front height) 2 x in-ceiling (rear overhead) 2 x Rythmik F15HP (front) 2 x Rythmik F15 (rear)
Screen Excellence AT 115" (16:9) Optoma UHZ65 laser projector
Oversampling is a technique used during the A-to-D process that helps reduce errors because more data is captured during quantization (digital conversion). Upsampling means you start with a signal that is already digital and add samples to it in order to convert it to a different sample rate. In general, oversampling has a direct impact on the quality of the digitized signal while upsampling does not: you end up with the same signal you started with, only encoded at a different sample rate.
“Seeing is better than being blind, even when seeing hurts.” ― Abraham H. Maslow, Toward a Psychology of Being
Nemesis.ie is correct; oversampling will always provide the best results.
If you sample at a lower rate, which means spacing your samples further apart in time, you lose some information, and you can NEVER get that information back. So if you then upsample, you're doing nothing more than adding bits in from the already down-sampled source, which will not recover the lost samples.
We'll call this our oversampled version (sampled at 1-second intervals; vertical axis is voltage amplitude):
|..........*....*
|........*........*..............*
|......*...........*........*....*
|....*...............*......*...........*
|..*........................................*
|*_______________________.........*________________
 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 (time in seconds)
Then you sample the same song at 2-second intervals (our undersampled version, which we want to "upsample" later):
|...............*
|...........*
|.......................*.....*......*
|.....*
|...........................................*
|*_________________________....*________________
 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 (time in seconds)
See how it's sort of the same picture but with samples (information) missing; it's not quite as smooth in its transitions. If you attempt to upsample the second graph, you must choose an algorithm to fill in the missing samples. So you might do something like: look at a sample and then the sample that follows it; if the next one is higher, split the difference and add a sample in between; if it's lower, split the difference and add a sample the same way.
This is a simple example and may provide satisfactory results, but as the amplitudes switch faster, or as the sampling rate decreases, reconstructing the recording by upsampling becomes more and more difficult.
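That "split the difference" idea can be sketched in a few lines of Python (the sample values below are hypothetical, just to show the mechanics):

```python
def upsample_linear(samples):
    """Insert one interpolated sample between each pair of originals
    by splitting the difference (simple linear interpolation)."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2)  # the guessed, filled-in sample
    out.append(samples[-1])
    return out

coarse = [0, 2, 6, 4]            # hypothetical samples taken every 2 seconds
print(upsample_linear(coarse))   # [0, 1.0, 2, 4.0, 6, 5.0, 4]
```

The filled-in values are pure guesses; nothing here recovers what the signal actually did between the original samples.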
Also, the term "oversampled" is tricky in its meaning. There is no actual way to sample too much, per se. There are rules of thumb, i.e. the Nyquist rate, which says that if you sample at twice the bandwidth of a signal you should be able to accurately reconstruct that signal. So if you sampled at, say, 3x the bandwidth, you have effectively "oversampled".
I wanted to add to the above: everything is oversampled in order to make the low-pass filter required to reconstruct the signal more realistic and cheaper to build. For example, a telephone line has a bandwidth of 3.2 kHz and is sampled at 8 kHz to give some margin. The typical rule of thumb is 10%-20% more.
Sorry for the long-winded response; my engineer side takes over from time to time.
Funny thing, my digital communications class was all about sampling and rates last night.
If you want more detail, or to see the notes from that lecture, I would be happy to scan them and email them over to you.
I was wandering past, and sensed the need for some serious clarification here...
Oversampling and upsampling are BOTH ways of changing the sample rate of a digital audio file (or stream) to a higher sample rate. When this is done inside the DAC itself (and usually in even multiples, i.e., 8x), it is usually called oversampling. When it's done somewhere else before the DAC (in a player, a computer, or a separate box), it's usually called upsampling. Upsampling is usually done not to an even multiple but to a standard value (96k or 192k). Neither process is more accurate, or "better", and often both are used in one device.
Neither process, no matter how well it is done, can "create information", so the resulting digital audio CANNOT be more accurate than the original. The extra samples are interpolated from the original data, and, if the math is done well, they will not adversely affect the accuracy, but they cannot improve it. Oversampling does NOT mean "upsampling too much", and either process, if done correctly, is equally accurate. (Oversampling, to an even multiple, uses easier math, so is easier to do.)
So, then, why bother to do it?
The answer is simple. The highest frequency that a particular digital signal can contain is limited by the sample frequency (specifically, the limit is 1/2 the sample frequency); this is called the Nyquist frequency. So, for a CD, with a sample rate of 44,100 Hz, the highest frequency it can contain is 22,050 Hz (actually slightly lower). But, even more importantly, the conversion process produces all sorts of nasty noise and "byproducts" at frequencies above that 22,050 Hz. Without going into a lot of math: you MUST use a high-cut filter to remove EVERYTHING above 22,050 Hz in order to get back your original signal (and to prevent a lot of nasty noise and distortion).
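To see why everything above the Nyquist frequency must be removed, here is a small numerical illustration (the 30 kHz tone is my own example, not from the posts above): a tone above Nyquist produces exactly the same samples as an in-band tone at the "folded" frequency, so once it is in the data it is indistinguishable from real audio.

```python
import math

fs = 44100            # CD sample rate
f_hi = 30000          # a tone above the 22,050 Hz Nyquist frequency
f_alias = fs - f_hi   # 14,100 Hz: where that tone folds back into the band

hi_tone = [math.sin(2 * math.pi * f_hi * n / fs) for n in range(16)]
alias = [-math.sin(2 * math.pi * f_alias * n / fs) for n in range(16)]

# The two sample sequences are identical: the converter cannot tell them apart
assert all(abs(a - b) < 1e-9 for a, b in zip(hi_tone, alias))
```

This is the "aliasing" that the reconstruction filter exists to prevent.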
Unfortunately, designing and building this filter can be a real problem. Audio extends up to 20 kHz, so we need a filter that passes everything up to 20 kHz without messing it up, but cuts off EVERYTHING above 20 kHz. Ideally, it should be down about 100 dB at 22 kHz. This is referred to as "a brick wall filter", and is impossible to actually make. In real life, you're stuck with a compromise that cuts off most of the stuff past 20 kHz, yet doesn't do too much damage to the audio band. [These filter compromises were why the early CD players often didn't sound very good.]
Now, let's try upsampling our signal to 192k. By upsampling, we have "magically" changed our filter requirement to one that is easy to implement. The audio information stays the same but, because we have increased the sample rate, the Nyquist frequency is much higher. Instead of needing a brick wall filter, now all we need is a filter that passes everything up to 20 kHz without messing it up (that part doesn't change), yet is down a lot at our NEW Nyquist frequency (96 kHz). This filter is a lot easier to design (it's actually possible and practical), and we can even build it with cheaper components and still get excellent results.
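To put rough numbers on how much easier that filter gets (measuring the transition band in octaves is just one convenient yardstick, my choice for this sketch):

```python
import math

def transition_octaves(f_pass, f_stop):
    """Width of a filter's transition band in octaves; more octaves
    means a gentler (cheaper) filter reaches the same attenuation."""
    return math.log2(f_stop / f_pass)

# 44.1k playback: pass 20 kHz, be way down by 22.05 kHz -- a brick wall
print(round(transition_octaves(20_000, 22_050), 2))  # 0.14
# after upsampling to 192k: pass 20 kHz, be way down by 96 kHz -- easy
print(round(transition_octaves(20_000, 96_000), 2))  # 2.26
```

Going from about a seventh of an octave to over two octaves of transition band is the whole reason the filter becomes practical to build.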
There you have it.....
The short answer is that upsampling and oversampling don't do anything to improve the audio quality; what they do is make it possible to design the (required) filter circuitry so that it doesn't make a mess of the converted audio. In other words, they make it possible for the DAC to do its job properly (which is virtually impossible without them). And, finally, since most modern DACs do oversampling internally (it's referred to as an oversampling filter), upsampling outside the DAC as well is really more or less redundant.
Keith's answer should be a "sticky" on the Internet as far as this question goes.
XPA-200 USP-1 ADSL990's/Magneplanar MMG's Carver C1 Pre Denon AVR-2801 Sources: Itunes on MAC - Jriver PC TT - Lovely Oak - 1950's Rek-O-Kut Rondine Jr L-34 w/JICO SAS M55E TT - Dual 1257 as my ROK has no 45 RPM Reel to Reel - Teac A-3340 Alesis - Masterlink Alesis ADATs 20/48's Various forms of mid END DAC/ADC/soundcards/digital mixers
This question came up recently on another forum; below are my answers. Oversampling does NOT automatically improve things: it should reduce in-band quantization noise, but the higher sampling rate often means linearity (distortion) is worse. You can use either technique with any data converter (ADC or DAC), but most people think of delta-sigma DACs (and ADCs) since they use oversampling by design.
As noted, it is a pretty hand-waving, high-level description and (as usual) the devil's in the details.
HTH - Don (yes, I have designed data converters for a living, but at much higher rates than audio)
The Nyquist criterion says you must sample at more than 2x the highest signal frequency (the Nyquist rate) to be able to reconstruct the signal. Oversampling is sampling at more than that, typically by a factor of two or more. For example, if we assume the highest signal frequency is 20 kHz, then the CD sampling rate of 44.1 kS/s meets the Nyquist criterion and allows capture of signals up to (but not including) 22.05 kHz. 88.2 kS/s is oversampled by a factor of two, and so forth.
Oversampling provides margin for the filters needed to band-limit the signal, and it can improve the signal-to-noise ratio (SNR). By doubling (or more) the sampling rate, quantization noise (the noise generated when you convert analog to digital samples) is spread over a larger frequency range. The total noise is determined by the number of conversion bits, so if you keep the number of bits and the signal bandwidth the same, you gain 3 dB in SNR per doubling by filtering out half the noise (that is, the noise above the signal band, say above 20 kHz).
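In equation form (the standard textbook approximation, not anything specific to a particular converter): the ideal quantization SNR for a full-scale sine is about 6.02*N + 1.76 dB, plus 10*log10 of the oversampling ratio once the out-of-band noise is filtered away. A quick sketch:

```python
import math

def ideal_snr_db(bits, oversampling_ratio=1):
    """Ideal quantization SNR (dB) for a full-scale sine wave, including
    the gain from filtering out noise spread above the signal band."""
    return 6.02 * bits + 1.76 + 10 * math.log10(oversampling_ratio)

print(round(ideal_snr_db(16), 1))     # 98.1 dB for plain 16-bit conversion
print(round(ideal_snr_db(16, 2), 1))  # 101.1 dB: ~3 dB better at 2x oversampling
```

Each doubling of the sample rate buys about 3 dB, which is why delta-sigma converters push the oversampling ratio so high.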
Delta-sigma and other data converters take advantage of oversampling by using high oversampling ratios, noise shaping that "pushes" the conversion noise past (higher than) the signal band, and then using high-order filters to reduce the noise to achieve much higher in-band SNR.
Upsampling takes data sampled at one rate and samples it (the same data) again (resamples) at a higher rate. You can theoretically gain SNR as in oversampling, but you must somehow "fill in" or generate new signal samples between the actual samples. If the samples you have are 1 and 3, then if you upsample by two an interpolation algorithm can generate a new intermediate sample of 2. The catch is the algorithm cannot know exactly what the original signal was like before it was sampled, so the prediction (interpolated sample) may be wrong. How to design an optimal interpolation filter is the topic of many classes, texts, and proprietary algorithms.
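A contrived illustration of that catch (the "true" signal and the sample instants here are invented for the example): linear interpolation between samples of 1 and 3 predicts 2, but if the underlying signal had content right at the edge of the original band, the real midpoint can be far away.

```python
import math

def f(t):
    """A made-up 'true' signal: 2 + 2*sin(2*pi*t)."""
    return 2 + 2 * math.sin(2 * math.pi * t)

s0, s1 = f(7 / 12), f(13 / 12)   # the two samples we actually have: 1.0 and 3.0
guess = (s0 + s1) / 2            # linear interpolation predicts 2.0
truth = f(10 / 12)               # the real midpoint: 2 - sqrt(3), about 0.27

print(round(guess, 2), round(truth, 2))  # 2.0 0.27
```

Note that this signal sits at exactly half the implied original sample rate, i.e. right at Nyquist, which is precisely the content that sampling cannot pin down and interpolation cannot recover.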
Interpolation between two known samples when no higher-frequency content is possible (oversampling) is not, in general, the same as predictive interpolation applied when the sampling rate is raised (upsampling). Some use the term "extrapolation" for upsampling to indicate it is potentially adding signal that does not lie between the two original samples. (Two samples is just to make it easier to see; in general a number of samples before and after the current sample are used to determine the new sample value.) When you oversample, the input signal bandwidth does not change. When you upsample, you open the door to adding frequency (and amplitude) content beyond what was in the original signal. That can lead to things like the intersample clipping that has been discussed here (and elsewhere).
Upsampling can be performed without increasing the output bandwidth, of course.
Whenever you play a CD at higher than CD rate and resolution. Play it back at 24/96 and the algorithm may just zero out the lower bits, or it may try to fill them in based on what it thinks the signal would have been, and ditto for frequency content. Since Nyquist is now 48 kHz instead of 22.05 kHz, the algorithm may try to "add back" high-frequency content it predicts was lost in the original recording. You could (as you say) constrain the algorithm, or add a filter to roll off the extra HF content, but that is not the general case IME/IMO/etc. Certainly I have read plenty of marketing talk about the advantages of upsampling your CDs into the latest, greatest hi-rez format.
"After silence, that which best expresses the inexpressible, is music" - Aldous Huxley
Uh, I do not see what a transformer (which has its own pros and cons) has to do with sampling rates... If you have a delta-sigma DAC, like the vast majority of audio DACs these days, it is doing oversampling internally anyway (there are a few esoteric architectures that are delta-sigma and do not oversample, but I have never seen them in an audio/LF DAC).
"After silence, that which best expresses the inexpressible, is music" - Aldous Huxley
Oh, and trash that delta-sigma in favor of a good multibit......
How to get truly great sound quality from a digital source: in order to do really good digital, you need to do really good analog! I don't care how many bits are used or samples per second; most manufacturers are obsessed with digital specs and don't put much effort into all the analog circuitry that is involved in a top-level DAC. Then again, it's the analog that our ears are listening to; everything in the digital domain must be converted to analog for our ears and brains to understand, and this is a BIG part of the DAC. In order to enjoy digital music on a single-ended 300B system, for example, there are a number of factors in the architecture that we consider of utmost importance. Let's look at our own DAC 4.1 to start.
First, in our opinion, this needs to be a non-oversampling resistor-ladder architecture (R-2R) in order to be true to the digital information residing on your disc. Second, the digital-to-analog conversion section needs a superb power supply to provide exact DC voltages. We do this with our on-board DAC power supply and regulation board.
The small analog signal created on our DAC board uses the current output of the DAC chip along with a high-quality Audio Note tantalum resistor to create the output voltage. This signal is then fed into a nickel-core 1:1 transformer (the I/V transformer) that allows the signal to be replicated on the analog line board. The analog board is a tube line stage with transformer coupling. Our M2 power supply (which is both tube rectified and tube regulated) provides the HT voltage for this board.
The design of the output transformers using C-cores is also critical to reproduce all the frequencies required in the analog signal and to drive that signal to the next device in the chain, either an integrated amplifier or a preamplifier. This overall Audio Note design philosophy has made our DACs very popular amongst demanding audiophiles who want to hear the ultimate in digital reproduction with no fatigue! Check out the DAC 4.1 and be prepared to enjoy your CDs and digital music in an entirely new way.
"No excuses, no compromises, no black boxes"
Conrad Johnson ET-5 preamplifier Conrad Johnson Premier 140 amps (two) Custom VTA SP-14 preamplifier PS Audio Directstream DAC Bob Latino/VTA M-125 monoblocks Thorens TD-316 TT Grado Silver Cartridge EmotivaXPS-1 Phono preamp Sennheiser HD-600 Headphones Dynaudio Gemini speakers PBN Montana EPS speakers Magnepan 3.6/R speakers Mac Mini as server