|
Post by yves on Oct 8, 2015 16:28:40 GMT -5
Remember that almost every modern DAC unit relies on heavy upsampling, in order to both factually and measurably improve the *accuracy* of the analog output of the DAC unit. That has nothing to do with taking a 44.1/16 file and converting it to 192/24 and doing *nothing else* and claiming it now sounds better. Remember that files don't make sounds. Speakers do.
|
|
|
Post by geebo on Oct 8, 2015 16:54:56 GMT -5
That has nothing to do with taking a 44.1/16 file and converting it to 192/24 and doing *nothing else* and claiming it now sounds better. Remember that files don't make sounds. Speakers do. Oh, now that just makes perfect sense...
|
|
|
Post by garbulky on Oct 8, 2015 17:06:58 GMT -5
yves that was a great description of the yggy. I am surprised to hear that a cheaper unit held its own next to it. But the Sabre DACs have really impressed me so far. Thank you for sharing your thoughts. I am looking for the end-game DAC, and one thing that I am interested in is the upgradability of the DAC itself. This is a company that has so far delivered on their upgradability promise, from offering analog stage upgrades to actual multibit DAC upgrades. It would be interesting to see how they would eventually upgrade the yggy. I hope it will eventually get HDMI capability, simply because I think SPDIF output is slowly being phased out.
|
|
|
Post by monkumonku on Oct 8, 2015 18:12:35 GMT -5
Admittedly I am not well versed in this technology but in reading the white paper you referenced, there appear to be two components: upsampling from 48 to 96, and introducing what they call an "apodizing filter." It is this latter item that supposedly improves the recording. My question would be if this filter can be applied without upsampling. If so, then it is not really a function of higher resolution, but the filter itself that changes the file. Even if upsampling is required, they are not really getting something from nothing. What they are doing is removing something to make it nothing. I think there is a substantial difference between these two conditions. Perhaps the paper that is still easiest to read and understand for someone with limited knowledge regarding this topic is the one linked below. www.ayre.com/pdf/Ayre_MP_White_Paper.pdf To summarize, to be able to use an anti-alias filter that has a slower roll-off, you need to increase the sampling frequency, because if you don't increase it, audible aliasing artifacts will result from using the slower roll-off filter. The slower roll-off is what makes it technically possible for "ringing" (i.e., both pre-ringing and post-ringing) to be significantly reduced. By replacing Linear Phase filters with Minimum Phase filters, pre-ringing can be reduced to almost zero, albeit at the expense of adding more post-ringing (...and at the expense of introducing another artifact, called phase distortion). By combining Minimum Phase filter behavior *and* slower roll-off filter behavior, well... I think you get the picture. That said, increasing the sampling frequency *deteriorates* the accuracy of the individual sampled values. It means there exists an optimum tradeoff between this type of deterioration and the improvement that results from the slower roll-off. However, due to how a Sigma Delta Modulator works (it being inherently noise-shaped), an additional improvement can be obtained from the use of ultra-high sampling rates. This also helps to explain why, for example, the SABRE ES9018 chip uses (after upconverting everything to 32-bit first) 8× upsampling. While what you wrote may be true, that's still your apples to my oranges. The issue is whether the simple act of upsampling a file improves the sound. Your explanation above indicates it does, but that involves doing things in the process that go beyond a simple upsample. Now maybe a company like HD Tracks does that or maybe they don't, but what Mark Waldrep was talking about were instances in which companies simply upsample something and then slap an HD or hi-res label on the result.
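To make the first point of that summary concrete (a slower roll-off filter only works if the sampling rate goes up), here is a minimal aliasing sketch; the 23 kHz test tone and the two sample rates are illustrative numbers only, not anything taken from the Ayre paper:

```python
def alias_of(f_hz, fs_hz):
    """Frequency (in Hz) that a tone at f_hz lands on after sampling at fs_hz."""
    f = f_hz % fs_hz
    return fs_hz - f if f > fs_hz / 2 else f

# Ultrasonic energy that a slow roll-off anti-alias filter fails to remove:
tone = 23_000
print(alias_of(tone, 44_100))   # 21100 -> folds back below Nyquist, right at the edge of the audible band
print(alias_of(tone, 96_000))   # 23000 -> stays where it is, far above audibility
```

Same gentle filter, two different sample rates: at 44.1 kHz the leaked energy becomes an in-band aliasing artifact, at 96 kHz it stays harmlessly ultrasonic.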
|
|
|
Post by garym on Oct 8, 2015 18:59:26 GMT -5
One example of #4 is the remastered versions of the Grateful Dead studio albums. Even though the original master tapes of many of the albums were not of especially good quality, during the remastering process all sorts of repairs and alterations were performed on some of them. Special processing was applied to actually fix speed variations and dropouts present on some of the original master tapes, and the mixing was re-done. So, in that particular case, the "high-res remasters" are actually quite different, and of arguably much better audio quality, than even the original master tapes (they sound very different and I, for one, find the difference to be an improvement). Oh, I agree that if some technology has been devised since the original tracks were recorded to "fix" errors or defects in those tracks, then a new re-master may sound better. But the improvement will not be due to a higher bit depth or sampling rate. And I can't imagine what fix could bring out audio information not present on the tape (unless some info was discarded on the original master because it could not be separated from noise). It can be better, but not because of the higher sampling rate.
|
|
|
Post by yves on Oct 8, 2015 19:36:33 GMT -5
Remember that files don't make sounds. Speakers do. Oh, now that just makes perfect sense... It does. Because the difference between a 16-bit 44.1kHz file and an upsampled version of that same file is very real, in that it can be measured at the analog outputs of a DAC unit. The human-audible effect, although not necessarily always significant nor necessarily always bigger than zero, depends on a fairly huge variety of factors, some of which are reasonably well understood, some very poorly understood, some not understood at all, and some that remain undiscovered to date. These factors may include (or be related to) the presence, the characteristics, and the magnitude, or absence, of countless different types of errors, such as artifacts caused by the filter used in the ADC stage that produced the original file, the filter used by the upsampler in question, the filter used for 44.1kHz input in the DAC stage, and the filter used for higher-than-44.1kHz input in this same DAC stage. Most modern DACs and ADCs use not just one but multiple complex filters, and which DAC filter is in use during operation also affects performance in areas of the DAC other than filter performance. On top of that, I can take a speaker with certain, even gross, defects, make subtle changes in the digital domain of what's feeding that chain, and we hear them very clearly, because they are on a different dimension than all the other errors the system makes (i.e., they're separate). A difference that, when measured, looks negligibly small can have a profound impact on how we perceive sounds, especially how we perceive (certain specific types of) music, and how we emotionally respond to music.
|
|
|
Post by Chuck Elliot on Oct 8, 2015 19:51:15 GMT -5
A return to the old DDD marker would be nice where it stands for Source-Mix-Media.
Capital "D" could denote a minimum of 96k/24 while "d" could denote something less.
You could even end up with something like DDA for vinyl!
Or dDD for something up-sampled.
|
|
|
Post by monkumonku on Oct 8, 2015 20:01:51 GMT -5
A return to the old DDD marker would be nice where it stands for Source-Mix-Media. Capital "D" could denote a minimum of 96k/24 while "d" could denote something less. You could even end up with something like DDA for vinyl! Or dDD for something up-sampled. How about DUH (Digital Upsampled Highres)
|
|
|
Post by geebo on Oct 8, 2015 20:08:49 GMT -5
Oh, now that just makes perfect sense... It does. Because the difference between a 16-bit 44.1kHz file and an upsampled version of that same file is very real, in that it can be measured at the analog outputs of a DAC unit. The human-audible effect, although not necessarily always significant nor necessarily always bigger than zero, depends on a fairly huge variety of factors, some of which are reasonably well understood, some very poorly understood, some not understood at all, and some that remain undiscovered to date. These factors may include (or be related to) the presence, the characteristics, and the magnitude, or absence, of countless different types of errors, such as artifacts caused by the filter used in the ADC stage that produced the original file, the filter used by the upsampler in question, the filter used for 44.1kHz input in the DAC stage, and the filter used for higher-than-44.1kHz input in this same DAC stage. Most modern DACs and ADCs use not just one but multiple complex filters, and which DAC filter is in use during operation also affects performance in areas of the DAC other than filter performance. On top of that, I can take a speaker with certain, even gross, defects, make subtle changes in the digital domain of what's feeding that chain, and we hear them very clearly, because they are on a different dimension than all the other errors the system makes (i.e., they're separate). A difference that, when measured, looks negligibly small can have a profound impact on how we perceive sounds, especially how we perceive (certain specific types of) music, and how we emotionally respond to music. And all that is somehow an argument that upconverting, and *nothing else*, makes a 44.1/16 file sound better?
|
|
|
Post by yves on Oct 9, 2015 3:46:27 GMT -5
It does. Because the difference between a 16-bit 44.1kHz file and an upsampled version of that same file is very real, in that it can be measured at the analog outputs of a DAC unit. The human-audible effect, although not necessarily always significant nor necessarily always bigger than zero, depends on a fairly huge variety of factors, some of which are reasonably well understood, some very poorly understood, some not understood at all, and some that remain undiscovered to date. These factors may include (or be related to) the presence, the characteristics, and the magnitude, or absence, of countless different types of errors, such as artifacts caused by the filter used in the ADC stage that produced the original file, the filter used by the upsampler in question, the filter used for 44.1kHz input in the DAC stage, and the filter used for higher-than-44.1kHz input in this same DAC stage. Most modern DACs and ADCs use not just one but multiple complex filters, and which DAC filter is in use during operation also affects performance in areas of the DAC other than filter performance. On top of that, I can take a speaker with certain, even gross, defects, make subtle changes in the digital domain of what's feeding that chain, and we hear them very clearly, because they are on a different dimension than all the other errors the system makes (i.e., they're separate). A difference that, when measured, looks negligibly small can have a profound impact on how we perceive sounds, especially how we perceive (certain specific types of) music, and how we emotionally respond to music. And all that is somehow an argument that upconverting, and *nothing else*, makes a 44.1/16 file sound better? Like I said, most (but certainly not all) modern DAC units already upconvert internally anyway. But I agree that slapping a Hi-Res logo onto an uprezzed CD is about as lame as grabbing a CD master and slapping that onto a vinyl record, or claiming that audiophiles' ears actually *like* the fact that the 24/96 digital download of Keith Richards' latest album scores a lousy DR6 on the TT Dynamic Range Meter. To quote Peter Goossens (see the video I linked earlier in the thread),
|
|
|
Post by KeithL on Oct 9, 2015 10:36:27 GMT -5
Admittedly I am not well versed in this technology but in reading the white paper you referenced, there appear to be two components: upsampling from 48 to 96, and introducing what they call an "apodizing filter." It is this latter item that supposedly improves the recording. My question would be if this filter can be applied without upsampling. If so, then it is not really a function of higher resolution, but the filter itself that changes the file. Even if upsampling is required, they are not really getting something from nothing. What they are doing is removing something to make it nothing. I think there is a substantial difference between these two conditions. You're quite right - in fact what they're doing is removing something based on the assumption that it probably wasn't there to begin with, and so probably was added as a processing artifact along the way... The relationship between the processing they're doing and upsampling the audio to 96k is somewhat subtle. What their "apodizing filter" does is to modify the digital audio signal in such a way that pre-ringing (which was presumably introduced during the A/D process) is mathematically reduced and/or "converted" into post-ringing. The value of doing this is based on two assumptions. First, that any pre-ringing present is the result of a processing anomaly and doesn't really belong there. And, second, that pre-ringing is audibly annoying, while post-ringing is less so, and so the conversion yields an audible perceived improvement. Both of these assumptions are widely accepted as being true - at least most of the time. Their claim is that, due to the type of mathematical process involved, and the characteristics of the resulting signal, the alteration must accompany upsampling to 96k. In other words, according to them, you can only perform the modification as part of an upsampling process, and the benefits would be negated if you were to then downsample the result back to 44k or 48k.
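As a rough illustration of the general idea (this is not Ayre's actual algorithm, just a generic sketch; the filter length and cutoff below are arbitrary illustrative choices), the snippet upsamples 48 kHz material by 2× using a minimum-phase interpolation filter. The point is that the interpolation filter applied during the upsample is exactly where pre-ringing can be traded for post-ringing, which is why the modification and the upsampling travel together:

```python
import numpy as np
from scipy.signal import firwin, minimum_phase, upfirdn

fs_in, up = 48_000, 2                  # 48 kHz material upsampled to 96 kHz

# Linear-phase prototype interpolation low-pass: symmetric, so it pre-rings.
h_lin = firwin(511, cutoff=22_000, fs=up * fs_in)

# Minimum-phase counterpart: the ringing is pushed after the main tap.
# (scipy's homomorphic method approximates the square root of the prototype's
# magnitude response, so a real design would spec the prototype accordingly.)
h_min = minimum_phase(h_lin, method='homomorphic')

def upsample_2x(x, h):
    """Zero-stuff by 2, low-pass with h, rescale to preserve amplitude."""
    return up * upfirdn(h, x, up=up)

click = np.zeros(1024)
click[512] = 1.0                       # a sharp transient
y_linear = upsample_2x(click, h_lin)   # rings both before and after the click
y_minimum = upsample_2x(click, h_min)  # rings essentially only after the click
```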
|
|
|
Post by KeithL on Oct 9, 2015 11:14:58 GMT -5
That is indeed true.... however, the important thing to remember is that upsampling in and of itself does NOT improve anything. What's happening is that, when you convert a digital audio signal back into analog, you MUST apply a filter to the resulting audio to remove all energy above the Nyquist frequency. With a DAC converting a 44k CD audio signal, with no oversampling, in order to avoid compromising the quality of the analog audio output, this would require a filter that was flat to 20 kHz, but had attenuation of around 80 dB at 24 kHz. As it turns out, a filter that meets those requirements is impractical to build and produce - and trying to do so always involves unacceptable compromises. What upsampling does is to use some mathematical trickery to increase the sample rate, and so the Nyquist frequency. What you very much need to understand is that oversampling does NOT improve audio quality; what it does is to alter the signal in a way that is "quality neutral", but which then makes it simpler to design a filter which DOESN'T degrade the signal quality. Oversampling for this purpose can be done explicitly, or it can occur as part of the conversion process itself. Virtually all modern DACs use some form of oversampling (including Sabre DACs, delta-sigma DACs, and the Schiit Yggdrasil); the exceptions being DACs specifically billed as "non-oversampling DACs". In this context, since the oversampling process is occurring inside the DAC, you can't differentiate what audible differences, if any, it introduces; it's simply part of what defines the sound character of the particular DAC. By upsampling the digital audio (in addition to what occurs inside the DAC) you are introducing another step where the sound quality may be altered - and so another opportunity to choose an option whose sound you prefer - but you need to remember that any change you hear can only be due to a loss of accuracy. (If you're a purist, then you realize that, if the glass in a window is clear, then you can't even see it; if you choose between different panes of glass because they look different, then you must accept that those choices aren't actually clear glass. Of course, if you're not a total purist, then it's one more place where you can introduce options and choices.) And, of course, since we're talking about upsampling a "standard res" file, none of this has anything whatsoever to do with the idea that a high-res file could contain extra detail that "won't fit" in a non-high-res file - assuming that there's information there to begin with and that we will be able to hear it. (In other words you need to differentiate between enabling accuracy - which is possible, and creating accuracy - which is not.) Ah, but if you took a picture of an Adams print with a modern high-resolution, high-dynamic-range digital camera, would that in itself make the picture better? Of course it wouldn't. What Ansel did was more like post processing. Remember that almost every modern DAC unit relies on heavy upsampling, in order to both factually and measurably improve the *accuracy* of the analog output of the DAC unit.
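Some back-of-the-envelope arithmetic makes the point. Using Keith's numbers (flat to 20 kHz, roughly 80 dB down by the time the first images appear), the average roll-off slope the analog reconstruction filter would need works out like this; the 352.8 kHz oversampled rate and the image frequency are illustrative assumptions based on 8× oversampling of 44.1 kHz:

```python
import math

def slope_db_per_octave(pass_edge_hz, stop_edge_hz, atten_db=80.0):
    """Average roll-off slope needed to drop atten_db between the two edges."""
    octaves = math.log2(stop_edge_hz / pass_edge_hz)
    return atten_db / octaves

# No oversampling: flat at 20 kHz, ~80 dB down by ~24 kHz, where the first
# images of a 44.1 kHz stream appear.
print(slope_db_per_octave(20_000, 24_000))     # ~304 dB/octave -- impractical to build

# After 8x oversampling to 352.8 kHz, the digital filter has already removed
# everything up to ~176 kHz; the first images the *analog* filter must handle
# sit near 352.8 kHz - 20 kHz ~= 333 kHz, about four octaves away.
print(slope_db_per_octave(20_000, 332_800))    # ~20 dB/octave -- a gentle 3rd/4th-order filter will do
```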
|
|
|
Post by KeithL on Oct 9, 2015 11:53:49 GMT -5
Actually, I don't think I agree with you there.... I think a system like that would be oversimplifying the situation - and playing into the "problems" that a lot of people already have with "high-res" audio - namely that you don't get enough information to make an informed decision. The problem is that people have come to believe (with considerable prompting) that anything that says "high-res" must be good quality. Now, while actually specifying the sample rate something was mastered at is some more information, I still don't think it's enough. I have a pocket digital recorder, with two built-in microphones, which cost me $199, and which can record at 24/96k. Therefore, by a simple classification system like you propose, I could reasonably label a file that I recorded in my back yard using my $199 recorder, edited in Audacity, and offered for free download as "DDD" - which would give my homemade recording the same "quality rating" as Deutsche Grammophon's latest release. I think this might lead to more confusion rather than less. Personally, I think it's "safer" to force people to read a paragraph about how a given selection was recorded and mastered, than it is to have them assume that a set of a few letters tells the entire story. (I think providing oversimplified information encourages people to believe that the actual reality is simple - and so to avoid learning about the complexities involved.) I'd rather see a "standard" where every online seller of files or CDs offered a link to a full page that explained all of the details about how a particular selection was recorded, mastered, mixed, and delivered. Anyone would be free to read as much, or as little, as they liked - and to purchase, or not purchase, music that didn't include a complete provenance. The fact that most people buy their stuff online simply means that we no longer need be concerned with whether that information would fit on the record or CD sleeve - we have as much space as we need to provide complete information. (However, I might agree that, without claiming that it offers "the whole picture", such basic information should perhaps be legally required under "truth in advertising" laws.) A return to the old DDD marker would be nice where it stands for Source-Mix-Media. Capital "D" could denote a minimum of 96k/24 while "d" could denote something less. You could even end up with something like DDA for vinyl! Or dDD for something up-sampled.
|
|
|
Post by monkumonku on Oct 9, 2015 12:41:38 GMT -5
yves that was a great description of the yggy. I am surprised to hear that a cheaper unit held its own next to it. But the Sabre DACs have really impressed me so far. Thank you for sharing your thoughts. I am looking for the end-game DAC, and one thing that I am interested in is the upgradability of the DAC itself. This is a company that has so far delivered on their upgradability promise, from offering analog stage upgrades to actual multibit DAC upgrades. It would be interesting to see how they would eventually upgrade the yggy. I hope it will eventually get HDMI capability, simply because I think SPDIF output is slowly being phased out. I really think you ought to take the plunge and order an iggy and check it out yourself. Oh... yggy... I thought you were talking about an iggy.
|
|
|
Post by Chuck Elliot on Oct 9, 2015 19:31:44 GMT -5
Simplification is exactly what is needed here, and it does give you a great deal of information. You can always have a crappy recording, as your example illustrates, but the validity still holds. In a day and age when the average attention span is that of a gnat, do you really think that Joe Average would even read it? I know we all would, but we're not average. Plus, how do you standardize a paragraph as part of a logo? I actually wanted to include the average DR of an album too, such as DDD 16, but thought it too much. Actually, I don't think I agree with you there.... I think a system like that would be oversimplifying the situation - and playing into the "problems" that a lot of people already have with "high-res" audio - namely that you don't get enough information to make an informed decision. The problem is that people have come to believe (with considerable prompting) that anything that says "high-res" must be good quality. Now, while actually specifying the sample rate something was mastered at is some more information, I still don't think it's enough. I have a pocket digital recorder, with two built-in microphones, which cost me $199, and which can record at 24/96k. Therefore, by a simple classification system like you propose, I could reasonably label a file that I recorded in my back yard using my $199 recorder, edited in Audacity, and offered for free download as "DDD" - which would give my homemade recording the same "quality rating" as Deutsche Grammophon's latest release. I think this might lead to more confusion rather than less. Personally, I think it's "safer" to force people to read a paragraph about how a given selection was recorded and mastered, than it is to have them assume that a set of a few letters tells the entire story. (I think providing oversimplified information encourages people to believe that the actual reality is simple - and so to avoid learning about the complexities involved.) I'd rather see a "standard" where every online seller of files or CDs offered a link to a full page that explained all of the details about how a particular selection was recorded, mastered, mixed, and delivered. Anyone would be free to read as much, or as little, as they liked - and to purchase, or not purchase, music that didn't include a complete provenance. The fact that most people buy their stuff online simply means that we no longer need be concerned with whether that information would fit on the record or CD sleeve - we have as much space as we need to provide complete information. (However, I might agree that, without claiming that it offers "the whole picture", such basic information should perhaps be legally required under "truth in advertising" laws.) A return to the old DDD marker would be nice where it stands for Source-Mix-Media. Capital "D" could denote a minimum of 96k/24 while "d" could denote something less. You could even end up with something like DDA for vinyl! Or dDD for something up-sampled.
|
|
|
Post by yves on Oct 9, 2015 21:55:05 GMT -5
That is indeed true.... however, the important thing to remember is that upsampling in and of itself does NOT improve anything. What's happening is that, when you convert a digital audio signal back into analog, you MUST apply a filter to the resulting audio to remove all energy above the Nyquist frequency. With a DAC converting a 44k CD audio signal, with no oversampling, in order to avoid compromising the quality of the analog audio output, this would require a filter that was flat to 20 kHz, but had attenuation of around 80 dB at 24 kHz. As it turns out, a filter that meets those requirements is impractical to build and produce - and trying to do so always involves unacceptable compromises. What upsampling does is to use some mathematical trickery to increase the sample rate, and so the Nyquist frequency. What you very much need to understand is that oversampling does NOT improve audio quality; what it does is to alter the signal in a way that is "quality neutral", but which then makes it simpler to design a filter which DOESN'T degrade the signal quality. Oversampling for this purpose can be done explicitly, or it can occur as part of the conversion process itself. Virtually all modern DACs use some form of oversampling (including Sabre DACs, delta-sigma DACs, and the Schiit Yggdrasil); the exceptions being DACs specifically billed as "non-oversampling DACs". In this context, since the oversampling process is occurring inside the DAC, you can't differentiate what audible differences, if any, it introduces; it's simply part of what defines the sound character of the particular DAC. By upsampling the digital audio (in addition to what occurs inside the DAC) you are introducing another step where the sound quality may be altered - and so another opportunity to choose an option whose sound you prefer - but you need to remember that any change you hear can only be due to a loss of accuracy. (If you're a purist, then you realize that, if the glass in a window is clear, then you can't even see it; if you choose between different panes of glass because they look different, then you must accept that those choices aren't actually clear glass. Of course, if you're not a total purist, then it's one more place where you can introduce options and choices.) And, of course, since we're talking about upsampling a "standard res" file, none of this has anything whatsoever to do with the idea that a high-res file could contain extra detail that "won't fit" in a non-high-res file - assuming that there's information there to begin with and that we will be able to hear it. (In other words you need to differentiate between enabling accuracy - which is possible, and creating accuracy - which is not.) Remember that almost every modern DAC unit relies on heavy upsampling, in order to both factually and measurably improve the *accuracy* of the analog output of the DAC unit. The only caveat is that the human hearing system is nonlinear in a lot of ways. The part that's audible to us needs to be accurate, whereas the other part does not. It means that, first, we need to study human hearing. Only then can we start having a meaningful conversation about accuracy. Because if the measured magnitude of a certain type of error in a signal is large, yet this error is completely inaudible to humans, then the only logical conclusion is that, under this specific set of circumstances, it simply doesn't matter that the signal is highly inaccurate.
Whereas, if another type of error in this same signal measures small but we can hear it anyway, then it *does* matter despite being small. So yeah, obviously you can *talk* about things like clear glass. But the actual reality is that humans don't *hear* sounds that way. It just isn't an accurate way of describing how the human hearing system works. I know, this is all very counterintuitive. However, once you have been reading a few chapters into the book titled "Psychoacoustics: Facts and Models" by Hugo Fastl and Eberhard Zwicker, that's when you will start to discover that our ears really don't treat accuracy the same way engineers are trained to. You see, pre-ringing has a very strong tendency to be much more *audible* than post-ringing. This is not the same thing as personal *preference* about sound, or about being a "purist". It's called masking, and electronic measurements lose their meaning, or value, if you ignore the auditory neuroscience that is used to describe masking. So I have to firmly disagree with you regarding signal *quality* here. The accuracy, or quality, of the information that is reproduced, or reconstructed, from digital data depends not only on the interpolation method used, the actual data itself, and that which the information describes, but also on the goal we need to achieve by using that information. The goal, if we can help it, is not to let the time-smearing effect of pre-ringing artifacts muck up a percussion instrument's sharp transients. That, at least, is the working hypothesis. The good news is that some flaws in the ADC can be corrected by changing the data that came out of it. Pontification on whether this correction should happen before, after, or during transfer of the data to the DAC unit is quite sadly missing the point entirely.
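A crude way to put a number on the pre-ringing point (a sketch only; the filter length, cutoff, and the -60 dB threshold are illustrative choices, not any particular DAC's specs): measure how many milliseconds before an isolated click the filtered output first rises above -60 dB relative to its peak. Since backward masking is much weaker than forward masking, energy arriving ahead of the transient is the part our ears are least likely to forgive:

```python
import numpy as np
from scipy.signal import firwin, minimum_phase, lfilter

fs = 44_100
h_lin = firwin(401, cutoff=20_000, fs=fs)            # linear-phase reconstruction filter
h_min = minimum_phase(h_lin, method='homomorphic')   # minimum-phase counterpart

def pre_ringing_ms(h, thresh_db=-60.0):
    """Milliseconds between the first output sample above thresh_db (relative
    to the peak) and the peak itself, for an isolated click -- a crude measure
    of how far the time smearing reaches *ahead* of a transient."""
    x = np.zeros(4096)
    x[2048] = 1.0                                    # the click
    y = np.abs(lfilter(h, 1.0, x))
    peak = int(np.argmax(y))
    first = int(np.argmax(y / y[peak] > 10 ** (thresh_db / 20)))
    return 1000.0 * (peak - first) / fs

print("linear phase :", pre_ringing_ms(h_lin), "ms before the transient")
print("minimum phase:", pre_ringing_ms(h_min), "ms")
```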
|
|
|
Post by audiobill on Oct 10, 2015 8:50:49 GMT -5
I think it's fun to observe how many worry about .0000001% jitter or distortion specs while listening to distortion laden, compressed and overdriven recordings of rawk bands from decades ago.
Just check the "What are you listening to now" thread here.
|
|
|
Post by garym on Oct 10, 2015 10:21:04 GMT -5
I think it's fun to observe how many worry about .0000001% jitter or distortion specs while listening to distortion laden, compressed and overdriven recordings of rawk bands from decades ago. Just check the "What are you listening to now" thread here. A great point. The distortion introduced by a decent audio system pales beside that introduced (some of it deliberately, such as fuzz-tone guitars) by electronically amplified instruments.
|
|
|
Post by garym on Oct 10, 2015 10:24:39 GMT -5
That is indeed true.... however, the important thing to remember is that upsampling in and of itself does NOT improve anything. What's happening is that, when you convert a digital audio signal back into analog, you MUST apply a filter to the resulting audio to remove all energy above the Nyquist frequency. With a DAC converting a 44k CD audio signal, with no oversampling, in order to avoid compromising the quality of the analog audio output, this would require a filter that was flat to 20 kHz, but had attenuation of around 80 dB at 24 kHz. As it turns out, a filter that meets those requirements is impractical to build and produce - and trying to do so always involves unacceptable compromises. What upsampling does is to use some mathematical trickery to increase the sample rate, and so the Nyquist frequency. What you very much need to understand is that oversampling does NOT improve audio quality; what it does is to alter the signal in a way that is "quality neutral", but which then makes it simpler to design a filter which DOESN'T degrade the signal quality. Oversampling for this purpose can be done explicitly, or it can occur as part of the conversion process itself. Virtually all modern DACs use some form of oversampling (including Sabre DACs, delta-sigma DACs, and the Schiit Yggdrasil); the exception being DACs specifically billed as "non-oversampling DACs". Informative post, Keith.
|
|
|
Post by yves on Oct 11, 2015 18:34:12 GMT -5
I think it's fun to observe how many worry about .0000001% jitter or distortion specs while listening to distortion laden, compressed and overdriven recordings of rawk bands from decades ago. Just check the "What are you listening to now" thread here. That's why I mostly listen to vinyl instead of measurements. Having to suffer a lot less from earbleed is an excellent reason to ignore the measurement preaching and listen to those rawk bands from decades ago on vinyl instead.
|
|