|
Post by Casey Leedom on Nov 30, 2016 12:34:52 GMT -5
Thanks Keith. Sounds like exciting times in your shop. Merry early Christmas!
Casey
|
|
KeithL
Administrator
Posts: 10,247
|
Post by KeithL on Nov 30, 2016 12:36:55 GMT -5
The short answer is that PROGRESS NEVER STOPS (or, as someone once said: "the only thing that's constant is change"). If you really need the new features right now, then you should upgrade or buy them right now. If not, then wait until there are ENOUGH new features to make the jump compelling. However, if you keep waiting for things to stop moving, then you'll be waiting forever, and you'll never get to ENJOY anything. I can tell you with a lot of confidence that, every few years, we're going to come out with a new processor. And every one will have a few cool new features that last year's model didn't have (and that nobody even thought of last year). You could have held off on buying that new car until "lane change warning" became standard... but you might have been doing a lot of walking... And do you really doubt that next year's model will have some new and even cooler feature you just can't live without? Really? Personally, I suggest buying or upgrading to whatever features you need to enjoy your stuff NOW!

Alright, you've forced my hand.... there is already an XPA-8 in the works! There, I've said it and I'm not sorry I did. Finally I can sleep knowing that the truth is out there!! We're jumping 6 generations in one fell swoop! Take that 🍎!! 👽

ok dan…… here is my dilemma. i will never do atmos and 7.2 is a real stretch for my wife in our living room. the back speakers have to be practically hidden. so…. should i stay with my xmc-1 and just update it, or is there any benefit to going with a future xmc-2, xmc-3, xmc-4, xmc-5, xmc-6, xmc-7 or xmc-8, like improved sound, operation, noise floor, etc.……. as the song says……. "should i stay or should i go" tchaik………………….
|
|
|
Post by bradford on Nov 30, 2016 13:52:16 GMT -5
The distinction is sort of fuzzy....... A "DSP" generally refers to a processor specifically designed for audio (actually, a "digital signal processor" could also be designed for other digital signals, like video, or even sensor data from your LASER tape measure), as compared to "a general-purpose processor" (which can be programmed to do all sorts of things), or some other sort of dedicated circuit like an FPGA (which isn't a processor at all). However, these days, there are lots of different kinds of DSPs - from a DSP specifically designed to do Dolby decoding, to something like a SHARC, which can be programmed to do whatever you want it to.

Some people also take the distinction to be that, because a DSP is a processor that you program, a DSP can be updated by updating its firmware. (This is in fact true for most but not all DSPs; some DSPs can be totally reprogrammed, while many others offer a fixed set of building blocks, and only allow you to program how the various modules inside interact with each other.) The inference is that things that run on a DSP are more likely to be updated (by a firmware update), while "hardware" tends to be more "locked in". And, going even further, something that is "processor based" can be changed even more completely by changing the firmware (think of a streamer client running on a Raspberry Pi; you can even change the Linux variant it's running if you want). In the end, all that really matters is how well it all works to get the job done.....

For example, our XMC-1 has several DSPs in it, including the one that does the Dolby and DTS decoding, but the main system is Linux, running on what seems more like a "general purpose processor" than a dedicated audio processor. (I forget what the manufacturer calls it in their literature.... but it's basically a computer running Linux.) And our power amps, where all the audio circuitry is entirely analog, have a little processor that runs the front panel display and buttons.

Perhaps I should have been more specific: I am not aware of any processors that use dedicated DSPs to decode Atmos being able to render more than 11 channels simultaneously. Is the RMC-1 using a new Atmos DSP chipset? Also, will it be able to render 7.3.6 as well as 9.3.4?
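(As a rough illustration of the kind of fixed "building block" a configurable DSP exposes, here's a minimal sketch - plain Python, purely for illustration, not code from any actual DSP tool chain - of a biquad filter, the primitive most audio DSPs chain together for EQ, crossovers, and tone controls.)

```python
import math

def lowpass_biquad_coeffs(fs, f0, q):
    """Standard RBJ "cookbook" low-pass coefficients, normalized so a0 = 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    a0 = 1 + alpha
    b = [(1 - cos_w0) / 2 / a0, (1 - cos_w0) / a0, (1 - cos_w0) / 2 / a0]
    a = [1.0, -2 * cos_w0 / a0, (1 - alpha) / a0]
    return b, a

def biquad(samples, b, a):
    """Direct Form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# A DSP "program" is often little more than this: pick blocks, set
# coefficients, and wire the blocks together.
b, a = lowpass_biquad_coeffs(fs=48000, f0=2000, q=0.707)
signal = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(256)]
filtered = biquad(signal, b, a)
```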
|
|
KeithL
Administrator
Posts: 10,247
|
Post by KeithL on Nov 30, 2016 15:07:10 GMT -5
Actually, according to their technical descriptions, MQA is in fact lossy. What they're doing is "taking part of the information, compressing it losslessly, then storing it in an area of the spectrum/amplitude space that previously contained only noise and no useful information". (Their analogy to origami is flawed; when you fold a piece of paper, the paper underneath is still there; when you write one piece of data over another, the original that previously occupied the space is DESTROYED.) As per their description, they ARE overwriting part of the original signal, which is then "discarded", and this is the definition of a lossy CODEC. The fact that they claim that what they're overwriting is "an area that contains only noise and no useful information" makes it a perceptually encoded lossy CODEC. They're claiming that what they're discarding really is inaudible - and so you won't miss it; however, technically, it is present in the original and not in the encoded copy. NOTE that they do make an excellent case for claiming that they only discard inaudible noise, and that they gain significant benefits by using the space for something else. However, if you decode the signal and compare it to the original, what you get back will NOT be a bit-perfect copy of the original. (So it doesn't fit the definition of "lossless".)

Okay, I definitely don't know anything about MQA, or MP3, or any other digital audio encoding format for that matter. I'm a Computer Guy and I understand Algorithms in general, and when faced with a problem, I do the research on the subject area to see what I can offer. And in this case, I'm not even doing that. Instead, I'd like to take a step back from the edge a bit and ask: why are we even concerned with lossy compression encodings any more? Storage and bandwidth are more than adequate to cope with even high-resolution, non-lossy formats. So why invest the effort to try to develop a new one? What resource are we saving that's so precious as to justify the effort? Again, just curious, not trying to offer judgement. Casey

AFAIK MQA is NOT lossy...it encodes some frequencies at lower, um...well, "down in the noise", so to speak...but they're extracted and fully re-constructed at their original volume/magnitude/whatever on playback. Kind of ingenious in that regard, but not convinced it inherently sounds "better" than anything else yet...
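(To make the point concrete, here's a toy sketch in Python - emphatically NOT MQA's actual algorithm, just the general idea being described: if you bury a payload in the low-order bits of each sample, the payload comes back perfectly, but the original low bits are destroyed, so a decode-and-compare can never be bit-perfect.)

```python
import random

BURIED_BITS = 3                      # pretend the bottom 3 bits hold "only noise"
MASK = (1 << BURIED_BITS) - 1

original = [random.getrandbits(24) for _ in range(8)]           # fake 24-bit samples
payload = [random.getrandbits(BURIED_BITS) for _ in range(8)]   # data to bury

# "Encode": clear each sample's low bits and write the payload into them.
encoded = [(s & ~MASK) | p for s, p in zip(original, payload)]

# The buried payload is recovered perfectly...
assert [s & MASK for s in encoded] == payload

# ...but the original low bits were overwritten, so the stream is no longer
# a bit-perfect copy of the source - which is exactly what makes it lossy.
print(encoded == original)   # almost certainly False
```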
|
|
|
Post by Jim on Nov 30, 2016 18:51:49 GMT -5
By the way, there is an even bigger reason for not waiting for the XMC-2, but you'll all have to wait for the official announcement next week. Seriously! Let me just note that all good things come to an end... more to come on this. Now, don't freak out everyone, just get ready to get off the fence if you're sitting on it. Metaphorically speaking... Peace out, Big Dan

Any hints as to when this week the big announcement is coming?

Dan Laufman - nothing was mentioned in the podcast (even in passing)... is there still some big announcement that's coming this week? Glad to hear about cool new products in the pipeline! Thanks!
|
|
|
Post by Jim on Dec 1, 2016 16:05:49 GMT -5
|
|
|
Post by rbk123 on Dec 1, 2016 16:32:06 GMT -5
|
|
|
Post by rhale64 on Dec 1, 2016 16:58:37 GMT -5
They did mention the two object-based processors and said they weren't far enough along to be included with the other products.
This discouraged me a bit. They talked about stuff coming out all the way into the middle of the first quarter, but these were not included, because they are not far enough along.
I want one of these two products. It may be the first one that comes out. I am still hoping that is the RMC, but if not, I will assess the rumours at that time and make my decision based on those.
I really wish Dan would come in and say something.
|
|
|
Post by Casey Leedom on Dec 1, 2016 17:34:52 GMT -5
It's very difficult for any manufacturer to pass on early new product information:
1. Plans change and this can lead to customer surprise/disappointment/etc.
2. Long lead-time product announcements can cannibalize current product sales.
3. Attempts to address comments by at-large audiences can lead to "design by committee" results.
Unfortunately, a completely "closed" product development cycle often leads to the opposite problem and customers who feel their interests/needs aren't being heard.
Companies will often conduct Customer Surveys and recruit a very small number of early reviewers/testers to try to find a happy medium. But no matter how you slice it, it's a hard job "threading the needle".
Casey
|
|
|
Post by goozoo on Dec 1, 2016 18:26:34 GMT -5
With the advent of the MC700 and the multiple iterations of the XMC-1 on the horizon, it is a shame that they will not offer an HT-only processor with the same capabilities as the RMC. It would be cheaper and would probably outsell the rest of the processor line. Really, all that would be needed is the RMC, an RMC (HT processor only), the XMC-1, and the MC700. You hit all the market demographics and keep your parts/production costs down. Just a thought, Dan, Lonnie, et al.
|
|
|
Post by Jim on Dec 1, 2016 19:28:26 GMT -5
With the advent of the MC700 and the multiple iterations of the XMC-1 on the horizon, it is a shame that they will not offer an HT-only processor with the same capabilities as the RMC. It would be cheaper and would probably outsell the rest of the processor line. Really, all that would be needed is the RMC, an RMC (HT processor only), the XMC-1, and the MC700. You hit all the market demographics and keep your parts/production costs down. Just a thought, Dan, Lonnie, et al.

Is that all speculation regarding them not offering an RMC-1-like processor? I've never seen that claimed - and I'd be surprised if it doesn't come out eventually, given what Dan has said... and the backplate. It's a waiting game - and I understand it. I have no gripes. I'm just wondering if some announcement is coming like Dan suggested. I'm not asking for more technical information - because unfortunately the pitchforks come out too often. But if it's volunteered, I'm happy. I just wonder what "all good things must come to an end" means.
|
|
|
Post by yves on Dec 2, 2016 5:37:51 GMT -5
Actually, according to their technical descriptions, MQA is in fact lossy. What they're doing is "taking part of the information, compressing it losslessly, then storing it in an area of the spectrum/amplitude space that previously contained only noise and no useful information". (Their analogy to origami is flawed; when you fold a piece of paper, the paper underneath is still there; when you write one piece of data over another, the original that previously occupied the space is DESTROYED.) As per their description, they ARE overwriting part of the original signal, which is then "discarded", and this is the definition of a lossy CODEC. The fact that they claim that what they're overwriting is "an area that contains only noise and no useful information" makes it a perceptually encoded lossy CODEC. They're claiming that what they're discarding really is inaudible - and so you won't miss it; however, technically, it is present in the original and not in the encoded copy. NOTE that they do make an excellent case for claiming that they only discard inaudible noise, and that they gain significant benefits by using the space for something else. However, if you decode the signal and compare it to the original, what you get back will NOT be a bit-perfect copy of the original. (So it doesn't fit the definition of "lossless".)

AFAIK MQA is NOT lossy...it encodes some frequencies at lower, um...well, "down in the noise", so to speak...but they're extracted and fully re-constructed at their original volume/magnitude/whatever on playback. Kind of ingenious in that regard, but not convinced it inherently sounds "better" than anything else yet...

A minimum distance of at least 3 bits is kept between the noise floor of the recording itself and the encapsulated data. The claim that nobody can hear details that far below the noise floor, given normal amplification with ~115 dB SPL peaks, is fairly rock solid. Technically, Johnson noise starts to kick in at about -120 dBFS, while a 24-bit channel gives you 144 dB of range. And, even more technically, what that number of bits actually means depends on whether you correctly apply the established science of human auditory perception, rather than simply IGNORING those principles at every opportunity.
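(For what it's worth, the figures quoted above follow from the usual rule of thumb of about 6.02 dB of dynamic range per bit; a quick back-of-the-envelope check in Python:)

```python
# Rule-of-thumb check: each bit of a PCM word adds 20*log10(2) ≈ 6.02 dB.
import math

db_per_bit = 20 * math.log10(2)
print(f"24-bit channel: {24 * db_per_bit:.1f} dB")       # ≈ 144.5 dB, as quoted above
print(f"3-bit guard band: {3 * db_per_bit:.1f} dB")      # ≈ 18 dB between noise floor and buried data
print(f"16-bit CD, for comparison: {16 * db_per_bit:.1f} dB")  # ≈ 96.3 dB
```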
|
|
lgjr
Minor Hero
Posts: 57
|
Post by lgjr on Dec 2, 2016 7:58:31 GMT -5
I'm hoping the next processors allow you to have at least two Dirac profiles saved and assignable at the same time. That way you can optimize one for music and the other for movies or whatever you choose.
XMC1 ATI SIGNATURE 6003 Sunfire Cinema Grand Sig surround/rears Panasonic 65ZT65 Oppo bdp83 Legacy Focus SE Legacy Marquis Legacy Phantoms SURROUNDS XTZ SOUND SUB3 X 2
|
|
|
Post by vneal on Dec 2, 2016 8:56:18 GMT -5
The product that isn't but goes on and on and on.................
|
|
edrummereasye
Sensei
"This aggression will not stand, man!"
Posts: 438
|
Post by edrummereasye on Dec 7, 2016 12:43:45 GMT -5
Actually, according to their technical descriptions, MQA is in fact lossy. What they're doing is "taking part of the information, compressing it losslessly, then storing it in an area of the spectrum/amplitude space that previously contained only noise and no useful information". (Their analogy to origami is flawed; when you fold a piece of paper, the paper underneath is still there; when you write one piece of data over another, the original that previously occupied the space is DESTROYED.) As per their description, they ARE overwriting part of the original signal, which is then "discarded", and this is the definition of a lossy CODEC. The fact that they claim that what they're overwriting is "an area that contains only noise and no useful information" makes it a perceptually encoded lossy CODEC. They're claiming that what they're discarding really is inaudible - and so you won't miss it; however, technically, it is present in the original and not in the encoded copy. NOTE that they do make an excellent case for claiming that they only discard inaudible noise, and that they gain significant benefits by using the space for something else. However, if you decode the signal and compare it to the original, what you get back will NOT be a bit-perfect copy of the original. (So it doesn't fit the definition of "lossless".)

Thanks Keith, I stand corrected...and I should have recalled/realized that based on what I recall of how it works - you can't re-produce part of the signal "somewhere else" (lower amplitude/"down in the noise floor"/whatever) without over-writing what's there already. I suppose you could theoretically also include a losslessly-compressed version of the original information and use it to re-construct the original, but that would only make sense _if_ the information you were replacing compressed much better than the information you were putting there would...and it wouldn't be the most elegant solution, even then. So yes...lossy, with "perceptual encoding"...and maybe only "inaudible" stuff thrown away (though, interestingly, they claim an audible difference on playback). But I'm not sure that airplane is going to fly...which may just be me - being an HTPC guy, I still recall a time when we had to go to great lengths to achieve "bit-perfect playback", and it's hard for me to imagine being satisfied with something that doesn't pass that test (plus the DTS-HD discs I use for testing bit-perfectness are actually of stuff I like).

So, to address the question that my previous post was attempting to answer...and rather than rely on my obviously imperfect memory this time, I went to the website for info (and I have to say, you have to dig deep to find any hint that it's NOT lossless; I'm not sure whether the white paper I read before is linked there or not). But anyway, the central claims to greatness (audio-quality-wise) seem to be: (1) starting with the master recording (and being able to authenticate the result); (2) eliminating traditional A-D/D-A conversion methods (thus eliminating pre- and post-ringing, thus preserving timing information...they claim down to 8 µs, or 15x better than 192/24). Supposedly the brain relies on this timing information to form a "3-D perception" of sound. They also claim that even on equipment without MQA decoding, it will sound "slightly better than CD"...and that their goal was "to do no more damage to sound than travelling a short distance through air" (~15 feet or so, IIRC).
The other claim is that it does all this with a minimal footprint (file size), compared to traditional "high-res" files.
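(A hedged bit of arithmetic on that footprint claim, assuming - as public descriptions suggest, though it's stated here as an assumption - that the MQA stream travels in an ordinary 48 kHz / 24-bit container; lossless compression such as FLAC would shrink both figures further.)

```python
# Raw PCM bitrates for the two containers being compared (stereo).
def pcm_mbps(sample_rate_hz, bits_per_sample, channels=2):
    return sample_rate_hz * bits_per_sample * channels / 1e6

hires = pcm_mbps(192_000, 24)      # conventional "high-res" file: ≈ 9.2 Mbps raw
container = pcm_mbps(48_000, 24)   # assumed MQA carrier: ≈ 2.3 Mbps raw
print(f"{hires:.1f} Mbps vs {container:.1f} Mbps "
      f"(~{hires / container:.0f}x smaller, before lossless compression)")
```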
|
|
KeithL
Administrator
Posts: 10,247
|
Post by KeithL on Dec 7, 2016 15:34:21 GMT -5
They seem to be making a lot of DIFFERENT claims... which is what makes it so confusing.

1) The idea of "end-to-end optimization" only works for new content.... if your master is a 30-year-old analog tape, then you can't control the first few steps of the process, because they were done long ago.

2) One minute they're talking about getting approval for the encoded content, and authenticating that what you're listening to is exactly what the studio intended. The next minute, or five minutes ago, they were saying that they'll encode whatever copy the studio sends them, and take the studio's word that they were given the best copy, "because we aren't the quality police". While there's no direct contradiction there, one statement seems to suggest that they're going to go to great lengths to ensure that you get the best copy possible, while the other seems to suggest that they'll just feed whatever the studio sends them through the converter.

3) They've as much as said that there are different LEVELS of processing. One is to carefully hand-process the recording and attempt to reverse-engineer the flaws in the original conversion process, based on research about the original equipment used to encode it. The other seems to suggest that their encoder is able to automatically detect and correct most common flaws. There is a clear implication that there will be a "basic version" and a "premium version" of this processing - so how will you know which one you got?

4) Separate from all this, they are claiming that all "MQA certified DACs" not only have the ability to decode MQA, but also have been carefully optimized to deliver an accurate time response. (In other words, they're claiming that any DAC with their sticker on it has benefits that really have nothing specific to do with their process... although they contribute to that DAC being able to do a good job of reproducing their content.) On one hand, this could be useful. But, on the other, it makes it difficult to know whether it's their content that sounds better, or whether it's just a better-sounding DAC it's playing through.

5) I noticed that, in their technical descriptions, they did use the word "LOSSLESS" whenever possible to describe the parts of the process that are in fact lossless, and I don't recall a single instance of seeing the word "LOSSY" anywhere. They didn't lie, or even technically say anything that was specifically misleading, but they sure didn't go out of their way to describe MQA as "a new and better lossy CODEC" either.

6) As you noted, a lot of people seem to agree that their processing makes the file sound DIFFERENT, and most seem to find the difference an improvement. However, it can often be difficult to tell the difference between different and better - especially if there's no option to compare both to the original, or to turn off the modification and compare the versions with and without it.

7) Note that it's not at all unreasonable to suggest that, by correcting certain legitimate problems, and only discarding truly useless information, their process might actually deliver a NET IMPROVEMENT in sound. However, I am also inclined to note that very few files have been made available for review, and I assume those were carefully chosen and subjected to the "premium level" of processing. In other words, we've heard that reviewers reacted favorably to a few dozen files, and we've also heard that all the thousands of albums in Warner's library have been "processed", but we haven't heard very many of those albums. (Of course they cherry-picked the ones that came out well to send to reviewers... so would I.) So, as far as I'm concerned, I'm still waiting to hear how much benefit we'll see on MOST albums..... if any. For starters, I'm waiting to see all those "remastered Warner albums" available on Amazon (especially if they're supposed to sound better even on my non-MQA DAC).

8) They are also marketing their CODEC as being able to deliver a higher-quality signal than others at reduced bit rates - which is indeed useful and valuable for streaming services and their customers. However, this is a whole different piece of the puzzle from the rest. Are we suggesting that every album you listen to on Tidal will benefit from ALL the improvements their processing offers? Or are we suggesting that they will be able to deliver a stream that sounds as good as the original CD - but not necessarily better - using less bandwidth? Or are we expecting both benefits? (If they actually deliver slightly-better-than-CD quality over lower bandwidth, then they will have a very worthwhile product.... but that won't necessarily make a compelling argument to buy the other parts of their ecosystem.... like a special DAC. They're obviously trying to position it as a package deal... but that may not be the case.)

To me, the exact answers to all these questions are still somewhat vague...... Much like their promise that "MQA will deliver audio to you at the same quality or better than the master".... "but, of course, the studio doesn't actually want to give away the master". (When you sum up their claims, it sounds as if "the studio IS willing to give you the master - but only if you buy an MQA DAC to play it on"...... which is... interesting.) I'm waiting to see what actually happens..... But, if they continue to deliver confusion, and don't back it up with some compelling results sometime soon, eventually people are going to lose interest. Remember DVD-A? Remember HDCD?
|
|
|
Post by rbk123 on Dec 8, 2016 20:34:31 GMT -5
By the way, there is an even bigger reason for not waiting for the XMC-2, but you'll all have to wait for the official announcement next week. Seriously! Let me just note that all good things come to an end... more to come on this. Now, don't freak out everyone, just get ready to get off the fence if you're sitting on it. Metaphorically speaking... Peace out, Big Dan

Any hints as to when this week the big announcement is coming?

Bump.
|
|
|
Post by Casey Leedom on Dec 8, 2016 20:56:53 GMT -5
Given that C.E.S. 2017 is January 5-8 and that's less than a month away and there are lots of non-work distractions like the holidays/family/friends/etc. between now and then, I'm guessing that the Emotiva Team are pretty Head Down right now. Looking at the calendar, it looks like they have about 11 more working days to get anything done that they want to show. Sure, they can throw in a few hours on the weekends, but they all have families and lives too. I'll just wait till the announcement at C.E.S. 2017; it's only four weeks away.
Casey
|
|
|
Post by mickseymour on Dec 9, 2016 1:38:45 GMT -5
Any hints as to when this week the big announcement is coming?

Bump.

Didn't I see a post from Dan or Keith saying the new HDMI board will have all ports HDMI 2.0b and HDCP 2.2, AND that it is home-installable? That feels like a big announcement and a reason to upgrade rather than wait for the XMC-2.
|
|
|
Post by cwt on Dec 9, 2016 5:28:06 GMT -5
Didn't I see a post from Dan or Keith saying the new HDMI board will have all ports HDMI 2.0b and HDCP 2.2, AND that it is home-installable? That feels like a big announcement and a reason to upgrade rather than wait for the XMC-2.

Nice thought, and it fits the bill, but not momentous enough, I perceive. If you read Dan's post as a whole thought process, without the paragraph breaks, it sounds like an EOL announcement, with the "all good things" quip - almost, but for the modularity promises. In any case, something very tempting ;) It's no coincidence that the XMC-2 announcement timing has an influence on all this...
|
|