|
Post by novisnick on Oct 7, 2021 11:13:24 GMT -5
One man's treasure is another's trash! And here we GO again! 😋 While sympathizing with the sentiment, still, stay out of the MQA dumpster - Rich P.S. Also, don't buy Hunter's artwork I’ve always assumed that you were the high bidder! Keep your trash out of my joy! 😳
|
|
richb
Sensei
Oppo Beta Group - Audioholics Reviewer
Posts: 890
|
Post by richb on Oct 7, 2021 12:19:43 GMT -5
While sympathizing with the sentiment, still, stay out of the MQA dumpster - Rich P.S. Also, don't buy Hunter's artwork I’ve always assumed that you were the high bidder! Keep your trash out of my joy! 😳 I strongly suspect Emotiva will NOT implement MQA support. It is primarily an HT processor, and there are features still pending in that space, such as permitting all upmixers (DTS to Dolby, Dolby to DTS), DTS:X Pro, and Dirac Live bass management. As far as MQA is concerned, technically it is inferior to real HD audio in all regards. - Rich
|
|
|
Post by novisnick on Oct 7, 2021 13:12:43 GMT -5
I’ve always assumed that you were the high bidder! Keep your trash out of my joy! 😳 I strongly suspect Emotiva will NOT implement MQA support. It is primarily an HT processor, and there are features still pending in that space, such as permitting all upmixers (DTS to Dolby, Dolby to DTS), DTS:X Pro, and Dirac Live bass management. As far as MQA is concerned, technically it is inferior to real HD audio in all regards. - Rich I don’t believe Emotiva will incorporate MQA into anything. Demand would need to be there, as it is no longer new to the market. Inferior? I do like the sound of a lot of MQA-encoded music. Other music, it doesn’t do anything for. Really doesn’t matter what you or anybody else thinks of it, IMO.
|
|
|
Post by JNieves on Oct 8, 2021 9:19:48 GMT -5
P.S. Also, don't buy Hunter's artwork But - I have to complete my collection of $500,000 finger paintings!
|
|
|
Post by aswiss on Oct 9, 2021 6:22:12 GMT -5
I strongly suspect Emotiva will NOT implement MQA support. It is primarily an HT processor, and there are features still pending in that space, such as permitting all upmixers (DTS to Dolby, Dolby to DTS), DTS:X Pro, and Dirac Live bass management. As far as MQA is concerned, technically it is inferior to real HD audio in all regards. - Rich I don’t believe Emotiva will incorporate MQA into anything. Demand would need to be there, as it is no longer new to the market. Inferior? I do like the sound of a lot of MQA-encoded music. Other music, it doesn’t do anything for. Really doesn’t matter what you or anybody else thinks of it, IMO. For me, MQA is marketing and profit for MQA. I'm on Qobuz, so no MQA anyway, but lots of hi-res audio - and I have an external network streamer/DAC that is capable of reading MQA if there were ever a need. As richb wrote - please bring the HT-related stuff.
|
|
|
Post by krobar on Oct 10, 2021 6:37:39 GMT -5
Maybe I was unclear, but your response is generalised and we were talking about DTS:X. AFAIK the limits for DTS:X are as follows; if this is wrong please let me know.
DTS:X
Max Supported Input Channels: 15.2 (15 assumes 0 objects; combined base channel and object limit is 15)
Max Supported Objects: 15 (assumes 0 fixed base channels other than up to 2 LFE)
Max Supported Output Channels: 11.1 (7.1.4, 9.1.2, etc.)
DTS:X Pro
Max Supported Input Channels: 15.2 (15 assumes 0 objects; combined base channel and object limit is 15)
Max Supported Objects: 15 (assumes 0 fixed base channels other than up to 2 LFE)
Max Supported Output Channels: 13.1 to 30.2 (depends on system)
I'm not differentiating between height and floor-level channels above because the DTS:X limits do not, but LFE channels are separate. There are some bitrate limitations as well. AFAIK that is why the Trinnov demo uses DTS-HD HR - trying to use MA exceeded the bitrate limits for them. I think I need to clarify something here that seems to confuse a lot of people... (and Trinnov did not seem to make it any clearer in their article). 1. CHANNELS ARE NOT OBJECTS and OBJECTS ARE NOT CHANNELS. 2. There is no specific reason why the number of objects and the number of channels must or even should coincide.
An "object oriented immersive sound track" consists of two things: 1) bed channels - which are static tracks that are intended to be played from one or more specific speakers - the Left Front channel is a bed channel
2) objects - which are individual sounds which are assigned to one or more speakers by the renderer at playback time - the alien spaceship flying in circles over your head could be mixed into the bed channels or it could be a sound object - if it is mixed into the bed channels it will always play from the same speaker or speakers (unless it is mixed into other speakers because your system doesn't include the speaker it is assigned to) - if it is a sound object it may play from different speakers on different systems - depending on how many speakers you have and which ones the renderer decides to assign it to
The number of bed tracks that can be handled, and the ability to upmix them, is one characteristic of a particular system. The number of objects that can be handled, how they are assigned, and where they can be assigned, is another characteristic. And the number of output channels (speakers) that are supported is another distinct thing.
But, even though these are all related, they are separate things, and must be considered separately.
(And, yes, having more channels, and being able to handle more objects, both contribute to being able to position sounds more precisely around you.)
So for example: You can have five channels and no objects at all (all bed channels)...
Or you can have twenty objects in five channels (with a whole bunch of stuff in the bed channels - or almost nothing at all)... Or you can have five objects in twenty channels (with or without anything much in the bed channels)... Currently most movies have a lot of relatively static content in the bed channels... and reserve the objects for unique or individual sounds that move around a lot...
However, that is by no means required, and may especially not be true at all for movies that have been converted to Dolby Atmos from an older format...
In fact, these choices often come down to philosophy, on the part of the sound engineer... Would he or she prefer to place that sound "in the front left channel" or "ahead and 45 degrees to the left of the listener"? (Note that, depending on your specific system, there may be a subtle distinction between the results of those two choices.)
There isn't one, as there is no such thing as DTS:X Pro content. Receivers/prepros with DTS:X are limited to 12 channels (e.g. 7.1.4), but DTS:X Pro receivers/prepros support between 14 and 32 channels. DTS:X content supports up to 15 channels/objects and 2 subwoofers. DTS:X Pro can upmix channel-based content to additional speakers. Objects can move between multiple speakers, but if they are mixed in a static position this can be problematic, as they cannot be upmixed. Object-based DTS:X content is rare. Most DTS:X content (e.g. most Universal releases) is channel-based 7.1.4, which DTS:X Pro can upmix further. Some of the WellGo USA releases were a mix of channels and objects. Most IMAX Enhanced releases are 7.1.4 plus a single static object a little below the centre height position.
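As a rough illustration of the channels-versus-objects distinction above, here is a minimal Python sketch; the speaker layout, the nearest-speaker panning rule, and the example sounds are illustrative assumptions, not the actual DTS:X or Atmos renderer:

    from dataclasses import dataclass

    # A bed channel is tied to a named speaker; an object carries a position and
    # is assigned to speakers by the renderer at playback time.

    @dataclass
    class Speaker:
        name: str
        azimuth_deg: float        # 0 = straight ahead, positive = listener's left

    @dataclass
    class BedChannel:
        name: str                 # always plays from the speaker of the same name

    @dataclass
    class SoundObject:
        label: str
        azimuth_deg: float        # where the mixer placed the sound, not which speaker

    LAYOUT = [Speaker("L", 30), Speaker("C", 0), Speaker("R", -30),
              Speaker("Ls", 110), Speaker("Rs", -110)]

    def render_object(obj: SoundObject, layout: list) -> str:
        """Toy renderer: send the object to whichever speaker is closest."""
        return min(layout, key=lambda s: abs(s.azimuth_deg - obj.azimuth_deg)).name

    dialogue = BedChannel("L")                          # fixed by definition
    ship = SoundObject("spaceship", azimuth_deg=100)    # depends on the playback layout

    print(dialogue.name, "-> L")
    print(ship.label, "->", render_object(ship, LAYOUT))   # "Ls" on this 5-speaker layout

On a layout with more speakers the same object could land somewhere else entirely, which is the whole point of carrying it as an object rather than mixing it into a bed channel.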
|
|
tparm
Minor Hero
Posts: 16
|
Post by tparm on Oct 10, 2021 7:26:06 GMT -5
Speaking of the AKM 4490 DAC within the G3 pre-pros, how many of you use this built-in DAC for digital music listening? Or do you have and prefer another outboard DAC, and if so, which ones? This might have been beaten to death already, but being a new adopter I thought it would be an interesting discussion. Thanks. PS: Since acquiring the RMC-1L I have been listening a lot lately using its DAC via AES/EBU, USB from an M1 Mac Mini, and occasionally coax, and have been impressed way more than I would have thought. Have not used a Delta Sigma (or is that Sigma Delta?) DAC in a long time, but so far I'm impressed. Look forward to what others have to say. I run my Node and ERC-4 through a Gustard X16 DAC and then XLR to the 1L; in Reference Stereo this sounds better to me than running digital from the sources to the 1L and using the internal DACs. I also used Preset 2 to set my mains to large (LSiM 707s) and ran Dirac, but I still prefer the uncorrected X16 for 2CH.
|
|
|
Post by hsamwel on Oct 10, 2021 14:33:14 GMT -5
Speaking of the AKM 4490 DAC within the G3 pre-pros, how many of you use this built-in DAC for digital music listening? Or do you have and prefer another outboard DAC, and if so, which ones? This might have been beaten to death already, but being a new adopter I thought it would be an interesting discussion. Thanks. PS: Since acquiring the RMC-1L I have been listening a lot lately using its DAC via AES/EBU, USB from an M1 Mac Mini, and occasionally coax, and have been impressed way more than I would have thought. Have not used a Delta Sigma (or is that Sigma Delta?) DAC in a long time, but so far I'm impressed. Look forward to what others have to say. I have three music sources. Two are connected with both analog and digital; one is limited to digital only. A Pioneer UDP-LX800 is connected with two HDMIs and balanced analog. One of the HDMIs is an audio-dedicated connection with video turned off. With this I play CDs and SACDs. With the balanced connection I can compare the ESS 9026 PRO DACs in the LX800 to the AKM 4490 in the RMC-1. A Pioneer N-70AE is connected with optical and RCA analog. It also has an ESS DAC, although a 9016. Like the LX800, it is a fully balanced design, but the RMC-1 only has one balanced input, and I mostly use the digital input now that I use Dirac. Because of this, and some other things like Roon, I bought the Primare NP5, which is a network streamer without a DAC. I have this connected with coax digital to the RMC-1. As mentioned before, I mainly use the RMC-1's AKM 4490 DAC due to Dirac. But sometimes I try Reference Stereo... it does not sound as good - less clear voices and boomier bass in my room.
|
|
|
Post by hsamwel on Oct 10, 2021 15:01:51 GMT -5
So, is there a possibility of Emotiva adding full rendering of MQA to the Gen 3 processors? I have seen other (2-channel) integrateds with the same DAC (AKM 4490) that have full rendering of MQA - specifically the Hegel H390/590 series. Is it even possible, or does MQA need other hardware for the full third-stage rendering? I sure hope not. It is a complete and utter grift. If you want hi-res streaming, try QOBUZ, Apple, Amazon... - Rich If I remember correctly, the first unfold is pretty much a full 88.2/96kHz 24-bit signal without much, if any, change at all from a lossless version. It's the second and third that make some changes. Whether it is worse than resampled (upsampled) hi-res from Qobuz I can't judge, but there should be a difference in the frequency content, just by what MQA is said to contain. TIDAL uses this for most of its content. To me, who only listens to the first unfold, it sounds really good. I mostly asked if it is at all possible to implement. Maybe it needs some extra hardware for the full rendering? There aren't that many devices that do the full rendering, mostly really high-end hi-fi gear. Strange that most high-end brands have added MQA if it's so bad?
|
|
|
Post by hsamwel on Oct 10, 2021 15:15:46 GMT -5
Maybe I was unclear, but your response is generalised and we were talking about DTS:X. AFAIK the limits for DTS:X are as follows; if this is wrong please let me know.
DTS:X
Max Supported Input Channels: 15.2 (15 assumes 0 objects; combined base channel and object limit is 15)
Max Supported Objects: 15 (assumes 0 fixed base channels other than up to 2 LFE)
Max Supported Output Channels: 11.1 (7.1.4, 9.1.2, etc.)
DTS:X Pro
Max Supported Input Channels: 15.2 (15 assumes 0 objects; combined base channel and object limit is 15)
Max Supported Objects: 15 (assumes 0 fixed base channels other than up to 2 LFE)
Max Supported Output Channels: 13.1 to 30.2 (depends on system)
I'm not differentiating between height and floor-level channels above because the DTS:X limits do not, but LFE channels are separate. There are some bitrate limitations as well. AFAIK that is why the Trinnov demo uses DTS-HD HR - trying to use MA exceeded the bitrate limits for them. I think I need to clarify something here that seems to confuse a lot of people... (and Trinnov did not seem to make it any clearer in their article). 1. CHANNELS ARE NOT OBJECTS and OBJECTS ARE NOT CHANNELS. 2. There is no specific reason why the number of objects and the number of channels must or even should coincide.
An "object oriented immersive sound track" consists of two things: 1) bed channels - which are static tracks that are intended to be played from one or more specific speakers - the Left Front channel is a bed channel
2) objects - which are individual sounds which are assigned to one or more speakers by the renderer at playback time - the alien spaceship flying in circles over your head could be mixed into the bed channels or it could be a sound object - if it is mixed into the bed channels it will always play from the same speaker or speakers (unless it is mixed into other speakers because your system doesn't include the speaker it is assigned to) - if it is a sound object it may play from different speakers on different systems - depending on how many speakers you have and which ones the renderer decides to assign it to
The number of bed tracks that can be handled, and the ability to upmix them, is one characteristic of a particular system. The number of objects that can be handled, how they are assigned, and where they can be assigned, is another characteristic. And the number of output channels (speakers) that are supported is another distinct thing.
But, even though these are all related, they are separate things, and must be considered separately.
(And, yes, having more channels, and being able to handle more objects, both contribute to being able to position sounds more precisely around you.)
So for example: You can have five channels and no objects at all (all bed channels)...
Or you can have twenty objects in five channels (with a whole bunch of stuff in the bed channels - or almost nothing at all)... Or you can have five objects in twenty channels (with or without anything much in the bed channels)... Currently most movies have a lot of relatively static content in the bed channels... and reserve the objects for unique or individual sounds that move around a lot...
However, that is by no means required, and may especially not be true at all for movies that have been converted to Dolby Atmos from an older format...
In fact, these choices often come down to philosophy, on the part of the sound engineer... Would he or she prefer to place that sound "in the front left channel" or "ahead and 45 degrees to the left of the listener"? (Note that, depending on your specific system, there may be a subtle distinction between the results of those two choices.)
So what are the limits of Atmos? I guess they have less of a bitrate issue, at least? I would think 15 objects is pretty much enough IRL. If I remember correctly, Atmos works a little differently, with the bed always locked as channels and then objects for the rest of the channels?
|
|
|
Post by jbrunwa on Oct 10, 2021 16:07:56 GMT -5
I would like to see a list of outstanding bugs at the top so anyone who is experiencing the same bugs can see that their bug is already captured. I'm also hoping that we can track in which release each bug was resolved. The goal is to avoid seeing a hundred posts about one bug, which gives the perception that there are 100 problems vs. 1 problem. I intend to use the 2nd post, which has the current public firmware "Release Notes"; that way reported bugs can be listed under those notes. This will also allow Lonnie/Ray/Damon/KeithL to see a list of what bugs are being reported and how to reproduce them. As each issue gets fixed, it will be marked as such. Is this good? If any of you have a better way, we're open to whatever works best for communication purposes. I still think a list of outstanding bugs is needed and would save Emotiva and customers a lot of time and money.
|
|
KeithL
Administrator
Posts: 10,273
|
Post by KeithL on Oct 11, 2021 11:31:46 GMT -5
The short answer is that there is no short answer.
The CINEMA version of Atmos essentially supports up to 10 bed channels and up to 118 additional individual objects.
However, being able to encode that many discrete objects, at full TrueHD quality, would require A LOT more bandwidth than would fit on a Blu-Ray disc (even not counting the video).
Therefore, with the HOME version, such as is found on Dolby Atmos Blu-Ray discs, some of the objects may be combined into "object groups" to save bandwidth.
Note 1: Remember that, when we talk about how many objects you can have, we're talking about simultaneous objects.
Note 2: Beds and objects are different things; and, yes, Atmos has "fixed beds" and "discrete objects". However, in the home version, some of those objects may be members of "spatially encoded object groups".
It's also worth noting that most movies probably don't use more than a few discrete objects or groups of objects to begin with. (For much the same reason that armies are subdivided into platoons, and squads, rather than the general addressing each soldier individually.)
Let's assume that you have a flight of a dozen bombers, one gets hit, and crashes into the ground below. The sound engineer could have used a separate object for each plane... But, since they're all flying as a single group, it's much more efficient to use one large object for the complete squadron...
Then, when that one plane is hit, he or she will create a new object that represents just that one plane. (Just as you might order an entire platoon to move forward, tell one squad to flank left, then order one individual soldier to scout ahead.)
The most important thing to remember is this. The purpose of adding more speakers is NOT to allow the proper handling of more objects or channels. The overall goal is not to produce a bunch of channels; the goal is to produce a single cohesive sound stage.
The purpose of adding more speakers is to get more precise control over where sounds originate in your room. Even in a simple stereo system, in a symmetrical room, it can be difficult to accurately position individual sounds in the sound stage. This is even more difficult with a 3-dimensional sound stage in a real physical room. The more completely your speakers "cover the entire sound field", the more accurately you're going to be able to control where each sound appears to originate. (Much as a center channel speaker can help make even simple centered sounds seem more distinctly located in a stereo setup.) The only thing that matters is how well a particular "immersive sound system" achieves the ability to create an accurate - or at least pleasing - 3D sound stage.
Maybe I was unclear, but your response is generalised and we were talking about DTS:X. AFAIK the limits for DTS:X are as follows; if this is wrong please let me know.
DTS:X
Max Supported Input Channels: 15.2 (15 assumes 0 objects; combined base channel and object limit is 15)
Max Supported Objects: 15 (assumes 0 fixed base channels other than up to 2 LFE)
Max Supported Output Channels: 11.1 (7.1.4, 9.1.2, etc.)
DTS:X Pro
Max Supported Input Channels: 15.2 (15 assumes 0 objects; combined base channel and object limit is 15)
Max Supported Objects: 15 (assumes 0 fixed base channels other than up to 2 LFE)
Max Supported Output Channels: 13.1 to 30.2 (depends on system)
I'm not differentiating between height and floor-level channels above because the DTS:X limits do not, but LFE channels are separate. There are some bitrate limitations as well. AFAIK that is why the Trinnov demo uses DTS-HD HR - trying to use MA exceeded the bitrate limits for them. So what are the limits of Atmos? I guess they have less of a bitrate issue, at least? I would think 15 objects is pretty much enough IRL. If I remember correctly, Atmos works a little differently, with the bed always locked as channels and then objects for the rest of the channels?
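To put the cinema Atmos numbers above into code form, here is a small Python sketch of the 128-element budget and the "object group" idea; the grouping rule (rounding positions together) is purely an illustrative assumption, not Dolby's actual spatial coding:

    MAX_ELEMENTS = 128   # total simultaneous elements in a cinema Atmos mix
    BED_CHANNELS = 10    # e.g. a 9.1 bed
    print("max simultaneous objects:", MAX_ELEMENTS - BED_CHANNELS)   # 118

    def group_objects(objects):
        """Toy 'spatial grouping': objects sitting at (nearly) the same position
        are carried as a single group - the squadron-of-bombers idea above.
        The rounding rule is an illustrative assumption, not Dolby's coder."""
        groups = {}
        for name, pos in objects.items():
            key = tuple(round(c, 1) for c in pos)
            groups.setdefault(key, []).append(name)
        return groups

    # Twelve bombers flying in formation, plus one that has just been hit
    squadron = {f"bomber_{i}": (0.50, 0.90, 0.80) for i in range(12)}
    squadron["bomber_hit"] = (0.70, 0.40, 0.10)

    grouped = group_objects(squadron)
    print("objects in the mix:", len(squadron), "-> groups in the stream:", len(grouped))   # 13 -> 2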
|
|
KeithL
Administrator
Posts: 10,273
|
Post by KeithL on Oct 11, 2021 11:44:15 GMT -5
I think you will find that this varies quite widely between different movies and concerts.
When it comes to synthesizing a surround sound or 3D sound field... The various decoders use different criteria, usually related to relative phase and amplitude, to decide what sounds to assign where, and at what level... Basically, they're "guessing" where each sound should be located, and those guesses may be based on different things... As a result of this, each specific decoder tends to work better with some content, and not so well with other content...
I've always felt that Dolby's upmixer was optimized to work best with "content that was intended to be heard in surround", while Neural:X does better at "creating surround sound from scratch".
When a movie originates in surround sound, then is mixed down to stereo, or to fewer surround channels, some of the phase information from the original surround mix remains, which will affect how it is handled by an upmixer later. (Note that our processor "sees the format of the incoming audio stream" - but we "don't know how it got that way".)
I did a little experiment last night when the house was empty (no kids!). Watching a Netflix movie on my Apple TV (PCM 5.1 output), I switched between Input 1 (Auto) and Input 2 (DTS:Neural:X). The difference was shocking. Using DTS:Neural:X, the channels were level and balanced, and the center channel was clear... The opposite was true with the Surround decoder. For some reason, PCM 5.1 through Surround sounds like crap... When the movie has Netflix Atmos sound, it's fine. I also found that if you set DTS:Neural:X as the default for 5.1 and then play a Netflix movie with "Atmos", the processor switches from DTS to the correct decoder. When the family came back home and we watched a movie, everyone commented on how much better it sounded. On the bright side, I can now dodge the "why don't we use the soundbar" question! Thanks everyone for the suggestion!!!
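For a sense of the kind of phase and amplitude cues an upmixer can "guess" from, here is a tiny Python sketch of a classic matrix-style heuristic - in-phase, equal-level content steers toward the center, out-of-phase content steers toward the surrounds. It is a generic textbook illustration, not the actual Dolby Surround or Neural:X algorithm:

    import numpy as np

    def steering_cues(left, right):
        """Two cues a matrix-style upmixer might look at for a block of audio."""
        corr = np.corrcoef(left, right)[0, 1]                      # +1 in phase, -1 out of phase
        level_db = 20 * np.log10((np.std(left) + 1e-12) / (np.std(right) + 1e-12))
        return {"correlation": round(float(corr), 3), "L-R level (dB)": round(float(level_db), 1)}

    fs = 48_000
    t = np.arange(fs) / fs
    dialogue = 0.5 * np.sin(2 * np.pi * 220 * t)                   # identical in both channels
    ambience = 0.2 * np.random.default_rng(1).standard_normal(t.size)

    print("centered dialogue:  ", steering_cues(dialogue, dialogue))    # corr ~ +1 -> steer to C
    print("out-of-phase effect:", steering_cues(ambience, -ambience))   # corr ~ -1 -> steer to surrounds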
|
|
richb
Sensei
Oppo Beta Group - Audioholics Reviewer
Posts: 890
|
Post by richb on Oct 11, 2021 16:33:52 GMT -5
I sure hope not. It is a complete and utter grift. If you want hi-res streaming, try QOBUZ, Apple, Amazon... - Rich If I remember correctly, the first unfold is pretty much a full 88.2/96kHz 24-bit signal without much, if any, change at all from a lossless version. It's the second and third that make some changes. Whether it is worse than resampled (upsampled) hi-res from Qobuz I can't judge, but there should be a difference in the frequency content, just by what MQA is said to contain. TIDAL uses this for most of its content. To me, who only listens to the first unfold, it sounds really good. I mostly asked if it is at all possible to implement. Maybe it needs some extra hardware for the full rendering? There aren't that many devices that do the full rendering, mostly really high-end hi-fi gear. Strange that most high-end brands have added MQA if it's so bad? MQA encoding of hi-res audio first down-samples the source to 88.2kHz or 96kHz. Anything above that is fake - it's MQA up-sampling the source to generate a 192kHz or 384kHz display. Even the 88.2/96kHz reproduction of titles sourced at those rates is lossy. The MQA filter is slow and leaky, creating artifacts in the audible range - therefore it is technically inferior. MQA has masterfully captured the imagination of the audiophile press. It has occurred to Apple, Amazon, and others that Atmos music (though lossy in 3D reproduction) offers the possibility of immersive music, which is far more important than proprietary encoding of inaudible frequencies (the very definition of ultrasonic). - Rich
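A quick way to see the point about the 192kHz/384kHz "display" being up-sampled rather than recovered: once the content above 48kHz has been removed by down-sampling to 96kHz, no amount of up-sampling brings it back. A small Python/SciPy sketch - the tone frequencies and rates are illustrative choices, and this uses a generic resampler, not MQA's filters:

    import numpy as np
    from scipy.signal import resample_poly

    fs = 192_000
    t = np.arange(0, 0.1, 1 / fs)
    audible = np.sin(2 * np.pi * 1_000 * t)        # 1 kHz tone
    ultrasonic = np.sin(2 * np.pi * 60_000 * t)    # 60 kHz tone (above the 48 kHz Nyquist of a 96 kHz stream)
    x = audible + ultrasonic

    down = resample_poly(x, up=1, down=2)          # 192 kHz -> 96 kHz
    back = resample_poly(down, up=2, down=1)       # 96 kHz  -> 192 kHz again

    def tone_level(sig, freq, fs):
        """Magnitude of one FFT bin, as a rough 'is the tone still there?' check."""
        spec = np.abs(np.fft.rfft(sig)) / len(sig)
        return spec[int(round(freq * len(sig) / fs))]

    print("1 kHz:  before %.3f  after %.3f" % (tone_level(x, 1_000, fs), tone_level(back, 1_000, fs)))
    print("60 kHz: before %.3f  after %.3f" % (tone_level(x, 60_000, fs), tone_level(back, 60_000, fs)))
    # The 60 kHz component is essentially gone after the round trip.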
|
|
|
Post by novisnick on Oct 11, 2021 22:43:51 GMT -5
If I remember correctly, the first unfold is pretty much a full 88.2/96kHz 24-bit signal without much, if any, change at all from a lossless version. It's the second and third that make some changes. Whether it is worse than resampled (upsampled) hi-res from Qobuz I can't judge, but there should be a difference in the frequency content, just by what MQA is said to contain. TIDAL uses this for most of its content. To me, who only listens to the first unfold, it sounds really good. I mostly asked if it is at all possible to implement. Maybe it needs some extra hardware for the full rendering? There aren't that many devices that do the full rendering, mostly really high-end hi-fi gear. Strange that most high-end brands have added MQA if it's so bad? MQA encoding of hi-res audio first down-samples the source to 88.2kHz or 96kHz. Anything above that is fake - it's MQA up-sampling the source to generate a 192kHz or 384kHz display. Even the 88.2/96kHz reproduction of titles sourced at those rates is lossy. The MQA filter is slow and leaky, creating artifacts in the audible range - therefore it is technically inferior. MQA has masterfully captured the imagination of the audiophile press. It has occurred to Apple, Amazon, and others that Atmos music (though lossy in 3D reproduction) offers the possibility of immersive music, which is far more important than proprietary encoding of inaudible frequencies (the very definition of ultrasonic). - Rich Blah Blah Blah how much time have you spent with a dac that fully unfolds MQA? Blah Blah I like a lot of it Blah Blah and they have recorded directly using mqa Blah Blah 🤣😂🤣😂🎶🎶🎶🎶🎶❤️❤️❤️
|
|
richb
Sensei
Oppo Beta Group - Audioholics Reviewer
Posts: 890
|
Post by richb on Oct 12, 2021 9:00:45 GMT -5
MQA encoding of hi-res audio first down-samples the source to 88.2kHz or 96kHz. Anything above that is fake - it's MQA up-sampling the source to generate a 192kHz or 384kHz display. Even the 88.2/96kHz reproduction of titles sourced at those rates is lossy. The MQA filter is slow and leaky, creating artifacts in the audible range - therefore it is technically inferior. MQA has masterfully captured the imagination of the audiophile press. It has occurred to Apple, Amazon, and others that Atmos music (though lossy in 3D reproduction) offers the possibility of immersive music, which is far more important than proprietary encoding of inaudible frequencies (the very definition of ultrasonic). - Rich Blah Blah Blah how much time have you spent with a dac that fully unfolds MQA? Blah Blah I like a lot of it Blah Blah and they have recorded directly using mqa Blah Blah 🤣😂🤣😂🎶🎶🎶🎶🎶❤️❤️❤️ I think I have seen this response before: - Rich
|
|
|
Post by monkumonku on Oct 12, 2021 9:45:08 GMT -5
Blah Blah Blah how much time have you spent with a dac that fully unfolds MQA? Blah Blah I like a lot of it Blah Blah and they have recorded directly using mqa Blah Blah 🤣😂🤣😂🎶🎶🎶🎶🎶❤️❤️❤️ I think I have seen this response before: - Rich Captain Kirk addresses Congress?
|
|
|
Post by novisnick on Oct 12, 2021 13:31:26 GMT -5
Blah Blah Blah how much time have you spent with a dac that fully unfolds MQA? Blah Blah I like a lot of it Blah Blah and they have recorded directly using mqa Blah Blah 🤣😂🤣😂🎶🎶🎶🎶🎶❤️❤️❤️ I think I have seen this response before: - Rich You got it Captain! 😋
|
|
KeithL
Administrator
Posts: 10,273
|
Post by KeithL on Oct 12, 2021 15:35:36 GMT -5
I'll admit I haven't read up on all the technical distinctions, but that agrees with my impression... The biggest difference between DTS:X and DTS:X Pro is that DTS:X Pro supports more output channels (speakers). (And, before DTS:X Pro, plain old DTS:X supported fewer output channels than Dolby Atmos... which meant that "DTS:X was losing the channel arms race with Dolby Atmos".)
Personally, I'm a lot more interested in little details like how well the audio was mastered than in the specific limitations of the system that was used.
Maybe I was unclear, but your response is generalised and we were talking about DTS:X. AFAIK the limits for DTS:X are as follows; if this is wrong please let me know.
DTS:X
Max Supported Input Channels: 15.2 (15 assumes 0 objects; combined base channel and object limit is 15)
Max Supported Objects: 15 (assumes 0 fixed base channels other than up to 2 LFE)
Max Supported Output Channels: 11.1 (7.1.4, 9.1.2, etc.)
DTS:X Pro
Max Supported Input Channels: 15.2 (15 assumes 0 objects; combined base channel and object limit is 15)
Max Supported Objects: 15 (assumes 0 fixed base channels other than up to 2 LFE)
Max Supported Output Channels: 13.1 to 30.2 (depends on system)
I'm not differentiating between height and floor-level channels above because the DTS:X limits do not, but LFE channels are separate. There are some bitrate limitations as well. AFAIK that is why the Trinnov demo uses DTS-HD HR - trying to use MA exceeded the bitrate limits for them. I think I need to clarify something here that seems to confuse a lot of people... (and Trinnov did not seem to make it any clearer in their article). 1. CHANNELS ARE NOT OBJECTS and OBJECTS ARE NOT CHANNELS. 2. There is no specific reason why the number of objects and the number of channels must or even should coincide.
An "object oriented immersive sound track" consists of two things: 1) bed channels - which are static tracks that are intended to be played from one or more specific speakers - the Left Front channel is a bed channel
2) objects - which are individual sounds which are assigned to one or more speakers by the renderer at playback time - the alien spaceship flying in circles over your head could be mixed into the bed channels or it could be a sound object - if it is mixed into the bed channels it will always play from the same speaker or speakers (unless it is mixed into other speakers because your system doesn't include the speaker it is assigned to) - if it is a sound object it may play from different speakers on different systems - depending on how many speakers you have and which ones the renderer decides to assign it to
The number of bed tracks that can be handled, and the ability to upmix them, is one characteristic of a particular system. The number of objects that can be handled, how they are assigned, and where they can be assigned, is another characteristic. And the number of output channels (speakers) that are supported is another distinct thing.
But, even though these are all related, they are separate things, and must be considered separately.
(And, yes, having more channels, and being able to handle more objects, both contribute to being able to position sounds more precisely around you.)
So for example: You can have five channels and no objects at all (all bed channels)...
Or you can have twenty objects in five channels (with a whole bunch of stuff in the bed channels - or almost nothing at all)... Or you can have five objects in twenty channels (with or without anything much in the bed channels)... Currently most movies have a lot of relatively static content in the bed channels... and reserve the objects for unique or individual sounds that move around a lot...
However, that is by no means required, and may especially not be true at all for movies that have been converted to Dolby Atmos from an older format...
In fact, these choices often come down to philosophy, on the part of the sound engineer... Would he or she prefer to place that sound "in the front left channel" or "ahead and 45 degrees to the left of the listener"? (Note that, depending on your specific system, there may be a subtle distinction between the results of those two choices.)
|
|
KeithL
Administrator
Posts: 10,273
|
Post by KeithL on Oct 12, 2021 16:28:52 GMT -5
Obviously, if the MQA-encoded signal was identical to the original, then it would also sound exactly the same. The reason it sounds different is that MQA alters the signal.
If you play back MQA-encoded content on a system that doesn't support MQA then you lose quality... and this is not in dispute.
(You've lost the part of the bandwidth that was used to encode the MQA stuff that you aren't using.)
The first unfold does the biggest part of the decoding process (and clients like the Tidal client can do that part in software). The second unfold recovers a bit more of the data that was stored in the MQA encoding... And the "third unfold" is really just the special upsampling filter that MQA uses (rather than the "regular upsampling filter" most DACs would be using otherwise)...
Personally I tend to find a distinction which is rarely discussed to be pretty darned important...
If something was ACTUALLY ORIGINALLY RECORDED USING MQA ENCODING... Then there is at least the potential that it will capture the details of the original performance better than if some other encoding method, like PCM or DSD, was used. However there are very few things that are currently being actually recorded in MQA... And, obviously, nothing that was recorded or mastered before MQA even existed is on that list...
The second level is material which has been encoded into MQA "after the fact". This list includes EVERYTHING that was released in MQA versions but was recorded or mastered "pre-MQA". For this material MQA is simply a form of post-processing that is intended to alter the way the recording sounds in a pleasant fashion. In principle at least some material was "carefully hand processed using MQA re-encoding to reverse engineer flaws or limitations in the original recording".
However, based on current information, very little material actually receives this "white glove hand processing".
The final, and lowest, level is material that was simply passed through an automated MQA-encoding process... This processing is intended to attempt to (at least partially) "reverse engineer flaws in the original conversion to digital format and correct them"... However, since many recordings include multiple tracks, which may have even been converted using different equipment, before then being mixed together... This is, at best, a "one size fits all" solution... intended to correct for some flaws that are claimed to be common in MOST analog-to-digital conversion hardware... (A good analogy would be the way we employ sharpening in Photoshop "because it makes a lot of things look better".)
However, as a result of this limitation, it could fairly be described as "an attempt to alter the way the material sounds in a pleasing way that hopefully makes it closer to the original". (And claims that it accurately reproduces the original are somewhat overenthusiastic.)
Either way, at this point, the "MQA corrected content" could simply be delivered as a 24/96k PCM file. (They could simply "use the MQA processing to improve the content" then deliver it to you in some other more standard format.) (In a sense this is what happens when Tidal "does the first unfold in software" - and gives you a 24/96k PCM output that you can play on an ordinary non-MQA DAC.)
However, as part of the entire process, MQA also includes a bandwidth reduction that delivers a benefit equivalent to compression. This benefit is not especially useful for content stored on disc - but can be significant for content that is streamed.
In the end, with MQA you are getting "content that takes up less bandwidth than the original but also sounds different than the original"... All that remains is for you to decide whether you consider that difference in sound to be a benefit or a flaw...
An interesting thought is that, if the change in sound was the intended benefit, the "MQA correction encoder" could simply be built into a DAC or processor. This would enable the end user to get the benefit of the change in sound without having to acquire MQA enabled content... Hmmmmm.....
In any case, for many manufacturers of audio gear, the goal is to "fill their trophy wall with logos"... (Another term for this is "trying to be all things to all people".)
And, to be quite blunt, this is enough reason for many to add yet another audio format to the list of those they support... I don't know what percentage of people who purchase "MQA enabled" equipment actually listen to MQA-encoded content on it... (I suspect that there are also a few who can't hear the difference - but derive great satisfaction from seeing the little LED light up.) (But, then, I've also spoken to many people who do hear the difference, and DO NOT consider it to be an improvement.)
But I do know that overall demand for MQA among our customers is relatively low... (And this is even more true when asked: "Would you pay extra for it?") (And do remember that you can run the Tidal client on your computer - and enjoy the benefits of that first important unfold - on ANY OF OUR GEAR.)
Which is why we have chosen not to divert development resources, and licensing costs, to adding it to our products...
I sure hope not. It is a complete and utter grift. If you want hi-res streaming, try QOBUZ, Apple, Amazon... - Rich If I remember correctly, the first unfold is pretty much a full 88.2/96kHz 24-bit signal without much, if any, change at all from a lossless version. It's the second and third that make some changes. Whether it is worse than resampled (upsampled) hi-res from Qobuz I can't judge, but there should be a difference in the frequency content, just by what MQA is said to contain. TIDAL uses this for most of its content. To me, who only listens to the first unfold, it sounds really good. I mostly asked if it is at all possible to implement. Maybe it needs some extra hardware for the full rendering? There aren't that many devices that do the full rendering, mostly really high-end hi-fi gear. Strange that most high-end brands have added MQA if it's so bad?
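As a purely conceptual sketch of the "fold"/"unfold" idea in the post above - side information tucked into the low-order bits of a conventional-rate stream, where a non-decoding DAC treats it as low-level noise and a decoder can read it back out - here is a toy in Python. The 8-bit split and the random data are assumptions for illustration; this is not the actual MQA encoder or its filters:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000

    baseband = rng.integers(-2**22, 2**22, size=n, dtype=np.int64)   # pretend 24-bit baseband samples
    hf_info  = rng.integers(0, 2**8, size=n, dtype=np.int64)         # pretend 8-bit "hi-res" side data

    # "Fold": bury the side data in the bottom 8 bits of each baseband sample
    folded = (baseband & ~0xFF) | hf_info

    # A legacy (non-decoding) DAC just plays `folded`: the side data is a small amount of added noise.
    # A decoder "unfolds" by reading those bits back out, exactly:
    assert np.array_equal(folded & 0xFF, hf_info)

    # ...but the baseband itself has permanently lost its bottom 8 bits, which is one
    # reason the delivered signal is lossy relative to the original.
    print("worst-case baseband error (in LSBs):", int(np.max(np.abs(folded - baseband))))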
|
|