|
Post by marcl on Jun 3, 2022 7:17:46 GMT -5
I joined the Audio Engineering Society about three years ago, mostly to get free access to papers, and also with the hope of going to local Philly Chapter meetings and the conference in NY occasionally. But alas, the first local meeting (March 2020), which was to be a presentation and demo of Atmos by Dolby folks at the Comcast Center in Philly, was cancelled due to Covid. Last night was the first in-person meeting since … a demo of the new Atmos production room at Montgomery County Community College (about 10 min from home).

This is a new $2M+ state-of-the-art Atmos room that’s part of a facility with several other stereo rooms in support of a great audio production curriculum. The program was given by David Ivory, Director of Sound Recording and Music Technology, with the help of a few of his students. After a brief opening discussion of Atmos and the facility, we broke into smaller groups, with one going to the Atmos studio for the demo while others toured the smaller studios. (Oh, and there were snacks too!)

In the Atmos studio David described the equipment supporting the 7.1.4 system. All Genelec monitors, one of only nine SSL 9000K mixing consoles in North America … and lots of stuff with knobs!

In the demo David showed the basics of Atmos mixing … i.e. “these are not your father’s pan-pots!” One monitor shows a 3D view of a virtual room with a listener outlined, and the “bed” channels along with sound objects are depicted in their locations in the space. He showed how you can move the sound objects to their locations, and then adjust the “size” of an object … which mixes some of its sound into the bed for a more diffuse effect. Using the tracks from Queen’s Bohemian Rhapsody, he isolated sounds and showed how they were placed around the room. And finally, he played the whole track in Atmos. I enjoyed watching the 3D view, which lit up each object in yellow when it was playing … don’t we wish we had this for movies!
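For anyone curious what that "size" control is doing, here is a rough illustrative sketch of the idea: an object that is fully "small" stays a point source at its position, and as size grows, more of its signal is spread into the bed channels for a diffuse effect. This is my own simplification for intuition only, not Dolby's actual renderer math, and the channel count is an assumption.

```python
# Illustrative sketch of an Atmos-style object "size" control:
# crossfade an object's signal between its point-source rendering
# and a diffuse spread across the bed channels.
# A simplification for intuition only -- NOT Dolby's actual renderer math.

def render_object(signal, size):
    """Split an object's signal into a direct (point-source) part and
    a diffuse part spread equally across the bed channels.
    size=0.0 -> fully point-source; size=1.0 -> fully diffuse."""
    assert 0.0 <= size <= 1.0
    n_bed = 10  # e.g. a 7.1.2 bed (an assumed layout)
    direct = [(1.0 - size) * s for s in signal]
    per_channel = size / n_bed  # spread the diffuse portion evenly
    bed = [[per_channel * s for s in signal] for _ in range(n_bed)]
    return direct, bed

# A quarter-"size" object keeps 75% of its level at its position
# and distributes the rest into the bed.
direct, bed = render_object([1.0, 0.5], size=0.25)
```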
It was interesting that – having the “new toy” – it seemed the whole mix was happening above and behind me except for the lead singer. So when I got home I listened to it with Apple Music on MY 7.1.4 system, and the effect was the same.

A couple other notes … It was interesting that the young students – and even to a degree David, an industry veteran – kept referring to the mind-blowing potential of going from 2-channel to Atmos. It’s as though 5.1 never happened … which, for most consumers and folks in the mainstream music industry … I guess it didn’t. So my questions about “we could have done a lot of this before with 5.1” just kind of fizzled amongst the excitement of “Atmos is a game-changer”.

I did get to ask the question I really wanted to ask, after it was mentioned that the studio was set up by Genelec engineers with their digital room correction system GLM. I confirmed … they correct the room to a flat target … no upward ramp in the bass, no downward ramp in the treble … flat! www.genelec.com/calibration-acoustics

I didn’t get a chance to ask David … but I talked a bit with one guy – an engineer/producer from Atlanta – about the idea of using Atmos in a live recording situation to capture the actual ambience of the room and real positions of the instruments. Like, to make a recording that sounds like musicians playing together in a room! That look … the proverbial “Dan Quayle in headlights” …
|
|
|
Post by simpleman68 on Jun 3, 2022 7:59:54 GMT -5
I didn’t get a chance to ask David … but I talked a bit with one guy – an engineer/producer from Atlanta – about the idea of using Atmos in a live recording situation to capture the actual ambience of the room and real positions of the instruments. Like, to make a recording that sounds like musicians playing together in a room! That look … the proverbial “Dan Quayle in headlights” …

Super cool! I am an hour north of Philly but would love to be involved with something like this. I spend an hour each day reading up on audio "geekery". Been doing that for the last 10 years and really enjoy it.
I have often thought about the use of an Atmos-based system to capture live recordings as you mentioned, but the problem, as it so often is, is the limited market.
What a cool tour that must've been. Scott
|
|
|
Post by marcl on Jun 3, 2022 8:03:04 GMT -5
I didn’t get a chance to ask David … but I talked a bit with one guy – an engineer/producer from Atlanta – about the idea of using Atmos in a live recording situation to capture the actual ambience of the room and real positions of the instruments. Like, to make a recording that sounds like musicians playing together in a room! That look … the proverbial “Dan Quayle in headlights” …

Super cool! I am an hour north of Philly but would love to be involved with something like this. I spend an hour each day reading up on audio "geekery". Been doing that for the last 10 years and really enjoy it.
I have often thought about the use of an Atmos-based system to capture live recordings as you mentioned, but the problem, as it so often is, is the limited market.
What a cool tour that must've been. Scott
It was interesting, though I would have liked a little more structured tech talk. I think most of the attendees were involved in studio recording or live mixing so maybe it was assumed people know a lot already. Keep an eye out for these events. You didn't have to be an AES member to attend this one.
|
|
|
Post by AudioHTIT on Jun 3, 2022 9:07:58 GMT -5
I’ve started using Apple’s Logic Pro on my Mac for home recordings. I noticed the latest versions allow for the creation of Atmos / Spatial mixes, but I’m nowhere near ready to learn that … steep curve.
|
|
KeithL
Administrator
Posts: 10,255
|
Post by KeithL on Jun 3, 2022 9:13:53 GMT -5
I'm going to interject here with a sort of editorial comment... which may or may not be especially "on mark" depending on what you mean.
Dolby Atmos is an encoded audio DELIVERY format. As such, you DO NOT "record in Dolby Atmos". ("Dolby Atmos" is an output option on the recording console.)
You record multi-channel audio with as many microphones as you have... Mix and "engineer" those channels as you like... Then encode the finished mix into "an Atmos encoded delivery package"... (Which may involve additional "mixing options".)
At the point where the original recording is made Dolby Atmos isn't really involved yet. Although the recording engineer will probably make choices based on knowing that the final intent is a surround sound recording. (In many cases the result will be "different mixes" for stereo, and "regular surround", and perhaps Atmos.)
The point is that there would be no particular point in "recording the raw tracks in Atmos". At best what you would be doing would be recording the individual tracks... then encoding them via an Atmos encoder. Which would mean that you would LOSE flexibility because the encoder settings would be "baked in" to the recording. Instead a far better choice would be to simply record the original tracks... then mix them down and encode them later. I'm assuming that you're thinking of the possibility of making an Atmos equivalent of "a direct-to-disc recording". Unfortunately, with surround sound, there are many options, many of which cannot be accurately optimized in advance. That's a nice way of saying that, if you "just recorded what the microphones picked up", it's unlikely you would be very happy with the result. So it really makes far more sense to record all the original tracks... edit them... and encode them later at the END of the workflow.
Note that, with modern digital gear, it's not that cumbersome to record as many channels as you have coming in... Then you will have all of that information to take advantage of later. The "hard part", which is going to be the same both ways, is "handling" the microphones and inputs themselves. (You would actually be limiting your options by encoding the content as you record it initially.)
NOW... as a DELIVERY FORMAT... Dolby Atmos is an excellent choice... because it is designed to scale. Once you've created some Dolby Atmos content, you can play it on an Atmos capable system, or a 7.1 channel system, or a 5.1 channel system, or a stereo system, and it will work well on all of them.
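That scaling is possible because the stream stores positions rather than speaker feeds, and the playback device renders each object to whatever speakers it actually has. A toy sketch of the idea (my own illustration with made-up layouts and an invented weighting scheme; real renderers use far more sophisticated panning):

```python
import math

# Toy illustration: the same object metadata (a 1D position on the
# left/right axis) rendered to two different speaker layouts.
# Layouts and the weighting scheme are invented for illustration;
# real Atmos renderers are far more elaborate.

STEREO   = {"L": -1.0, "R": 1.0}
SURROUND = {"L": -0.5, "R": 0.5, "C": 0.0, "Ls": -1.0, "Rs": 1.0}

def render_gains(obj_x, layout):
    """Weight each speaker by inverse distance to the object's position,
    then normalize so the gains have constant total power."""
    weights = {name: 1.0 / (abs(obj_x - pos) + 0.1)
               for name, pos in layout.items()}
    norm = math.sqrt(sum(w * w for w in weights.values()))
    return {name: w / norm for name, w in weights.items()}

# The same hard-left object produces sensible gains on either layout.
stereo_gains = render_gains(-1.0, STEREO)
surround_gains = render_gains(-1.0, SURROUND)
```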
I didn’t get a chance to ask David .. but I talked a bit with one guy – an engineer/producer from Atlanta – about the idea of using Atmos in a live recording situation to capture the actual ambience of the room and real positions of the instruments. Like, to make a recording that sounds like musicians playing together in a room! That look … the proverbial “Dan Quayle in headlights” … Super cool! I am an hour North of Philly but would love to be involved with something like this. I spend an hour each day reading up on audio "geekery". Been doing that for the last 10 years and really enjoy it. I have often thought about the use of an Atmos based system to capture live recordings as you mentioned but the problem, as often is, limited market. What a cool tour that must've been. Scott
|
|
|
Post by marcl on Jun 3, 2022 9:14:09 GMT -5
I’ve started using Apple’s Logic Pro on my Mac for home recordings, I noticed the latest versions allow for the creation of Atmos / Spatial mixes, but I’m nowhere near ready to learn that, steep curve.

Wow, that's really cool! I would love to be able to play around with something like that at home, even just to make special mixes to demonstrate imaging. I'm a drummer and I have a collection of percussion instruments in addition to drum set stuff. Those instruments could really make for very precise illustrations of location, size and clarity.
|
|
DYohn
Emo VIPs
Posts: 18,485
|
Post by DYohn on Jun 3, 2022 9:29:05 GMT -5
My experience working with companies like Dolby Labs is that their engineering teams often function in silos, meaning I am not surprised that the Atmos engineers had not completely familiarized themselves with Pro Logic and related capabilities except for what they could reuse for their solution. I am surprised about the story that a recording engineer was not aware of all the spatial recording techniques and experiments that have occurred over the last 50 years, although of course using Atmos to accomplish it on the front end is likely not a reality.
|
|
KeithL
Administrator
Posts: 10,255
|
Post by KeithL on Jun 3, 2022 9:29:33 GMT -5
Bear in mind that Apple's "spatial thing" is additional information and processing that is layered on top of Atmos.
The Atmos mix has information about where each channel or object "appears in 3D space". The Apple "spatial" process then adjusts the viewpoint from which you're observing that 3D space.
To use a visual analogy... It's as if Atmos "stores the information to build a 3D model of what you're listening to"... Then the Apple spatial process "moves the location from which you're viewing the 3D model Atmos has created"...
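The analogy can be made concrete: if each object has a position around the listener, a head-tracking layer just rotates those positions into the listener's current frame before rendering. A toy sketch of that one step (my own illustration of the general idea, not Apple's implementation):

```python
import math

def rotate_objects(positions, yaw_degrees):
    """Express object positions in the listener's frame after the
    listener turns their head yaw_degrees to the right.
    Convention: x = listener's right, y = listener's forward."""
    # Turning right (clockwise) makes the scene appear to rotate
    # counterclockwise in the listener's frame.
    a = math.radians(yaw_degrees)
    c, s = math.cos(a), math.sin(a)
    return [(x * c - y * s, x * s + y * c) for x, y in positions]

# A vocalist dead ahead at (0, 1); the listener turns 90 degrees right,
# so the vocalist should now sit on the listener's left, at (-1, 0).
[(x, y)] = rotate_objects([(0.0, 1.0)], yaw_degrees=90)
```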
Knowing Apple I would assume that they're "locking the two together, taking care of all the messy details, then sealing it all in a proprietary package you can't mess with".
It would be interesting to see whether they actually allow you to MAKE Atmos recordings. I'm betting that you cannot make Atmos recordings that can be played on Atmos compatible non-Apple gear. (Although that would be moot unless you have the hardware required to record multi-channel content anyway.)
I’ve started using Apple’s Logic Pro on my Mac for home recordings, I noticed the latest versions allow for the creation of Atmos / Spatial mixes, but I’m nowhere near ready to learn that, steep curve.
|
|
|
Post by marcl on Jun 3, 2022 9:49:27 GMT -5
I'm going to interject here with a sort of editorial comment... which may or may not be especially "on mark" depending on what you mean.
Dolby Atmos is an encoded audio DELIVERY format. As such, you DO NOT "record in Dolby Atmos". ("Dolby Atmos" is an output option on the recording console.)
You record multi-channel audio with as many microphones as you have... Mix and "engineer" those channels as you like... Then encode the finished mix into "an Atmos encoded delivery package"... (Which may involve additional "mixing options".)
At the point where the original recording is made Dolby Atmos isn't really involved yet. Although the recording engineer will probably make choices based on knowing that the final intent is a surround sound recording. (In many cases the result will be "different mixes" for stereo, and "regular surround", and perhaps Atmos.)
The point is that there would be no particular point in "recording the raw tracks in Atmos". At best what you would be doing would be recording the individual tracks... then encoding them via an Atmos encoder. Which would mean that you would LOSE flexibility because the encoder settings would be "baked in" to the recording. Instead a far better choice would be to simply record the original tracks... then mix them down and encode them later. I'm assuming that you're thinking of the possibility of making an Atmos equivalent of "a direct-to-disc recording". Unfortunately, with surround sound, there are many options, many of which cannot be accurately optimized in advance. That's a nice way of saying that, if you "just recorded what the microphones picked up", it's unlikely you would be very happy with the result. So it really makes far more sense to record all the original tracks... edit them... and encode them later at the END of the workflow.
Note that, with modern digital gear, it's not that cumbersome to record as many channels as you have coming in... Then you will have all of that information to take advantage of later. The "hard part", which is going to be the same both ways, is "handling" the microphones and inputs themselves. (You would actually be limiting your options by encoding the content as you record it initially.)
NOW... as a DELIVERY FORMAT... Dolby Atmos is an excellent choice... because it is designed to scale. Once you've created some Dolby Atmos content, you can play it on an Atmos capable system, or a 7.1 channel system, or a 5.1 channel system, or a stereo system, and it will work well on all of them.
Super cool! I am an hour North of Philly but would love to be involved with something like this. I spend an hour each day reading up on audio "geekery". Been doing that for the last 10 years and really enjoy it. I have often thought about the use of an Atmos based system to capture live recordings as you mentioned but the problem, as often is, limited market. What a cool tour that must've been. Scott
My poor choice of words ... what I SHOULD have said was to record the live performance with the intent of Atmos being the end result. And this was discussed a bit last night. For any recording where there is the potential to create an Atmos mix and deliver Atmos output, you certainly should do a lot of planning ahead of time so that the tracks are recorded in a way that will NOT limit your choices when producing the output. Now for some sources, there might be some preprocessing or premixing to produce the "object" that is used in the Atmos mix. It's also possible the bed could be recorded and mixed much like 5.1 always was.

BTW, something I learned was that with 128 potential inputs in Atmos, there can be up to a 9.1 bed mix, which leaves 118 objects for placement in the 3D space.

So regarding my idea for live recording, it most definitely would be conventional recording in the sense of sending a microphone to a track in a recorder ... no actual "Atmos" stuff at that point. But you would plan for the output. First and foremost, set up the instruments as they play live.

Let's use a jazz quartet as an example. Piano on the left at a 45-degree angle with the keyboard far left and lid open. Bass dead center. Saxophone in front and a bit to the right of the bass. Drum set on the right, also at a 30-45 degree angle, with hi-hat far right. Two mics on the piano, high and low register; one on the bass, and one on the sax; for the drums, one on the bass drum, two overheads above the cymbals, and a stereo Blumlein pair (90 degrees) in front of the set to capture the drums (no close mics on the drum set other than the bass drum).

Then more mics to capture the room. Located at a virtual listening position, an array of mics at two levels. At ear level, aimed at left and right walls, left and right rear corners, and a Blumlein pair pointing forward toward the band.
Then, about 4-6 ft above the listening position, four more mics pointed up 45 degrees toward the L/R front and L/R rear of the room, 90 degrees with respect to each other.

In the Atmos mix, the six ear-level mics become the bed. The four top mics become objects placed above the listening position. Then for the band ... place the sound from each mic as an object in the same relative location as it was during the recording.

My theory is that the instruments appear in front at eye level just as they were in the recording, and the bed and top objects recreate the room ambience as it would have been heard at the virtual listening position during the recording. And BTW, I'm picturing a small performance space with the musicians spread across about 20 ft and the listening position 15 ft away.

Best I could do: 15 minutes talking to my friend Mike in the parking lot last night, awake at 4am pondering ... and details just now.

P.S. Inspired by Morten Lindberg of 2L Records, who makes some amazing recordings. He captures 5.0 on site. He also somehow does some Atmos and Auro3D but I don't know how he captures the sources for those.
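The plan above maps neatly onto the input budget mentioned earlier: 128 renderer inputs, of which a 9.1 bed consumes 10, leaving 118 object slots. A hypothetical sketch of the session bookkeeping, loosely following the quartet layout; all names and coordinates are invented for illustration:

```python
# Hypothetical layout for the jazz-quartet live recording described above.
# Mic names and (x, y, z) coordinates are invented for illustration;
# the listener sits at the origin, x = right, y = front, z = up (meters).

MAX_INPUTS = 128
BED = ["L", "R", "C", "LFE", "Ls", "Rs", "Lrs", "Rrs", "Lw", "Rw"]  # a 9.1 bed

objects = {
    "piano_hi":   (-2.5, 4.5, 0.0),
    "piano_lo":   (-3.0, 4.5, 0.0),
    "bass":       ( 0.0, 4.5, 0.0),
    "sax":        ( 0.8, 3.5, 0.0),
    "kick":       ( 2.5, 4.5, 0.0),
    "oh_left":    ( 2.0, 4.5, 1.0),
    "oh_right":   ( 3.0, 4.5, 1.0),
    "blumlein_l": ( 2.2, 3.8, 0.0),
    "blumlein_r": ( 2.8, 3.8, 0.0),
    "top_fl":     (-1.0,  1.0, 1.8),  # the four upward-aimed room mics
    "top_fr":     ( 1.0,  1.0, 1.8),  # become objects above the listener
    "top_rl":     (-1.0, -1.0, 1.8),
    "top_rr":     ( 1.0, -1.0, 1.8),
}

remaining = MAX_INPUTS - len(BED) - len(objects)
print(f"{len(BED)} bed channels, {len(objects)} objects, "
      f"{remaining} object slots to spare")
```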
|
|
|
Post by AudioHTIT on Jun 3, 2022 9:49:56 GMT -5
Bear in mind that Apple's "spatial thing" is additional information and processing that is layered on top of Atmos. The Atmos mix has information about where each channel or object "appears in 3D space". The Apple "spatial" process then adjusts the viewpoint from which you're observing that 3D space.
To use a visual analogy... It's as if Atmos "stores the information to build a 3D model of what you're listening to"... Then the Apple spatial process "moves the location from which you're viewing the 3D model Atmos has created"... Knowing Apple I would assume that they're "locking the two together, taking care of all the messy details, then sealing it all in a proprietary package you can't mess with". It would be interesting to see whether they actually allow you to MAKE Atmos recordings. I'm betting that you cannot make Atmos recordings that can be played on Atmos compatible non-Apple gear. (Although that would be moot unless you have the hardware required to record multi-channel content anyway.)
I’ve started using Apple’s Logic Pro on my Mac for home recordings, I noticed the latest versions allow for the creation of Atmos / Spatial mixes, but I’m nowhere near ready to learn that, steep curve.

Currently there is quite a bit of Atmos material available on Apple Music, and if you have an Atmos system it triggers the Atmos processor. They use the term Spatial as an umbrella to cover both 3D / head tracking with headphones and earbuds, and true Atmos. Logic Pro can now create true Atmos tracks, though again I haven’t yet done it. These pages describe both the Atmos recording capabilities, and the Dolby plug-in required to give Logic these capabilities. I don’t see any reason you couldn’t create (and export) a mix that could be played on a non-Apple system or player, but again I haven’t tried.

support.apple.com/guide/logicpro/build-a-dolby-atmos-mix-lgcp713d1147/10.7.3/mac/11.0
support.apple.com/guide/logicpro/dolby-atmos-plug-in-lgcp8e75f0b5/10.7.3/mac/11.0
|
|
|
Post by marcl on Jun 3, 2022 9:53:34 GMT -5
Bear in mind that Apple's "spatial thing" is additional information and processing that is layered on top of Atmos.
The Atmos mix has information about where each channel or object "appears in 3D space". The Apple "spatial" process then adjusts the viewpoint from which you're observing that 3D space.
To use a visual analogy... It's as if Atmos "stores the information to build a 3D model of what you're listening to"... Then the Apple spatial process "moves the location from which you're viewing the 3D model Atmos has created"...
Knowing Apple I would assume that they're "locking the two together, taking care of all the messy details, then sealing it all in a proprietary package you can't mess with".
It would be interesting to see whether they actually allow you to MAKE Atmos recordings. I'm betting that you cannot make Atmos recordings that can be played on Atmos compatible non-Apple gear. (Although that would be moot unless you have the hardware required to record multi-channel content anyway.)
I’ve started using Apple’s Logic Pro on my Mac for home recordings, I noticed the latest versions allow for the creation of Atmos / Spatial mixes, but I’m nowhere near ready to learn that, steep curve.

There was some discussion about this last night too, the head-turning bit. One engineer asked David if he had a way to preview Apple Spatial, and the answer was not yet ... having to run a headphone output from the Mac in the equipment room ... and then some discussion of sending a ZIP file to your phone and listening to that, but it HAS to be a ZIP file ... yeah, some stuff not fully worked out.
|
|
|
Post by marcl on Jun 3, 2022 10:01:02 GMT -5
My experience working with companies like Dolby Labs is that their engineering teams often function in silos, meaning I am not surprised that the Atmos engineers had not completely familiarized themselves with Pro Logic and related capabilities except for what they could reuse for their solution. I am surprised about the story that a recording engineer was not aware of all the spatial recording techniques and experiments that have occurred over the last 50 years, although of course using Atmos to accomplish it on the front end is likely not a reality.

I'm sure they knew people did 5.1 mixes, but it was just so far from their experience mixing pop music. And in another discussion, the reaction was that almost nobody had a way to play back 5.1 music before, and it had to be mixed to channels, and so it was hard to do and deliver ... and nobody cared. But of course some people had 5.1 home theaters ... different people, in a different room in the house, too ...

But NOW ... Apple does the marketing, and really NOBODY is expected to have a 5.1.4 system to listen to ... it's marketed to ear buds and sound bars ... so NOW, since it's objects and not channels, and you don't need 10-12 speakers ... it's a thing! Not to mention the fact that as long as it SAYS Atmos or Spatial on the playback device, people are happy ... irrespective of whether they're actually hearing that.
|
|
KeithL
Administrator
Posts: 10,255
|
Post by KeithL on Jun 3, 2022 10:05:59 GMT -5
Dolby Atmos is a format intended for DELIVERING "immersive multi-channel spatial content". In other words, it assumes that you are starting with specific tracks or sounds, which you wish to position at certain spots in the room. If you read the documentation for the Dolby Atmos mastering applications you will see that this is what it's designed to do. The mastering apps literally allow you to place sounds in a 3D representation of a room - much like positioning objects in a 3D drawing program. (While Atmos mastering offers very flexible control over "what you put where" it is agnostic about the choices you make.)
If you chose to do so you could quite literally "place the audio track recorded by each microphone in its original position". (You could do this "by using all bed channels and no objects" or by using "pinned objects" or a combination of both.)
(This is what we mean when we talk about "pinned objects"... which most people generally think of as being a bad thing).
The other extreme would be to record each member of the orchestra with his or her own microphone.
(In this case each member of the orchestra "would be a separate object".)
(You could then "rearrange the seating after the fact" just like you can move objects around in your favorite drawing program.)
In practice virtually all Atmos recordings consist of some combination of the two. (For example you might "put the orchestra in the bed channels, but handle the soloist as an object, so you can move her around relative to the orchestra".)
The Dolby Surround Upmixer (DSU), which you might consider to be the modern replacement for ProLogic, has an entirely different purpose. It is designed to create additional channels based on an intelligent consideration of what you're starting out with. It looks at the channels you have and "tries to decide how to rearrange what's in them to make more channels". And it tries to do so "in a way that would be both pleasing and hopefully somewhat consistent with the original source".
It makes these decisions based on both "educated guesses" about the original source and "artistic guesses" about "what would sound nice".
So, for example, if you send it a stereo recording, with a singer's voice equally in both channels, it puts that singer in the center channel.
(Based on the idea that "it makes sense that's where that singer originally was and where she belongs".) (Of course this could be expanded to include "making up stuff that WASN'T originally there because it sounds cool".)
However my point is that, while there may be some overlap, these really are two entirely different goals.
ProLogic was originally considered to be an encoding format.
Multiple channels can be encoded by mixing them together in a way such that the decoder will handle them in a certain way. You are basically pre-processing the original content so that it "tricks" the decoder into putting out what you want.
This is somewhat "non-deterministic"... which is a fancy way of saying that you don't always get back exactly what you put in... but usually pretty close.
(But it is NOT the same as modern encoding formats - like Dolby Digital or Dolby Atmos - where you DO get back exactly what you're supposed to.)
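The "non-deterministic" point is easy to demonstrate with a bare-bones passive matrix in the ProLogic spirit. This is a heavy simplification of my own (real Dolby Surround also applies a 90-degree phase shift to the surround channel, omitted here): encode L/C/R down to two channels, decode, and note that the recovered center contains leakage from L and R rather than exactly what went in.

```python
# Minimal passive-matrix sketch (a simplification, NOT the real Dolby
# math -- actual Dolby Surround also phase-shifts the surround channel).
import math

G = 1 / math.sqrt(2)  # -3 dB mix coefficient

def encode(left, center, right):
    """Matrix L/C/R down into a two-channel Lt/Rt pair."""
    lt = left + G * center
    rt = right + G * center
    return lt, rt

def decode_center(lt, rt):
    """Passive decode: derive a center from what Lt and Rt share."""
    return G * (lt + rt)

lt, rt = encode(left=1.0, center=0.5, right=0.2)
recovered = decode_center(lt, rt)
# recovered is NOT 0.5: it also contains leakage from the L and R
# signals -- you don't get back exactly what you put in.
```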
My experience working with companies like Dolby Labs is their engineering teams often function in silos, meaning I am not surprised that the Atmos engineers had not completely familiarized themselves with Pro Logic and related capabilities except for what they could reuse for their solution. I am surprised about the story that a recording engineer was not aware of all the spatial recording techniques and experiments that have occurred over the last 50 years, although of course using Atmos to accomplish it on the front end is likely not a reality.
|
|
|
Post by marcl on Jun 3, 2022 10:20:21 GMT -5
Dolby Atmos is a format intended for DELIVERING "immersive multi-channel spatial content". In other words, it assumes that you are starting with specific tracks or sounds, which you wish to position at certain spots in the room. If you read the documentation for the Dolby Atmos mastering applications you will see that this is what it's designed to do. The mastering apps literally allow you to place sounds in a 3D representation of a room - much like positioning objects in a 3D drawing program. (While Atmos mastering offers very flexible control over "what you put where" it is agnostic about the choices you make.)
If you chose to do so you could quite literally "place the audio track recorded by each microphone in its original position". (You could do this "by using all bed channels and no objects" or by using "pinned objects" or a combination of both.)
(This is what we mean when we talk about "pinned objects"... which most people generally think of as being a bad thing).
The other extreme would be to record each member of the orchestra with his or her own microphone.
(In this case each member of the orchestra "would be a separate object".)
(You could then "rearrange the seating after the fact" just like you can move objects around in your favorite drawing program.)
In practice virtually all Atmos recordings consist of some combination of the two. (For example you might "put the orchestra in the bed channels, but handle the soloist as an object, so you can move her around relative to the orchestra".)
The Dolby Surround Upmixer (DSU), which you might consider to be the modern replacement for ProLogic, has an entirely different purpose. It is designed to create additional channels based on an intelligent consideration of what you're starting out with. It looks at the channels you have and "tries to decide how to rearrange what's in them to make more channels". And it tries to do so "in a way that would be both pleasing and hopefully somewhat consistent with the original source".
It makes these decisions based on both "educated guesses" about the original source and "artistic guesses" about "what would sound nice".
So, for example, if you send it a stereo recording, with a singer's voice equally in both channels, it puts that singer in the center channel.
(Based on the idea that "it makes sense that's where that singer originally was and where she belongs".) (Of course this could be expanded to include "making up stuff that WASN'T originally there because it sounds cool".)
However my point is that, while there may be some overlap, these really are two entirely different goals.
ProLogic was originally considered to be an encoding format.
Multiple channels can be encoded by mixing them together in a way such that the decoder will handle them in a certain way. You are basically pre-processing the original content so that it "tricks" the decoder into putting out what you want.
This is somewhat "non-deterministic"... which is a fancy way of saying that you don't always get back exactly what you put in... but usually pretty close.
(But it is NOT the same as modern encoding formats - like Dolby Digital or Dolby Atmos - where you DO get back exactly what you're supposed to.)
My experience working with companies like Dolby Labs is that their engineering teams often function in silos, meaning I am not surprised that the Atmos engineers had not completely familiarized themselves with Pro Logic and related capabilities except for what they could reuse for their solution. I am surprised about the story that a recording engineer was not aware of all the spatial recording techniques and experiments that have occurred over the last 50 years, although of course using Atmos to accomplish it on the front end is likely not a reality.

Yeah, we're leapfrogging each other but in basically the same direction! You can record the instruments and make them objects and still end up with something resembling the actual three-dimensional space ... IF you also capture the ambience.

The thing that has been lost in the fog of studio recordings – but quite ironically is still a topic of discussion when people listen to speakers and even amps and DACs – is the idea of imaging and soundstage as a representation of the recording. In a studio there is no "there" there when each source is a mono recording in isolation, arranged in a line left to right across the table from the mix engineer. There is neither actual ambient space, nor the combined simultaneous reflections of all of the instruments playing in that space.

I always thought of ProLogic and the current upmixers as mostly trying to recreate the ambience in those additional channels, in addition to the obvious anchoring of content common to L/R in the center channel and creating the LFE from the bass. What's interesting is that with some two-channel sources it works surprisingly well ... most often with movies, but also music. But when it doesn't work, it's usually very "produced" studio recordings of music where very little ends up in the other channels and the soundstage collapses to virtual mono in the center ... very odd!
|
|
KeithL
Administrator
Posts: 10,255
|
Post by KeithL on Jun 3, 2022 10:36:28 GMT -5
Yes... it's pretty cool... and there is a lot to it... and it sounds like you and your friend have an excellent handle on it. That sounds like an excellent strategy... and your friend is way ahead of me on the specific details of doing live recordings. (I think too many people try to overdo it... )
What I find in discussions is that many people seem to lack an understanding of what's involved in both recording and "remastering" in Atmos.
For example - start with an old WWII movie... with a naval battle scene... with planes flying around overhead. With LUCK the original tracks still exist... and there are one or two "fighter plane tracks"... So, when you remix it, you can put the planes into a couple of big objects overhead, and maybe move them around a bit... And, if those planes are in groups, or only one or two at a time, you can even try to match the audio objects with the visuals...
However, if you want to have different planes flying in different directions, you're going to need to create a whole bunch of new tracks. You don't HAVE separate tracks of the separate planes to stick into multiple separate objects. And the best you're going to synthesize is a blur (because no decoder is going to figure out which harmonics should go with which engine noise and get it perfectly right).
And, unless it's a real classic, and worth spending millions to fully re-do the audio, that re-master is going to be a compromise. (At the very least it's going to be labor intensive... it's going to take careful attention by a live human being to get it even mostly right.)
For live audio recording....
The one bit of advice I would add - for people who haven't done this sort of thing before - is that theory and practice are often quite different.
For example it makes intuitive sense that "you could take a good digital recorder, set it up with a pair of microphones, in a room, and make a decent stereo recording". However, in practice, it quite often fails miserably to work out that way. You frequently end up with something that sounds VERY different than what you remember hearing from a few feet away. And it sometimes takes an amazing amount of processing, adjusting, and correcting, to get anywhere close to the original reality.
(I've heard similar claims about women and makeup: "It takes a real expert to apply makeup so it looks like you aren't wearing makeup.")
There are also some (a bunch) of hidden limitations with Atmos... For example... those objects take up space... and bandwidth...
Use a bunch of objects for a short time and it's not a problem... But use too many objects, for too much of the time, and there's a real risk that you won't have room on the disc for your movie, or that you'll exceed the allowable total bandwidth for UHD Blu-Ray. Or you might end up having to decide whether to sacrifice video quality or audio quality so they both fit.
(Which means that you can use a lot more for an audio-only disc, where you don't have to worry about the space or bandwidth required by the video.) The documentation for the Atmos mastering apps is actually publicly available for anyone who wants to read all the fun details (you can Google for the current links).
(But it's pretty dense unless you actually have a use for it.)
It's also worth noting that the documentation applies to "cinema Atmos"... And the home version has OTHER limits and limitations...
(Many people seem to be under the impression that the same details apply to both... they do not.)
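To make the budget point concrete, here's a back-of-envelope sketch. Every number in it is an illustrative assumption - the per-stream bitrate and the ceiling are NOT Dolby or Blu-Ray specifications - but it shows why "too many objects for too much of the time" becomes a space problem.

```python
# Back-of-envelope sketch of the "objects cost bandwidth" point.
# All numbers here are ILLUSTRATIVE assumptions, not Dolby specs.

def audio_budget_mbps(bed_channels, active_objects, mbps_per_channel=0.75):
    """Rough total audio bitrate if every stream costs about the same."""
    return (bed_channels + active_objects) * mbps_per_channel

# Hypothetical ceiling for the audio track on a disc (assumption):
CEILING_MBPS = 18.0

few = audio_budget_mbps(bed_channels=10, active_objects=8)
many = audio_budget_mbps(bed_channels=10, active_objects=20)
print(few, few <= CEILING_MBPS)    # 13.5 True   -> fits
print(many, many <= CEILING_MBPS)  # 22.5 False  -> over budget
```

On an audio-only disc the ceiling is effectively much higher, since there's no video stream competing for the same bits - which is the point made above.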
I'm going to interject here with a sort of editorial comment... which may or may not be especially "on mark" depending on what you mean.
Dolby Atmos is an encoded audio DELIVERY format. As such, you DO NOT "record in Dolby Atmos". ("Dolby Atmos" is an output option on the recording console.)
You record multi-channel audio with as many microphones as you have... Mix and "engineer" those channels as you like... Then encode the finished mix into "an Atmos encoded delivery package"... (Which may involve additional "mixing options".)
At the point where the original recording is made, Dolby Atmos isn't really involved yet. Although the recording engineer will probably make choices based on knowing that the final intent is a surround sound recording. (In many cases the result will be "different mixes" for stereo, "regular surround", and perhaps Atmos.)
The point is that there would be no particular point in "recording the raw tracks in Atmos". At best what you would be doing would be recording the individual tracks... then encoding them via an Atmos encoder. Which would mean that you would LOSE flexibility, because the encoder settings would be "baked in" to the recording. Instead, a far better choice would be to simply record the original tracks... then mix them down and encode them later.
I'm assuming that you're thinking of the possibility of making an Atmos equivalent of "a direct-to-disc recording". Unfortunately, with surround sound, there are many options, many of which cannot be accurately optimized in advance. That's a nice way of saying that, if you "just recorded what the microphones picked up", it's unlikely you would be very happy with the result. So it really makes far more sense to record all the original tracks... edit them... and encode them at the END of the workflow.
Note that, with modern digital gear, it's not that cumbersome to record as many channels as you have coming in... Then you will have all of that information to take advantage of later. The "hard part", which is going to be the same both ways, is "handling" the microphones and inputs themselves. (You would actually be limiting your options by encoding the content as you record it initially.)
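As a trivial sketch of "just record all the channels raw": an interleaved multichannel WAV holds every feed with nothing baked in, leaving all mix and encode decisions for later. The channel count and file name here are hypothetical, and a real capture would come from an audio interface rather than the silence written below.

```python
# Sketch of the "record everything raw, encode later" workflow:
# capture N microphone feeds into one interleaved multichannel WAV.
# Channel count and file name are assumptions for illustration.
import struct
import wave

N_CHANNELS = 12    # e.g. 8 spot mics + 4 room mics (assumption)
SAMPLE_RATE = 48000
N_FRAMES = 480     # 10 ms of audio, just for the sketch

with wave.open("session_raw.wav", "wb") as w:
    w.setnchannels(N_CHANNELS)
    w.setsampwidth(2)          # 16-bit PCM
    w.setframerate(SAMPLE_RATE)
    frames = bytearray()
    for _ in range(N_FRAMES):
        # One interleaved frame: one sample per channel (silence here).
        frames += struct.pack("<%dh" % N_CHANNELS, *([0] * N_CHANNELS))
    w.writeframes(bytes(frames))

with wave.open("session_raw.wav", "rb") as r:
    print(r.getnchannels(), r.getnframes())  # 12 480
```

Nothing about this file commits you to Atmos, stereo, or anything else - which is exactly the flexibility argument above.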
NOW... as a DELIVERY FORMAT... Dolby Atmos is an excellent choice... because it is designed to scale. Once you've created some Dolby Atmos content, you can play it on an Atmos capable system, or a 7.1 channel system, or a 5.1 channel system, or a stereo system, and it will work well on all of them.
My poor choice of words ... what I SHOULD have said was to record the live performance with the intent of Atmos being the end result. And this was discussed a bit last night. For any recording where there is the potential to create an Atmos mix and deliver Atmos output, you certainly should do a lot of planning ahead of time so that the tracks are recorded in a way that will NOT limit your choices when producing the output. Now for some sources, there might be some preprocessing or premixing to produce the "object" that is used in the Atmos mix. It's also possible the bed could be recorded and mixed much like 5.1 always was. BTW something I learned was that with 128 potential objects in Atmos, there can be up to a 9.1 in the bed mix, which leaves 118 objects for placement in the 3D space.
So regarding my idea for live recording, it most definitely would be conventional recording in the sense of sending a microphone to a track in a recorder ... no actual "Atmos" stuff at that point. But you would plan for the output. First and foremost, set up the instruments as they play live.
Let's use a jazz quartet as an example. Piano on the left at a 45-degree angle, with the keyboard far left and lid open. Bass dead center. Saxophone in front and a bit to the right of the bass. Drum set on the right, also at a 30-45 degree angle, with hi-hat far right. Two mics on the piano, high and low register; one on the bass, and one on the sax; for the drums, one on the bass drum, two overheads above the cymbals, and a stereo Blumlein pair (90 deg) in front of the set to capture the drums (no close mics on the drum set other than the bass drum).
Then more mics to capture the room. Located at a virtual listening position, an array of mics at two levels. At ear level, aimed at left and right walls, left and right rear corners, and a Blumlein pair pointing forward toward the band.
Then about 4-6 ft above the listening position, four more mics pointed up 45 degrees toward the L/R front and L/R rear of the room, 90 degrees with respect to each other. In the Atmos mix, the six ear-level mics become the bed. The four top mics become objects placed above the listening position. Then for the band ... place the sound from each mic as an object in the same relative location as it was during the recording.
My theory is that the instruments appear in front at ear level just as they were in the recording, and the bed and top objects recreate the room ambience as it would have been heard at the virtual listening position during the recording. And BTW I'm picturing a small performance space with the musicians spread across about 20 ft and the listening position 15 ft away. Best I could do: 15 minutes talking to my friend Mike in the parking lot last night, awake at 4am pondering ... and details just now.
p.s. Inspired by Morten Lindberg of 2L Records, who makes some amazing recordings. He captures 5.0 on site. He also somehow does some Atmos and Auro3D, but I don't know how he captures the sources for those.
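Just to visualize it, here's that layout written out as a table of positions, like the 3D room view in the mastering tools displays. The coordinate convention and all the numbers are my rough assumptions for illustration - not Dolby's specification.

```python
# marcl's quartet layout as object positions, using an assumed
# normalized room convention: x = left(-1)..right(+1),
# y = back(-1)..front(+1), z = floor(0)..ceiling(1).
# All values are illustrative guesses, not a Dolby spec.

band_objects = {
    "piano_high": (-0.7, 0.9, 0.0),
    "piano_low":  (-0.5, 0.9, 0.0),
    "bass":       ( 0.0, 0.9, 0.0),
    "sax":        ( 0.2, 0.8, 0.0),
    "kick":       ( 0.6, 0.9, 0.0),
    "drums_oh_l": ( 0.5, 0.9, 0.3),
    "drums_oh_r": ( 0.8, 0.9, 0.3),
}

room_objects = {
    "top_front_l": (-0.5,  0.5, 1.0),
    "top_front_r": ( 0.5,  0.5, 1.0),
    "top_rear_l":  (-0.5, -0.5, 1.0),
    "top_rear_r":  ( 0.5, -0.5, 1.0),
}

# The six ear-level room mics would live in the bed; with a
# 10-channel bed, 128 total tracks leaves 118 object slots --
# far more than this session needs.
print(128 - 10, len(band_objects) + len(room_objects))  # 118 11
```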
|
|
KeithL
Administrator
Posts: 10,255
|
Post by KeithL on Jun 3, 2022 10:40:28 GMT -5
Yup... and the solution is often to ignore any original ambience... and simply "manufacture" new ambience after the fact.
Which sometimes works out quite well.. and sometimes doesn't.
The apps we have now have progressed way beyond the old days (like using a convolver to replicate the acoustics of a specific venue).
And, with studio tracks, and no original ambience, there may not be a choice.
I think the biggest mistake we see nowadays is when the "ambience" - whether natural or manufactured - "doesn't line up right".
You end up with the acoustic equivalent of trying to project a movie onto patterned wallpaper.
Dolby Atmos is a format intended for DELIVERING "immersive multi-channel spatial content". In other words, it assumes that you are starting with specific tracks or sounds, which you wish to position at certain spots in the room. If you read the documentation for the Dolby Atmos mastering applications you will see that this is what it's designed to do. The mastering apps literally allow you to place sounds in a 3D representation of a room - much like positioning objects in a 3D drawing program. (While Atmos mastering offers very flexible control over "what you put where" it is agnostic about the choices you make.)
If you chose to do so you could quite literally "place the audio track recorded by each microphone in its original position". (You could do this "by using all bed channels and no objects" or by using "pinned objects" or a combination of both.)
(This is what we mean when we talk about "pinned objects"... which most people generally think of as being a bad thing).
The other extreme would be to record each member of the orchestra with his or her own microphone.
(In this case each member of the orchestra "would be a separate object".)
(You could then "rearrange the seating after the fact" just like you can move objects around in your favorite drawing program.)
In practice virtually all Atmos recordings consist of some combination of the two. (For example you might "put the orchestra in the bed channels, but handle the soloist as an object, so you can move her around relative to the orchestra".)
The Dolby Surround Upmixer (DSU), which you might consider to be the modern replacement for ProLogic, has an entirely different purpose. It is designed to create additional channels based on an intelligent consideration of what you're starting out with. It looks at the channels you have and "tries to decide how to rearrange what's in them to make more channels". And it tries to do so "in a way that would be both pleasing and hopefully somewhat consistent with the original source".
It makes these decisions based on both "educated guesses" about the original source and "artistic guesses" about "what would sound nice".
So, for example, if you send it a stereo recording, with a singer's voice equally in both channels, it puts that singer in the center channel.
(Based on the idea that "it makes sense that's where that singer originally was and where she belongs".) (Of course this could be expanded to include "making up stuff that WASN'T originally there because it sounds cool".)
However my point is that, while there may be some overlap, these really are two entirely different goals.
ProLogic was originally considered to be an encoding format.
Multiple channels can be encoded by mixing them together in a way such that the decoder will handle them in a certain way. You are basically pre-processing the original content so that it "tricks" the decoder into putting out what you want.
This is somewhat "non-deterministic"... which is a fancy way of saying that you don't always get back exactly what you put in... but usually pretty close.
(But it is NOT the same as modern encoding formats - like Dolby Digital or Dolby Atmos - where you DO get back exactly what you're supposed to.)
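Here's a toy round trip that shows the "pretty close but not exact" behavior. This simplified matrix omits the real Pro Logic 90-degree surround phase shift and all adaptive steering - the coefficients are just the classic -3 dB values - so treat it as an illustration of the idea, not the actual Dolby math.

```python
# Toy matrix encode/decode round trip showing why passive matrix
# systems are "non-deterministic": you get back roughly, but not
# exactly, what you put in. Simplified -- NOT the real Pro Logic.
import math

K = math.sqrt(0.5)  # ~0.707, the classic -3 dB mix coefficient

def matrix_encode(l, r, c, s):
    """Fold 4 channels into 2 (real Pro Logic also phase-shifts
    the surround by 90 degrees; omitted here for clarity)."""
    lt = l + K * c - K * s
    rt = r + K * c + K * s
    return lt, rt

def passive_decode(lt, rt):
    """Naive decode: pass-through L/R, sum for center, difference
    for surround."""
    return lt, rt, 0.5 * (lt + rt), 0.5 * (rt - lt)

# Put a signal ONLY in the center channel...
lt, rt = matrix_encode(l=0.0, r=0.0, c=1.0, s=0.0)
l, r, c, s = passive_decode(lt, rt)
print(round(c, 3), round(s, 3))  # 0.707 0.0   center survives...
print(round(l, 3), round(r, 3))  # 0.707 0.707 ...but leaks into L/R
```

That crosstalk is exactly what the adaptive steering in real Pro Logic decoders exists to suppress - and why the result is "usually pretty close" rather than bit-exact like Dolby Digital or Atmos.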
|
|
|
Post by marcl on Jun 3, 2022 11:18:40 GMT -5
+ keithl Found a couple more photos in liner notes ... and I think this one shows how Lindberg does the height recording ... I knew I saw that somewhere! This is a Jared Sacks recording on Just Listen Records, doing 5.0 in a live space (The Duke Book ... just flugelhorn and drums).
|
|
ttocs
Global Moderator
I always have a wonderful time, wherever I am, whomever I'm with. (Elwood P Dowd)
Posts: 8,142
|
Post by ttocs on Jun 3, 2022 11:52:41 GMT -5
Loving this discussion! I love albums recorded with a live feel to them, where the musicians aren't segregated from each other. One great example of a track that captures the energy from the musicians as well as the whoops and hollers is Anonymus Two by Focus. At 8:50 the snare springs are tightened up near the end of a great bass solo, so the snare then resonates to the sound of the bass guitar. Some might say this was a mistake, but I thoroughly enjoy the liveness of it all.
Focus videotaped some of their recording sessions which show them all pretty close together in the same room playing off each other with lots of energy captured.
|
|
KeithL
Administrator
Posts: 10,255
|
Post by KeithL on Jun 3, 2022 12:04:17 GMT -5
From a quick look, it looks like "you can export your Atmos mix in a format which you can submit to Apple Music"... And Apple says that this format is also supported by other online music services... This should mean that you can create Atmos audio and submit it to online services... (I'm not clear on whether that content could be played on anything other than "devices that support Apple immersive content"... but it may well be possible.)
But that does not necessarily mean that you can play that file on something else... like your Blu-Ray player... so that part is doubtful. Also note that, in order to do so, you must AUTHOR the content.
So, for example, if you wanted a live recording, with real room ambience included, you would need to use multiple microphones at multiple locations. If you simply record a two-channel recording and then "convert it", you will simply have an Atmos file containing the output of whatever converter or upmixer you use. It's going to guess at the stuff it puts in the height channels, but it won't necessarily be what "should" be there...
In order to get that you would need to record that actual content, with actual microphones, then mix it into the Atmos mix - for real.
(Either way I assume it would work with their head positioning stuff).
Also note that, while online services will probably allow you to submit ORIGINAL content.... they may or may not allow you to upload converted content. (After all, since you don't own it, you don't have a legal right to do so... and they may or may not notice.)
Bear in mind that Apple's "spatial thing" is additional information and processing that is layered on top of Atmos. The Atmos mix has information about where each channel or object "appears in 3D space". The Apple "spatial" process then adjusts the viewpoint from which you're observing that 3D space.
To use a visual analogy... It's as if Atmos "stores the information to build a 3D model of what you're listening to"... Then the Apple spatial process "moves the location from which you're viewing the 3D model Atmos has created"... Knowing Apple I would assume that they're "locking the two together, taking care of all the messy details, then sealing it all in a proprietary package you can't mess with". It would be interesting to see whether they actually allow you to MAKE Atmos recordings. I'm betting that you cannot make Atmos recordings that can be played on Atmos compatible non-Apple gear. (Although that would be moot unless you have the hardware required to record multi-channel content anyway.)
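A toy sketch of that "move the viewpoint" idea: head-tracked rendering can be modeled as rotating every object's position around the listener by the head yaw, so the scene stays put in the room while the head moves. The coordinates and rotation convention here are illustrative assumptions, not Apple's or Dolby's actual processing.

```python
# Toy model of head tracking as a viewpoint rotation: rotate each
# object's (x, y) position opposite to the head turn, so the scene
# stays fixed in the room as the head moves. The coordinate and
# yaw-sign conventions are assumptions for illustration only.
import math

def rotate_for_head_yaw(objects, yaw_degrees):
    """Return object positions as seen from a head turned by yaw."""
    a = math.radians(-yaw_degrees)  # counter-rotate the scene
    out = {}
    for name, (x, y) in objects.items():
        out[name] = (x * math.cos(a) - y * math.sin(a),
                     x * math.sin(a) + y * math.cos(a))
    return out

scene = {"singer": (0.0, 1.0)}           # dead ahead of the listener
turned = rotate_for_head_yaw(scene, 90)  # listener turns 90 degrees

x, y = turned["singer"]
print(round(x, 3), round(y, 3))  # 1.0 0.0 -- singer now off to the side
```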
Currently there is quite a bit of Atmos material available on Apple Music, and if you have an Atmos system it triggers the Atmos processor. They use the term Spatial as an umbrella to cover both 3D / head-tracking with headphones and earbuds, and true Atmos. Logic Pro can now create true Atmos tracks, though again I haven't yet done it. These pages describe both the Atmos recording capabilities, and the Dolby plug-in required to give Logic these capabilities. I don't see any reason you couldn't create (and export) a mix that could be played on a non-Apple system or player, but again I haven't tried.
support.apple.com/guide/logicpro/build-a-dolby-atmos-mix-lgcp713d1147/10.7.3/mac/11.0
support.apple.com/guide/logicpro/dolby-atmos-plug-in-lgcp8e75f0b5/10.7.3/mac/11.0
|
|
|
Post by marcl on Jun 3, 2022 15:48:41 GMT -5
Morten Lindberg from 2L Music talks about his Atmos/Auro3D immersive studio with Genelec speakers. Interestingly, he talks about familiarizing composers and musicians with the immersive audio experience in advance, which may even affect the composition! And ... to the topic of live recording with immersive delivery in mind ... he shows and describes how he records to capture the sound of the music and the room. A small microphone array with the exact same time of arrival, center stage. Musicians are arranged around the microphone array. Balance is achieved by moving the musicians. EQ is accomplished by adjusting the angle of the microphones. "All the mixing is done at the recording." Mastering is simple level control and formatting for distribution.
2L Music: shop.2l.no/
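A rough sense of why moving the musicians works as a balance control: for an (approximately) point source in free field, level follows the inverse-distance law, so moving a player changes their level at the array by 20·log10(d1/d2) dB. The distances below are made-up illustrations, not anything from Lindberg's sessions.

```python
# Inverse-distance ("1/r") law sketch: why moving a musician
# relative to a coincident mic array acts as a fader. Assumes a
# point source in free field; distances are made-up examples.
import math

def level_change_db(old_distance_m, new_distance_m):
    """dB gain at the array when a source moves from old to new distance."""
    return 20.0 * math.log10(old_distance_m / new_distance_m)

# Halving a soloist's distance to the array adds about 6 dB:
print(round(level_change_db(4.0, 2.0), 1))  # 6.0
# Pushing a loud drum kit from 3 m back to 5 m trims about 4.4 dB:
print(round(level_change_db(3.0, 5.0), 1))  # -4.4
```

In a real room, reflections soften the 1/r relationship, which is presumably part of why getting the seating right by ear matters so much in this approach.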
|
|