stiehl11
Emo VIPs
Give me available light!
Posts: 7,261
|
Post by stiehl11 on Jan 6, 2013 13:02:34 GMT -5
I still don't see a real-world analog of the condition that you're describing. While I agree that the speaker has to send to your ears what was recorded, that is a function of the drivers, not their location. At 340 meters per second (sound covers roughly three football fields every second), your ears are not going to hear a timing difference of a few fractions of an inch. There may be a volume difference, but that is corrected by... volume.
One way to think about what I'm saying: take two instruments sitting or standing on the same plane/axis while playing. For this example I'll use two trumpets. Move the bell of one of the trumpets back half an inch. Can you tell the difference? Now, with that trumpet still in the second position, have it play slightly louder. Compare that to the original sound. Can you hear a difference between the first test and the third (assuming you could hear a difference between the first and the second)? You can repeat this as many times as you'd like with different distances.
Another way to think of this: you could divide the stage in the picture I showed above in half and have the right half move their chairs forward or back several inches. If there were an audible difference, all a good musician would have to do is play slightly louder or softer, and I would bet my paycheck there would be no difference in the sound reaching the listener in the audience or the microphone recording it.
The bell of a trumpet is no different from a speaker cone: sound emanates from it. No instrument produces a pure sine wave, so each tone played, whether from the bell of a lone instrument or a cone of your speaker, contains multiple frequencies. While soundstage is important, it is a function of mixing a multi-channel signal and of the reproduction of that signal by your speakers, not (necessarily) of the location of the individual drivers within your speaker.
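The half-inch offset in the trumpet example corresponds to a very small arrival-time difference. As a back-of-envelope check (assuming a speed of sound of 343 m/s; the figures here are illustrative, not taken from the post):

```python
# Back-of-envelope: how much extra travel time does a half-inch path
# difference add? Assumes c = 343 m/s (room-temperature air).
c = 343.0                      # speed of sound, m/s
offset_m = 0.5 * 0.0254        # half an inch, converted to meters
delay_s = offset_m / c         # extra travel time for the set-back trumpet
print(f"delay = {delay_s * 1e6:.1f} microseconds")  # ≈ 37.0 µs
```

At roughly 37 microseconds, the offset is far below the millisecond scale usually invoked when discussing audible discrete delays, which gives a sense of the magnitudes being argued about.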
|
|
|
Post by Boomzilla on Jan 6, 2013 13:35:07 GMT -5
Factors include distance and time. They're related because it takes time for sound to travel the distance. The original performance contains a wealth of time-linked information. This information includes the echoes from the walls of the original performance venue and the displacement of the original performers. When that information is REproduced by loudspeakers, providing that the microphone placement was simple, providing that the speakers aren't adding their own time displacement, and providing that the listening room isn't adding its own echoes (yes, it's lots of "providings"), then you can hear the actual venue where the recording was made.
This is significantly complicated by the fact that there is no "standard" microphone placement for audio recording, and by the fact that audio mastering studios can and do mix sound in ways that have no relation to the actual recording venue.
So to summarize, it's amazing that you can hear any stereo "image" at all in your listening room. That said, any opportunity to minimize damage to the signal is one that should be taken. Therefore, at least in theory, I guess that speakers should, indeed, be time-aligned to the extent possible.
|
|
|
Post by AudioHTIT on Jan 6, 2013 13:51:43 GMT -5
stiehl - You can't use the creation of music and sound in different planes to refute the reproduction of that music in a single plane. The combination of sounds created by those instruments all has to eventually reach two points (your ears), or in the case of a recording, some number of microphones. Whatever the signal and phase relationships between the instruments are, they must (as much as possible) be preserved throughout the recording/playback process and finally reproduced by your speakers. Granted, you may not be able to hear the difference with a single instrument (like your trumpet example), but when trying to reproduce the entire orchestra spatially correctly in your listening room, it's one of the elements of accurate reproduction and good imaging. Your speakers having flat drivers does seem a legitimate way to maintain phase/time alignment (all other things being equal). No poppycock!
|
|
stiehl11
Emo VIPs
Give me available light!
Posts: 7,261
|
Post by stiehl11 on Jan 6, 2013 13:52:12 GMT -5
Are we talking fractions of an inch, or are we talking yards/meters? Your statement is true in the example that rasputin gives, and over (relatively) large distances. While the difference can be measured, I find it hard to believe that it is audible to the listener. Again, if someone can show me an example of time alignment making a difference in anything other than speakers, I'll admit that I'm wrong. As a former performing musician, someone who ran the recording booth at college (for some of the performances I wasn't in), and someone who regularly attends live performances, you will have to give me a real-world analog to what you are proposing.
|
|
stiehl11
Emo VIPs
Give me available light!
Posts: 7,261
|
Post by stiehl11 on Jan 6, 2013 14:20:07 GMT -5
stiehl - You can't use the creation of music and sound in different planes to refute the reproduction of that music in a single plane. The combination of sounds created by those instruments all has to eventually reach two points (your ears), or in the case of a recording, some number of microphones. Whatever the signal and phase relationships between the instruments are, they must (as much as possible) be preserved throughout the recording/playback process and finally reproduced by your speakers. Granted, you may not be able to hear the difference with a single instrument (like your trumpet example), but when trying to reproduce the entire orchestra spatially correctly in your listening room, it's one of the elements of accurate reproduction and good imaging. Your speakers having flat drivers does seem a legitimate way to maintain phase/time alignment (all other things being equal). No poppycock!
Imaging comes down to the recording and the mix more than to your speakers. The situation you are referring to is real, and it is measurable, but it is not perceivable. It would be most noticeable (at the speaker level) in a mono recording, where the sound emanating from each speaker is exactly the same. At that point our ears start picking up on the difference in amplitude across the frequencies of the sound wave, as well as the attenuation of the sound wave... not which sound started first or last (time). As we know, in a stereo recording that is not the case. Imaging left and right is a function of the mix between two or more channels. Fore and aft is a function of volume. Case in point: a tuba hot-mic'd (or playing close to a mic) will sound different than a tuba playing several feet away from a mic.
When you compare the sound waves between the near and far tuba (easily done if you're using a computer to analyze them), the biggest difference you will see is an amplitude difference across the frequency range, with the middle frequencies having the smallest delta compared to the higher and lower frequencies, not a phase/time difference. This is how our ear understands distance and how it is reproduced on recordings. And my example, if it wasn't clear, used two trumpets... not one, just as most music reproduction uses two speakers. And it was a matter of half an inch, not several yards/meters. I'm also assuming that we are talking about two sound waves starting at the same time and hitting our ears at different times due to distance, versus two sound waves starting at different times (as a function of delay by the crossovers, for example).
|
|
|
Post by Chuck Elliot on Jan 6, 2013 17:03:11 GMT -5
Another thought in loudspeaker design that minimizes the interaction of drivers is to increase the crossover slope. A speaker with a 12 dB/octave crossover will allow the adjacent drivers to interact over a much wider bandwidth than one with a 24 dB/octave crossover. The downside is cost, as the steeper-slope crossover will be much more expensive.
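To put numbers on the slope difference, here is a minimal sketch comparing ideal Butterworth low-pass magnitudes one octave past the crossover point (the filter orders and the one-octave test point are illustrative choices, not from the post):

```python
import math

# Magnitude of an ideal nth-order Butterworth low-pass at a given
# frequency ratio f/fc. A 2nd-order filter rolls off at 12 dB/octave,
# a 4th-order filter at 24 dB/octave.
def butterworth_db(f_ratio, order):
    mag = 1.0 / math.sqrt(1.0 + f_ratio ** (2 * order))
    return 20.0 * math.log10(mag)

one_octave_up = 2.0  # one octave above the crossover frequency
print(f"12 dB/oct: {butterworth_db(one_octave_up, 2):.1f} dB")  # ≈ -12.3 dB
print(f"24 dB/oct: {butterworth_db(one_octave_up, 4):.1f} dB")  # ≈ -24.1 dB
```

One octave out, the steeper filter has already attenuated the out-of-band driver by roughly twice as many dB, which is why the overlap region where both drivers contribute audibly is so much narrower.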
|
|
klinemj
Emo VIPs
Honorary Emofest Scribe
Posts: 14,755
|
Post by klinemj on Jan 6, 2013 19:12:57 GMT -5
One thing I just noticed...the question from the OP was, "Does time alignment really matter?"
And while the images in the OP showed speakers, the question was broader to me, so that's how I answered the poll. Yet, we're all focused on speakers and whether or not minor differences in driver locations or other factors within a speaker would make a difference.
Even though I responded once using the term "speaker time alignment", when I responded to the poll - I was thinking in the broader sense.
In the broad sense of a total system, there's no doubt to me that time alignment matters... time alignment being: does the "system" (speakers and all, including the room...) result in the right sounds reaching my ears at the right time, including being balanced between my right and left ears? If the right volumes of given frequencies do not reach my ears at the right times, then there is no way I will be hearing what the recording is supposed to be sharing, and I will not hear the right spatial effects that should be presented.
Now, how many milliseconds does it have to be off for me to be able to notice? I have no idea.
So, why do I believe that it matters? I just went from a very good 2-channel system to a different, even better 2-channel system. The biggest change is clarity of soundstage. I can hear things separated in time and location far more clearly, even on recordings with which I am very familiar... and nothing is different about my room. And no traditional specs really seem to explain the order-of-magnitude change I have experienced. Yet I hear it. On faith, I think there's something different about the time accuracy, in addition to left/right dB accuracy, that explains it.
I may be wrong...I have no data, but the theory makes sense to me.
So... thinking more broadly than speakers and drivers: what do others think? Does time alignment/accuracy matter or not?
Mark
|
|
|
Post by Boomzilla on Jan 6, 2013 19:40:43 GMT -5
Time alignment (in the original question) was intended to apply to home loudspeakers only.
Misalignment of sound sources is a much different question. For symphony orchestra recording, multiple microphones are most often used (so I understand) and then synchronized while mixing down to two tracks. The "plane of the microphones" (if there is one) is the original capture of the sounds, which, as previously pointed out, are already "out of synch" because the instruments are all different distances from the microphones.
The recording process is further tampered with by the use of "spot" microphones that are used to capture the soloist, etc. The mixing board then has the chance to mangle the signal with equalization (often different for various microphones), time alteration (the recording engineer may opt to delay various feeds in order to create a mix that the engineer deems more coherent), and phase interference between the different feeds mixed to the same track on the final "stereo" output.
The mastering engineer then has the option to mangle the sound again as it is equalized for final distribution. It is possible to create completely synthetic "original acoustics" completely unrelated to any real space. An example of this is "Q-Sound" as evidenced by Madonna's "Immaculate Collection" CD.
None of this can be controlled from the consumer end. What you pay for is what you get.
In your own playback space, further damage is done to the sound... The individual woofers, midrange drivers, and tweeters in the loudspeakers induce phase errors in the sound. Think of this as a ratio issue. In the recording hall, the few feet between instruments is often dwarfed by the distances from the instruments to the microphones. In the home, however, the inches of distance between the plane of the drivers is a relatively larger proportion of the distance between the speakers and the listening position.
Further, the echoes of the listening room are always superimposed over the recorded sound of the original acoustic. Because these patterns are completely unrelated, any listening room echo is always destructive to the recorded sound of the original acoustic (if any is there).
It seems to be the consensus of speaker designers (and voters on this forum) that having the drivers of the loudspeaker time aligned does less damage to the program material. Although it is probably so, the plethora of other variables that I've just mentioned ensure that even perfectly time-aligned loudspeaker pairs are no guarantee of an accurate window on the original sound.
|
|
|
Post by AudioHTIT on Jan 6, 2013 19:46:55 GMT -5
One thing I just noticed...the question from the OP was, "Does time alignment really matter?" And while the images in the OP showed speakers, the question was broader to me, so that's how I answered the poll.
Funny, I use the ProBoards iPad app and didn't even realize this was a poll; guess I need to switch over to Safari and vote. Now I better understand DYohn's comment.
|
|
|
Post by Boomzilla on Jan 6, 2013 19:51:55 GMT -5
Time-alignment IS phase... Agreed!
|
|
|
Post by Jim on Jan 6, 2013 20:23:58 GMT -5
I think generally speaking yes.. I don't think a ms or two will be audible, but I think generally speaking, it does matter.
Aligning to within inches? I'm not sure if it matters THAT much (like sloped baffles to time align each and every driver).
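For a sense of scale between Jim's "a ms or two" and driver offsets measured in inches, a quick conversion (assuming a speed of sound of 343 m/s; the offsets below are hypothetical examples, not from the post):

```python
# Convert driver-offset distances into arrival-time differences.
# Assumes c = 343 m/s; the offsets are illustrative examples.
c_m_per_s = 343.0
inches_per_ms = c_m_per_s / 1000.0 / 0.0254   # ≈ 13.5 inches of path per ms
for offset_in in (0.5, 1.0, 2.0):             # plausible voice-coil staggers
    delay_us = offset_in / inches_per_ms * 1000.0
    print(f"{offset_in} in -> {delay_us:.0f} µs")  # 37, 74, 148 µs
```

Even a two-inch stagger sits well under a fifth of a millisecond, so inch-scale alignment and millisecond-scale audibility are separated by roughly an order of magnitude.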
|
|
stiehl11
Emo VIPs
Give me available light!
Posts: 7,261
|
Post by stiehl11 on Jan 6, 2013 20:31:51 GMT -5
Time alignment (in the original question) was intended to apply to home loudspeakers only. Misalignment of sound sources is a much different question. For symphony orchestra recording, multiple microphones are most often used (so I understand) and then synchronized while mixing down to two tracks. The "plane of the microphones" (if there is one) is the original capture of the sounds, which, as previously pointed out, are already "out of synch" because the instruments are all different distances from the microphones. The recording process is further tampered with by the use of "spot" microphones that are used to capture the soloist, etc. The mixing board then has the chance to mangle the signal with equalization (often different for various microphones), time alteration (the recording engineer may opt to delay various feeds in order to create a mix that the engineer deems more coherent), and phase interference between the different feeds mixed to the same track on the final "stereo" output. The mastering engineer then has the option to mangle the sound again as it is equalized for final distribution. It is possible to create completely synthetic "original acoustics" completely unrelated to any real space. An example of this is "Q-Sound" as evidenced by Madonna's "Immaculate Collection" CD. None of this can be controlled from the consumer end. What you pay for is what you get. In your own playback space, further damage is done to the sound... The individual woofers, midrange drivers, and tweeters in the loudspeakers induce phase errors in the sound. Think of this as a ratio issue. In the recording hall, the few feet between instruments is often dwarfed by the distances from the instruments to the microphones. In the home, however, the inches of distance between the plane of the drivers is a relatively larger proportion of the distance between the speakers and the listening position. Further, the echoes of the listening room are always superimposed over the recorded sound of the original acoustic. Because these patterns are completely unrelated, any listening room echo is always destructive to the recorded sound of the original acoustic (if any is there). It seems to be the consensus of speaker designers (and voters on this forum) that having the drivers of the loudspeaker time aligned does less damage to the program material. Although it is probably so, the plethora of other variables that I've just mentioned ensure that even perfectly time-aligned loudspeaker pairs are no guarantee of an accurate window on the original sound.
You and I are in agreement on a number of things. However, where we differ, you do not have a real-world analog of what you're talking about. Your position that sound sources are different from speakers is erroneous, in that anything that produces a sound is a sound source; speakers are a sound source, and an orchestra or any group of instruments is analogous to speakers. Let me use your assessment to make my point (which you almost do in your post): assume that you have two microphones positioned along the same plane in front of a group of "sound sources", as you put them. Those microphones pick up and record the accumulated sound that comes to them without altering it. That sound is recorded raw and played back on your two-channel system, with your "sound sources" (otherwise known as speakers) recreating exactly what was picked up by the microphones from the "sound sources" in front of them, with no other aspects of physics altering your perception. Now, my understanding of the question is: if I were to move one of those "sound sources", i.e. speakers, back (or forward; take your pick), would it affect the sound?
If you were to measure the sound waves produced by your "sound sources", you would notice a difference in phase, provided that what you were listening to was exactly identical from each "sound source" to begin with (which, with a stereo recording, it would not be). However, just because you can measure it does not always mean that you can hear it. My position is that, depending on the distance you move one "sound source" relative to the stationary one, and provided both are reproducing the exact same sound wave at the exact same time (which they wouldn't in a stereo recording), you are not likely to notice a difference. Why? Because, using A440 as my example, I feel you would have to move one of your "sound sources" about 16 inches fore or aft of the stationary one to create an anti-wave (180 degrees out of phase), provided you were listening to a mono track of a 440 Hz sine wave and disregarding all other aspects of physics in your room or of the sound wave. It's those physics that did not allow the one speaker at Emofest to cancel out the other speaker running 180 degrees out of phase with it. While I appreciate where you are coming from in wanting to hear at your ears exactly what the mics were picking up at the point of recording, physics will prevent you from ever achieving that short of headphones (and even those won't do it completely). And depending on what you're listening to and how catty-wampus your "sound sources" are, you will likely notice little to no improvement from altering your "sound sources'" initiation points. Again, I will change my opinion if you or anyone else can give me a real-world analog of time alignment of "sound sources".
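The 16-inch figure can be sanity-checked: the path offset that puts a pure tone exactly 180 degrees out of phase is half its wavelength. A minimal check, assuming a speed of sound of 343 m/s:

```python
# Half-wavelength of A440: the path offset that inverts the phase of a
# pure 440 Hz tone. Assumes c = 343 m/s (room-temperature air).
c = 343.0                            # speed of sound, m/s
f = 440.0                            # A440, Hz
half_wavelength_m = c / (2.0 * f)    # half a wavelength, in meters
half_wavelength_in = half_wavelength_m / 0.0254
print(f"{half_wavelength_in:.1f} inches")  # ≈ 15.3 inches
```

So the post's "16 inches" is in the right ballpark; the exact number depends slightly on the speed of sound assumed.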
|
|
|
Post by Boomzilla on Jan 6, 2013 20:54:43 GMT -5
OK - We can agree to some extent. I've owned "time aligned" loudspeakers (Magneplanars, Dahlquist DQ-10s, and Thiels). I've also owned loudspeakers that gave little or no consideration to time alignment (any box speaker, and my current La Scalas, for instance). BOTH TYPES created a noticeable and palpable center image in my listening rooms. With both types, I could easily tell when the speakers were wired out of phase with each other. Yes, they still produced sound even when wired out of phase, but no clearly defined center image.
Yes, the La Scalas have drivers staggered by 16 inches or more. Do I detect cancellation at the crossover points? No.
And now the critical question: Could I detect the difference between a time-aligned loudspeaker and a box loudspeaker with a blindfold on (assuming identical frequency response curves)? No, I don't think I could. Posters on this and other forums claim that the difference is significant. I don't really know.
I can say that the La Scalas, in my living room, image better than my time-aligned Thiel CS1.5 and CS3.6 loudspeakers. I think that the difference, though, has more to do with dispersion than time alignment.
Would the La Scalas image better if time aligned? It's an easy test to do. I'll tell you when it's done what I heard.
Cheers - Boomzilla
|
|
stiehl11
Emo VIPs
Give me available light!
Posts: 7,261
|
Post by stiehl11 on Jan 6, 2013 21:09:59 GMT -5
My speakers are phase aligned. As you and I both agree with DYohn that phase and time are synonymous, my box speakers can be considered similar to the speakers you listed in terms of being "time aligned"... even though they are "box speakers" (it's also where they get their name). I have owned the speakers in my signature for about 8 months now, and prior to that I had their little brother since 2000. I can say that the biggest improvement to my soundstage was switching from AVR amplification to separate amps. Those who know me well on the forum know that I constantly (and, some might add, almost annoyingly) promote my speakers, because I can't find speakers that give me not only the soundstage but also the frequency response that mine do without spending 2~3 times as much money. Could that be because they are phase/time aligned? Considering that the company that makes my speakers holds the patent for the soft dome tweeter, and how ubiquitous soft dome tweeters are, you would think other manufacturers would have gotten on their bandwagon by now (they've had the technology in my speakers since the '80s) using their cones/drivers and crossovers.
|
|
|
Post by Boomzilla on Jan 6, 2013 21:50:52 GMT -5
Thanks - I was referring to "box speakers" as those having all drivers mounted on a common surface (not parallel to the listening position) but with voice coils in differing planes from that surface.
|
|
|
Post by Boomzilla on Jan 8, 2013 5:36:03 GMT -5
|
|
|
Post by Jim on Jan 8, 2013 11:20:46 GMT -5
Interesting paper. Thanks for posting that.
I'll have to re-read it when my brain is fully functioning.
|
|
|
Post by Chuck Elliot on Jan 8, 2013 11:44:42 GMT -5
Boom,
Very interesting, thanks for posting!
|
|
|
Post by Boomzilla on Jan 8, 2013 14:14:50 GMT -5
I used to get the Klipsch newsletter "The Dope From Hope." This was one of the papers posted. I didn't keep my copies, but now wish I had.
|
|
|
Post by Chuck Elliot on Jan 8, 2013 14:22:00 GMT -5
|
|