KeithL
Administrator
Posts: 10,274
|
Post by KeithL on Jan 31, 2022 12:31:51 GMT -5
It occurs to me that you want to put the microphone INSIDE the head... Then add simulated ear canals and, more importantly, outer ear structures. (You really want two microphones, one inside each ear canal, at the correct depth.)
Chasing some ideas on interaural crosstalk cancellation and the effect of the head-related transfer function on imaging, and maybe even Dirac corrections. But wait, wait ... what shall we name it? And no, it can't be Yorick because that's for skulls. Attachment Deleted
|
|
|
Post by marcl on Jan 31, 2022 14:56:40 GMT -5
keithl ttocs fbczar Yeah, using the term HRTF was maybe an overstatement. What I think I meant was using the physical presence of a relatively dense head shape to block sound from the opposite side of the mic, much like a real head blocks it. And maybe it simulates the effect of the sound that does wrap around the head somewhat from the other side. As I'm thinking through what I'm trying to accomplish, I'm also trying to figure out how to express it. I'm trying to use the head in a very basic way to block sound from the opposite side of the room. And that's why - at least for a first pass at a simple case - I'm not worried about mics IN the head, binaural mics, or pinnae.

In my first test I measured the left and right channels with the mic vertical in three positions around the head of "Agent 13": left ear, nose, right ear, all in the same horizontal plane. I got pretty consistent results for each ear and each channel. Not much difference below 800Hz. Above 800Hz, each channel measures a little louder at its corresponding ear, a little lower at the opposite ear, and somewhere in between at the nose. So it's to be expected that a channel would measure lower on the opposite side at the mid-high frequencies. What is a little less expected is that the near-side measurement is higher than the nose. The simplest explanation would be a difference in level due to the nose being about 4" forward of the ear position. Another explanation would be that there is some reflection arriving at the nose position that is blocked at the near ear position, and that reflection partially cancels some higher frequencies at the nose. Or something like that. But since I observe a difference, I can now move some absorbers around the room and see if I can make it go away. If I can, I prove that a reflection is causing the difference. If I can't make it go away, I figure it's just a positional difference. Etc ....

I'll see what I can observe, and then see if I can do something external to the head and mic in the room that changes what I observe. And if I can, then think about what that means ... and probably put that in place temporarily and see if I HEAR a difference. Etc. I'm also postulating that the Rooze arrangement allows some back wave to bounce around in front of the speakers. If so, I may be able to improve imaging by blocking those reflections. I did find one such issue a while back, tracked it down to a specific reflection, added an absorber, and made a definitive improvement in imaging that could also be confirmed on the ETC measurement.
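To make the band-level comparison concrete, here is a rough sketch of how those position-to-position differences could be pulled out of the measurements, assuming each one was exported from REW as plain "frequency SPL" text ("Export measurement as text"). The filenames and the 100-800 Hz / 800-8000 Hz band limits are my own illustrative choices, not part of the actual test:

```python
# Minimal sketch: band-averaged SPL for the three mic positions around "Agent 13".
# Assumes each measurement was exported from REW as two-column "frequency  SPL(dB)"
# text; the filenames below are hypothetical.
import numpy as np

def load_rew_export(path):
    """Return (freqs, spl) arrays from a REW text export, skipping comment lines."""
    freqs, spl = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(('*', '#')):
                continue
            parts = line.split()
            freqs.append(float(parts[0]))
            spl.append(float(parts[1]))
    return np.array(freqs), np.array(spl)

def band_average(freqs, spl, lo, hi):
    """Average SPL (in dB) over [lo, hi) Hz - a rough level comparison, not a power sum."""
    mask = (freqs >= lo) & (freqs < hi)
    return spl[mask].mean()

for pos in ("left_ear", "nose", "right_ear"):
    f, s = load_rew_export(f"right_channel_{pos}.txt")   # hypothetical file names
    below = band_average(f, s, 100, 800)
    above = band_average(f, s, 800, 8000)
    print(f"{pos:10s}  100-800 Hz: {below:5.1f} dB   800-8000 Hz: {above:5.1f} dB")
```

If the head is doing what's hoped, the 800-8000 Hz numbers should separate (near ear highest, far ear lowest, nose in between) while the 100-800 Hz numbers stay close together.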
|
|
KeithL
Administrator
Posts: 10,274
|
Post by KeithL on Jan 31, 2022 15:34:44 GMT -5
I'm not sure if I can agree with your logic... The real catch is that we can reasonably expect the "near ear" to hear pretty much what the microphone would pick up without the head being there... And, for a given channel, we don't have any option that allows us to affect what reaches the opposite ear from that speaker. (These interactions are a lot more complex than just summing it all together so it comes out right...) And don't forget that you're going to have reflections off the head reaching the microphone from "the back" - and potentially cancelling with the direct information...
And those will vary depending on the acoustic properties of the head and its distance from the microphone...
Let's assume we're talking about sound from the right front speaker... Without the head the microphone is going to be measuring a combination of direct and reflected sound... with most of the reflections arriving rather later since they have to reach the room walls and return. With the head, and the microphone to the right side, the microphone will be receiving the direct sound and the reflected sound, but SOME of the reflected sound from the wall will have to go around the head. This may provide a pretty good approximation of what the RIGHT EAR would hear. But it's not going to be an accurate representation of what the LEFT ear hears. And, when you're measuring the RIGHT FRONT SPEAKER, you want to measure what BOTH ears will hear... not just the right ear.
If we were doing true binaural we would have the option of playing the proper share of the content from the right front channel in the left ear. But we don't have that sort of control with speakers (and you're still talking about an actual cross-feed of audio between channels).
And you would probably also need active processing like delays and frequency response shaping.
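For what it's worth, here is a minimal sketch of the kind of active processing being described - a headphone-style crossfeed that sends a delayed, attenuated, low-passed copy of each channel to the opposite ear. The function name and every parameter value (0.3 ms delay, -4 dB, 700 Hz shading corner) are illustrative guesses of mine; real HRTF processing is far more involved, and this is not something a speakers-in-a-room setup can do by itself:

```python
# Sketch of a basic crossfeed: each output ear gets its own channel plus a delayed,
# attenuated, low-passed copy of the opposite channel. All parameter values are
# illustrative only.
import numpy as np

def simple_crossfeed(left, right, fs, delay_ms=0.3, atten_db=-4.0, lp_hz=700.0):
    """Return (left_out, right_out) with a crude interaural delay/level/shading model."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    delay = int(round(delay_ms * 1e-3 * fs))      # crude interaural time difference
    gain = 10.0 ** (atten_db / 20.0)              # crude interaural level difference
    alpha = np.exp(-2.0 * np.pi * lp_hz / fs)     # one-pole low-pass ~ head shading

    def feed(src):
        lp = np.empty_like(src)
        acc = 0.0
        for i, x in enumerate(src):               # y[n] = (1 - a)*x[n] + a*y[n-1]
            acc = (1.0 - alpha) * x + alpha * acc
            lp[i] = acc
        delayed = np.zeros_like(src)
        delayed[delay:] = lp[:len(src) - delay]
        return gain * delayed

    return left + feed(right), right + feed(left)
```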
IF ANYTHING, and considering Dirac, I might consider taking the first measurement WITHOUT THE HEAD....
Then perhaps include the head for measurements to the left and right. This would allow you to "get the head in the picture" without interfering with the initial left/right symmetry determination.
(Remember that Dirac is going to use that first measurement to mostly determine left/right symmetry, and you DO NOT want to mess with that.)
|
|
|
Post by marcl on Feb 1, 2022 11:20:34 GMT -5
Interesting thoughts Keith, thank you! I definitely will rerun the tests with Agent 13 in place and then with it removed, to see how that affects each position. Of course, room reflections are complex. My idea with Agent 13 is to hopefully reduce the complexity in a particular way and then see what happens. Sure, DSP would be an impractical solution to anything I find. I'm hoping that if I find anything, there can be a "mechanical" solution that improves the listening experience. Most likely this would involve permanent placement of absorbers to block specific reflections. It could also involve modification of how I do Dirac measurements, possibly using Agent 13 in place for some or all measurements.
My Rooze arrangement presents a challenge to Dirac, and over several months last year I found that temporarily adding absorbers to block the back wave from the L/R speakers resulted in more accurate Dirac measurements and results. The absorbers are in place for the MLP measurement so Dirac gets the impulse and delay corrections right. It works well. So ... some Agent 13 measurements today .... then a full Dirac recal with DL 3.2.2 and FW 2.5 tomorrow when I have a quiet house. “Learn the rules well, so you can break them properly.” Dalai Lama
|
|
|
Post by marcl on Feb 1, 2022 14:28:59 GMT -5
I just did 30 measurements. I measured left and right ear, nose, and back of head, with and without the head. All of the measurements were consistent with measurements I did previously, as well as with my hypotheses on why and how. VERY brief observations:
- For a given position, there is a difference between the measurement with and without the head. My hypotheses as to why the differences occur are confirmed
- ETC measurement confirms that the head is blocking a reflection at 8ms for some measurements
- 8ms reflection confirmed to be coming from mostly rear wave reflection from opposite side wall
- Absorbers in-line with speakers removes 8ms reflection
- Back of the leather couch causes a very clear reflection at about 1ms for the ear measurements (covering couch removes reflection)
Much more can be observed from within the data, so I have to look at it some more. One thing it already confirms is that I can get a better impulse response with these 2x4ft absorbers in line with the speakers, so I will put them there for Dirac calibration and then compare with and without after calibration. Remember, the Rooze configuration has the L/R speakers at 45 degrees to the side wall, bouncing the front wave to the MLP ... not applicable to most "normal" people's setups. Lots more measurement work for Agent 13 another time! Attachment Deleted
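As a quick sanity check on those ETC delays: the arrival time of a reflection maps directly to extra path length relative to the direct sound, so 8 ms works out to roughly 9 ft of additional travel (plausible for a rear-wave bounce off the opposite side wall) and 1 ms to about 1 ft (plausible for the couch back right behind the mic). A tiny sketch of the arithmetic:

```python
# Convert an ETC reflection delay into the extra path length it implies.
SPEED_OF_SOUND = 343.0          # m/s at roughly 20 C

def extra_path(delay_ms):
    """Extra travel distance, in metres and feet, for a reflection delayed by delay_ms."""
    metres = SPEED_OF_SOUND * delay_ms / 1000.0
    return metres, metres * 3.281

for ms in (8.0, 1.0):
    m, ft = extra_path(ms)
    print(f"{ms:3.0f} ms -> {m:.2f} m ({ft:.1f} ft) longer path than the direct sound")
```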
|
|
|
Post by marcl on Mar 30, 2022 7:17:39 GMT -5
Windows 11 and REW. You need to install the latest ASIO4ALL for it to work in REW on a Windows 11 PC. The Java driver works for measuring stereo only, but if you want to measure 7.1 using an HDMI output and the ASIO4ALL driver you need the Feb 16 version 2.15: www.asio4all.org/
Also, I found (and maybe this is just my new Dell XPS 15 9510) I had to set the HDMI input on the XMC-2 to V1.4 in order for the HDMI output of the USB-C hub to work.
|
|
|
Post by marcl on Jun 11, 2022 7:50:50 GMT -5
Subtitle: Psycho? or Acoustics? This post is about some experimenting I did this week based on last week's Dirac calibration that sounds amazingly better than any I've had in the past. I also made a small change to room treatment (replacing an absorber with a diffuser) which contributed to better imaging and soundstage.

The experiment was simple: I changed my rear surround B1+ speakers from Small, crossing at 100Hz, to Large. I did this because - as they are placed on wall brackets and with the resonances of my room - they appear to respond down to 30Hz as measured by Dirac. When I bump the target curve flat they measure in-room down to 30Hz before rolling off. Now I don't have delusions of chest-thumping dynamics from them, but I figured they meet the Dolby spec for a surround speaker so I'd give it a try. The added consequence of this change is that the now-Large rear surrounds join my Large L/R fronts in doing Bass Management for the other small speakers in the room. And the consequences of THIS are the subject of this post.

Focusing on my Small center and surrounds, here is their response with No Smoothing. I'm including the No Smoothing plot just for completeness. We never really use No Smoothing because there's just too much distraction. But you can see the problem right away ... YIKES! Look at those deep spiky cancellations at 38 and 62Hz! Can we actually HEAR that? Well, this is the question of the day.

For reference, here are the definitions of Variable, Psychoacoustic, and ERB Smoothing from REW: www.roomeqwizard.com/help/help_en-GB/html/graph.html
Simply put, Variable is best for equalization and evaluating low frequency resonances; Psychoacoustic is the REW version of smoothing to simulate what we hear; ERB (Equivalent Rectangular Bandwidth) is the method referenced in academic literature as simulating the way our hearing system works. So what happens if I apply these smoothing algorithms to this data? [Plots: Variable, Psychoacoustic, ERB]

With only 1/48 octave smoothing below 100Hz, the Variable looks predictably a lot like the No Smoothing plot ... still scary. But Psychoacoustic looks great! I bet any of us would be happy with a response like that. And with ERB, well, the bass is so flat ... those spikes just are not an issue at all, are they? But are they? I looked around whatever references I could find and got some conflicting answers. Amir at Audio Science Review did a video on frequency response and his conclusion most definitely does NOT agree that we can ignore those dips below 100Hz. He suggests dips on the order of 10Hz or more wide would definitely be audible: www.youtube.com/watch?v=TwGd0aMn1wE
Academic literature supports the use of ERB, but does not specifically get into audibility or perception with respect to music reproduction - at least not that I could find. Toole refers to ERB twice in his book and appears to agree with its relevance above 100Hz, but more in the sense that the topic he's discussing is the audibility of comb filtering, which he concludes is not very audible above 100Hz, especially in delayed reflections. I'll summarize what I believe to be the relevant conclusions from his Psychoacoustics chapter: peak resonances are more audible than cancellations; narrow resonances are more audible with broadband noise or spectrally complex music (orchestral) than with jazz or pop music; narrow resonances that are visible in a steady-state measurement may not be audible in music because the resonance may never reach steady state. BTW ...
another consequence of this configuration is - for reasons I can't explain - the +4dB Bass Management boost for small-speaker bass is gone! Bottom line .... I'm listening with the system configured this way now. So far everything sounds amazingly good! I can't say if it sounds better than with the rear surrounds set to Small ... but there most definitely is NOT anything that suggests it sounds obviously worse ... so far. Note: I'll update this post if I find other enlightenments.
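For anyone curious why the ERB plot is so forgiving down low: the equivalent rectangular bandwidth per Glasberg & Moore (1990) is about 24.7 · (4.37 · f/1000 + 1) Hz, so the auditory filter around those 38 and 62Hz dips is roughly 30Hz wide. (That REW's ERB smoothing follows exactly this standard formula is an assumption on my part.) A quick sketch:

```python
# ERB of the auditory filter, per Glasberg & Moore (1990); it is an assumption
# that REW's ERB smoothing uses exactly this formula.
def erb_hz(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter centred at f_hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (38.0, 62.0):
    bw = erb_hz(f)
    print(f"{f:5.0f} Hz: ERB = {bw:4.1f} Hz  ({bw / f:.0%} of the centre frequency)")
```

A dip only a few hertz wide gets averaged across that whole band, which is why it all but disappears after ERB smoothing - whether or not it is audible is the separate question discussed above.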
|
|
|
Post by audiobill on Jun 11, 2022 10:50:32 GMT -5
You are using a room/target curve for actual listening, no?
Bill
|
|
|
Post by marcl on Jun 11, 2022 11:01:58 GMT -5
All of my targets in Dirac seek to achieve a flat response and that is my goal for listening. Now sometimes after measuring the room response I tweak the Dirac curve if I see a very broad peak or dip. But the goal is to be flat at the MLP. For example ... this is the response at the MLP with Psychoacoustic Smoothing. LFE is subs-only. Bass Management goes to the fronts and rear surrounds only.
|
|
|
Post by audiobill on Jun 11, 2022 11:06:48 GMT -5
Interesting.....
|
|
|
Post by marcl on Jun 18, 2022 1:40:17 GMT -5
I got a pair of Sennheiser HD280Pro headphones, mostly so I could listen to podcasts and YouTubes at my desk without disturbing my wife. But then I had an idea ... Call me crazy .... but I thought, what if I use the signal generator in REW and plot what I can hear in each ear through the headphones?

I did NOT expect to determine my actual hearing response in any absolute, quantitative sense. I don't assume the Sennheisers are perfectly flat, and my process - though consistent - could hardly be as precise as what an audiologist would do. What I DID expect to accomplish was to determine, first, the response of my two ears relative to each other. Next, I was on the lookout for any extreme nonlinearity ... like a severe loss in some narrow frequency range. So with regard to my reasonable expectations, I think I was successful. Here's the data ... X in Hz, Y in dB ...

What I see is a uniform response with no extreme dips. That's good! What I also see is that my right ear has slightly lower response in both high and low frequencies than the left. I expected this because I observe it sometimes in my listening, and I know the right canal is a bit constricted compared to the left. The shape of the response is not totally unexpected given my 67 years. But I don't necessarily believe that it's as bad as it looks based on this process.

p.s. the process ... First ... wait until the lawnmower guys were gone! I started with a 320Hz tone and turned the XMC-2 volume down until the tone just disappeared. I went back and forth randomly turning the tone on and off with my eyes shut, to be sure that when I stopped I could definitively hear or not hear the faint tone. Then I worked down the scale to 40Hz and up the scale until I heard nothing at max volume on the high end. I wrote down the volume control value at each step, then inverted the data to create the chart. And yes, good process would suggest I reverse the headphones and do the test again to see if it's repeatable. I may do that. Other than the lawnmowers, my biggest challenge was hearing my own heartbeat!
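For what it's worth, here's a small sketch of the "invert the data" step - turning the volume setting at which each tone just disappears into a response relative to a reference frequency. The numbers below are made-up placeholders, not my actual readings:

```python
# Invert threshold volume settings into a relative hearing response:
# the quieter the setting at which a tone is still (just) audible, the more
# sensitive the ear at that frequency. Values are placeholders, not real data.
threshold_volume_db = {      # volume setting at which the tone just disappears
    40: -30, 80: -42, 160: -50, 320: -54, 640: -55,
    1250: -56, 2500: -52, 5000: -46, 10000: -34,
}

reference = threshold_volume_db[320]                 # chart everything relative to 320 Hz
relative_response = {f: reference - v for f, v in threshold_volume_db.items()}

for f, r in sorted(relative_response.items()):
    print(f"{f:6d} Hz: {r:+4d} dB re 320 Hz")
```

A more positive number means the tone at that frequency remained audible at a lower volume setting than the 320 Hz reference, i.e. better sensitivity there.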
|
|
|
Post by marcl on Aug 14, 2022 7:28:15 GMT -5
Here is a most exciting post for a thread called Measuring! AES Melbourne hosted John Mulcahy - creator of Room EQ Wizard - for their August Zoom meeting. Here's a link to the meeting report, which includes links to the Zoom recording on YouTube and a PDF download of his slides: www.aesmelbourne.org.au/aug2022-mtg-report/
A couple comments after watching the video ... John's credentials and technical chops are impressive. He gives a great overview of REW capabilities, a couple of which I was not aware of. When asked about products that implement EQ well ... he said "Dirac does a very good job". For those VERY technical folks ( KeithL ) he goes into some explanation of the math and analysis challenges and implementation decisions for several features. If you pay CLOSE attention, you can play it at 1.5X speed and understand every word. For your convenience:
|
|
|
Post by marcl on May 21, 2023 6:51:19 GMT -5
Regarding setting levels of all speakers in a 7.1.4 system .... a hypothesis:
1. The levels of all the speakers on the left side of the room should be the same at the left ear, and the levels of all the speakers on the right side of the room should be the same at the right ear.
2. Center level should be the same at both ears.
3. The levels at the opposite ear should be lower, and they don't matter. That's Inter-Aural Level Difference, and that's to be expected.
4. Levels should be measured with (an approximation of) HRTF, and microphones in the respective ears ... NOT with a single mic pointing up in unobstructed free space.
Lacking the funds for a Neumann head ... here's a cork head with silicone ears and the Sonicpresence SP15C binaural microphones ... one in each ear ... Initial measurements support the hypothesis. Initial listening tests are positive. More comprehensive testing today. Thoughts? Questions? Suggestions?
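If it helps to picture how the hypothesis could be checked: once each speaker has been swept into both ear mics and exported from REW as text, the near-ear levels can be compared directly. A rough sketch - the filenames, speaker names, 500Hz-8kHz averaging band, and simple dB averaging are all my own assumptions, not part of the actual procedure:

```python
# For each left-side speaker, compare the band-averaged level at the near (left)
# ear and the far (right) ear. Filenames are hypothetical REW text exports with
# "frequency  SPL  phase" columns and '*' comment lines.
import numpy as np

def avg_level(path, lo=500.0, hi=8000.0):
    data = np.loadtxt(path, comments='*')
    f, spl = data[:, 0], data[:, 1]
    return spl[(f >= lo) & (f <= hi)].mean()

left_side = ("front_left", "surround_left", "rear_left", "top_left")   # hypothetical names
for spk in left_side:
    near = avg_level(f"{spk}_left_ear.txt")
    far = avg_level(f"{spk}_right_ear.txt")
    print(f"{spk:15s}  near ear {near:5.1f} dB   far ear {far:5.1f} dB   ILD {near - far:+5.1f} dB")

# Per the hypothesis, the near-ear numbers should match each other;
# the far-ear numbers will generally be lower and are allowed to differ.
```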
|
|
|
Post by leonski on May 26, 2023 15:55:41 GMT -5
Some people just LIKE bass that "punches you in the gut"...
I tend to listen to classic rock and modern symphonic metal a lot of the time... Some concert venues sound quite good... but others clearly achieve one major goal... BEING LOUD!
And, from my experience, when it comes to rock or heavy metal, most clubs sound pretty bad...
(And, as a result, I have no desire whatsoever to have my home system "reproduce the live listening experience" - because it usually isn't worth reproducing.)
As I've said before... Many audiophiles do not actually want their home system to reproduce the original performance or what the recording engineer actually heard in the studio... What they really want is for their home system to sound like what they imagine the live performance, or the sound in the studio, should have sounded like... (And, quite often, that's not much like what is present in the actual recording at all.)
Mental-floss! I spent three days at Newport Jazz Festival listening to music ... 6 hours a day! It's an outdoor venue at Ft Adams State Park, Newport, RI. When I listen to live music I notice when it sometimes sounds good and sometimes not so good, and I try to understand why. I want to know, well, just because I want to know ... but also to correlate what live music sounds like compared to how it sounds in my room at home (and these days it almost always sounds better at home!). Now, I'm a drummer so I have heard what live music sounds like from within a band, and often without amplification. I know what instruments sound like. In today's world when we listen to live music we're listening to music played through speakers mixed by a sound engineer. At home we listen to recorded music (that was mixed by a sound engineer listening to speakers in a room) that is now being played through speakers in our room. At least at a live concert we and the sound engineer are listening to the same speakers in the same venue.

At live concerts sometimes I do a measurement with my phone, either just SPL level or sometimes with an FFT app that shows the frequency spectrum. Just a quick look to quantify and correlate ... what do I hear and what does the measurement show. So at Newport I did a few measurements, and even talked to a sound engineer once to gain a better understanding of why some bands sounded totally different from others on the same stage with the same sound equipment. I observed three general scenarios: very good and well-balanced sound; sound with one overbearing instrument that masked others, typically a bass drum or acoustic/electric bass; and sound with all of the bass boosted together into an indistinct boom, usually also at very high overall SPL. Since at a live outdoor concert you also hear the sound check for each band, some additional information is available to understand why it sounds the way it does. I concluded two things: what I heard was what the sound engineer heard; and what I heard - for better or worse - was what he intended for the audience to hear.

Now for some examples .... Arturo O'Farrill's Latin jazz quintet opened the weekend on Friday. Acoustic quintet with piano, double bass, drum set, percussion and trumpet. One would expect a flat frequency response with all instruments in balance, right? Well, during the sound check as the drummer hit each drum and the sound guy adjusted levels, I observed a strange thing ... when he got to the bass drum, the sound guy turned up the mic level until it fed back and then lowered it just to the threshold. Most people tune their bass drum between 40 and 60Hz and have some sort of muffler to limit ringing (I like mine on the low side, muffled), but some jazz drummers like a higher pitch and a bit more resonance. This drummer had his bass drum tuned to 79Hz and a bit resonant. And so for the entire hour set, every time he hit the bass drum there was a loud BOOM at 79Hz that lasted a couple seconds. The double bass was hard to hear to begin with, but when the bass drum played the bass was inaudible. I confirmed that this response was the same where I was seated 100ft from the stage, and at several locations walking around, including right next to the mix board. Here's the evidence: [View Attachment] Someone might say this is "what the artist intended". Not likely. As a jazz musician I'll say it makes no sense.
But further, the stage mix is completely different, and I was told by another sound engineer that they set up the system with a bass cancellation node at the stage and a "power node" out in the audience. The band hears the monitor mix, not the house mix. I observed the bass-drum-only anomaly with one other band, this time at 67Hz. [View Attachment] In this case, as with the first, I heard the sound guy intentionally turn the bass drum up during the sound check. I'll mention here that these examples were at the large stage with the stone wall of Ft Adams behind the stage. There is a smaller stage inside the fort, surrounded by stone walls that are hundreds of feet from the listening area. I did not observe the bass drum boom anomaly at the interior stage.

Here's an example of sound that clearly had a bass house curve applied. All instruments had their bass boosted at least 10dB above the rest of the mix. And let's not talk about our perception of bass below reference levels a la Fletcher-Munson ... these are sustained average levels at 90dB with peaks to 105dB! [View Attachment] Here are two more examples of house curves applied to the bass. Not as extreme, but to my ear very out of balance and definitely with the bass masking some of the instruments. One of these was at the interior stage where, earlier in the day, I had heard three bands with excellent sound ... and I complimented the engineer. When I returned and heard the boom, there was a different guy mixing. [View Attachment] [View Attachment]

When listening to these very loud mixes I wore ear plugs that attenuate about 10dB without changing the frequency response. But in a couple cases the bass boom was so distracting I left and went to the other stage. Bottom line .... this is just information and observation; and it illustrates my ongoing (and unfortunately increasing) dissatisfaction with listening to live music. Between extreme house curves and loudness wars, true high fidelity in music listening is becoming ever more elusive.

Like it LOUD? I was at the Swing Auditorium in San Bernardino CA back in the early 70s when Emerson Lake & Palmer came to town and did a show......in 4 channel 'surround sound'...... I think I may have permanently damaged my hearing...... Too bad a plane crash into the building was the reason it HAD to be demolished.....
|
|
|
Post by marcl on Jan 29, 2024 17:46:18 GMT -5
I came across a Stereophile reprint of a John Atkinson AES paper from 1997: "LOUDSPEAKERS: WHAT MEASUREMENTS CAN TELL US - AND WHAT THEY CAN'T TELL US"
48 pages! I downloaded the original from AES ... and I just read the ...
OVERALL CONCLUSIONS
While each measurement of a specific area of loudspeaker performance gives important information regarding possible sound behavior, it emerges that there is no direct mapping between any specific area of measured performance and any specific subjective attribute. As a result:
- Any sound quality attribute always depends on more than one measurement.
- No one measurement tells the whole story about a speaker's sound quality.
- Measuring the performance of a loudspeaker involves subjective choices.
- All measurements tell lies.
- Most important, while measurements can tell you how a loudspeaker sounds, they can't tell you how good it is.
If you carefully look at a complete set of measurements, you can actually work out a reasonably accurate prediction of how a loudspeaker will sound. However, the measured performance will not tell you if it's a good speaker or a great speaker, or if it's a good speaker or a rather boring-sounding speaker. To assess quality, the educated ear is still the only reliable judge.
And no matter how good any one measurement, if the beginning of the third movement of Beethoven's Fifth Symphony, where the composer introduces the trombones for the first time, or Jimi Hendrix's hammered-on tremolo at the start of "Voodoo Chile" on Electric Ladyland doesn't send shivers down your spine, the loudspeaker is still doing something, somewhere, wrong.
|
|
|
Post by marcl on Feb 29, 2024 13:23:01 GMT -5
I wanted to share this article, and this seemed to be the right place because the crux of Harman research - and the work of Toole, Olive, Welti, et al. - has been measurement. This is a report written by someone from Crutchfield who visited Harman and interviewed Sean Olive. I really like following Sean Olive ... even his FBook posts, which are personal and delve into areas unrelated to audio, are very interesting. So ... interesting article ... a good bit of talk about Spatial Audio and the proverbial Harman Target Curves ... sure makes you want to visit the Harman Experience Center! Oh and ... be sure to note the discussion on how to pick a Harman curve for your Dirac target! www.crutchfield.com/learn/crutchfield-visits-harman.html?fbclid=IwAR3uN03jcxXQ8tlajb5t7ONKa151EgazleqXPhc0hvXMNKJl6QC7_WWf9wE
|
|
ttocs
Global Moderator
I always have a wonderful time, wherever I am, whomever I'm with. (Elwood P Dowd)
Posts: 8,168
|
Post by ttocs on Feb 29, 2024 13:56:19 GMT -5
Shouldn't it really be called the "Harman Headphone Target Curve"? Wouldn't this make the case more clearly that the curve is aimed at headphones?
|
|
KeithL
Administrator
Posts: 10,274
|
Post by KeithL on Feb 29, 2024 16:46:06 GMT -5
Not really... If you read the article it says: “We found that on average, the [Harman headphone curve] closely matched the preferred in-room frequency response of a loudspeaker,” And, from that, it's obvious that they first had to determine "the preferred in-room response of a loudspeaker". And we shall hope or assume that room correction software is going to be using that "preferred in-room frequency response of a loudspeaker" as its target.
|
|
|
Post by marcl on Mar 31, 2024 9:18:02 GMT -5
Further to my musings with the StudioSix Digital iTestMic2 USB-C ... for use with Audio Tools for iPhone ... I connected the iTestMic2 to my PC and tried using it with REW without a calibration file. It was surprisingly close to the UMIK-1. So I applied the UMIK-1 calibration file to the iTestMic2 and it was even closer. So I tweaked the UMIK calibration file a bit and after about 5 iterations eyeballin' the differences above 6kHz, I think that's pretty darn close! My left front channel ... UMIK-1 in blue, iTestMic2 in green.
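Here's a sketch of what that tweak-and-eyeball loop could look like if automated: measure the same channel with both mics, take the dB difference above 6kHz, and fold it into the calibration file applied to the iTestMic2. The filenames are hypothetical, both measurements and the cal file are assumed to be plain "frequency dB" text, and the sign assumes REW subtracts cal values from the measurement (flip it if your workflow applies the cal the other way):

```python
# Fold the above-6 kHz difference between the two mics into the cal file.
# Filenames are hypothetical; all three files are assumed to be simple
# "frequency  dB" text (comment lines starting with '*' or '"').
import numpy as np

def load_fr(path):
    data = np.loadtxt(path, comments=('*', '"'))
    return data[:, 0], data[:, 1]

f_ref, db_ref = load_fr("left_front_umik1.txt")       # reference mic (UMIK-1)
f_dut, db_dut = load_fr("left_front_itestmic2.txt")   # mic being matched (iTestMic2)
f_cal, db_cal = load_fr("umik1_cal.txt")              # cal file currently applied to it

# Put everything on the cal file's frequency grid and correct only above 6 kHz.
diff = np.interp(f_cal, f_dut, db_dut) - np.interp(f_cal, f_ref, db_ref)
db_cal_new = np.where(f_cal >= 6000.0, db_cal + diff, db_cal)   # assumes cal is subtracted

np.savetxt("itestmic2_cal_tweaked.txt",
           np.column_stack([f_cal, db_cal_new]), fmt="%.2f\t%.3f")
```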
|
|
|
Post by marcl on May 4, 2024 10:28:16 GMT -5
I guess this is a good place to post this. fbczar sent me this online hearing test and I went through it today. If you decide to try it, read down through all the instructions, and do the separate high and low range tests for each ear rather than the combined 250-8k test. It very nicely accumulates all the results onto this single chart as you go. hearingtest.online/
Here are my results. Not surprising. Blue is left ear, red is right. Two years ago I did this test myself using the REW signal generator, dinking the volume on the processor 1dB at a time until I could hear the tone.
|
|