

·
Registered
Joined
·
151 Posts
Discussion Starter · #1 ·
I use REW to measure my project studio environment. I sit in a big fat cushy office chair and was wondering if I should move it when I measure. I feel like it should be there, but I also know from reading the help files that even I'm supposed to move out of the way when I measure, since a large object next to the mic can cause mid-range (?) exaggerations in the measurement. When I measure with and without the chair, there is a decent difference in response between 400 Hz and 1.5 kHz.

Should I move it out of the way, or measure with it there to properly emulate my actual listening environment? It's actually easier for me to measure with it there as well, as I hate moving it lol.

Studio 120 JML.jpg

 

·
Registered
Joined
·
174 Posts
I would not move the chair. Like you indicated, it will be there when you are listening. Looks like a nice setup. Quite the 'man-cave'!
 

·
Registered
Joined
·
151 Posts
Discussion Starter · #3 ·
Thanks for your reply... that's what I was hoping... just wondering why I can't be close to the mic during measuring but the chair can. Lol... I usually tuck beneath the desk when I click "Start Measuring".

And regarding the man-cave statement... it's funny, I haven't worked on a single track since I got into room analysis (about 5-6 weeks now). I knew this was gonna happen lol.
 

·
Registered
Joined
·
237 Posts
As the chair is part of the room environment, I would leave it where it normally is. It would be interesting to take measurements with and without the chair to see how much difference it makes.
 

·
Registered
Joined
·
151 Posts
Discussion Starter · #5 ·
I have taken measurements with both... there were differences from 400 Hz to 1.5 kHz; where there was a peak in one there was a null in the other, and vice versa.
 

·
Banned
Joined
·
516 Posts
How are you "taking measurements"?

And what measurements are you taking? From the description it seems that this is limited to "a" frequency response.

You have several issues going on here that require several steps to address.

First you are going to have modal issues in the space.
For this the frequency response and derived waterfall are adequate.

But you are dealing with a mixed environment of both a direct and modal response.

You are also dealing with the effects of the superposition of the direct and indirect specular energy wavelets that dominate above the modal region.

For these issues you will drive one speaker at a time (with the mic remaining in the same listening position) and generate an impulse response from which you will convolve the Envelope Time Curve (the ETC).

From this you can identify the various direct and indirect signals, determine their arrival times and gain levels, and determine which are detrimental. For the detrimental signals, you can then determine their individual paths and points of boundary incidence within the space – be it the work surface, walls, ceiling, chair, etc. – from the ETC.

The interaction of these signals is what is causing the comb filtering in the frequency response (from which we can tell little - except that you have comb filtering, especially if it was derived from both speakers being driven simultaneously!), and for which the ETC must be used.

Knowing the information regarding each signal path, you can then appropriately treat each causal signal in order to remediate the derivative issues that appear in a frequency response.


The chair will detrimentally contribute to the indirect signals. And knowing which specific reflections are due to this cause will allow you to investigate them further; some may be remediated by one's presence sitting in the chair.

But as the chair extends above the torso, and functions as a reflective surface in near proximity to the ears, it is pretty safe to predict that it is a primary source of problems in the space. Thus you can either treat the chair (not likely) or instead use one appropriate to the process that does not actively contribute to problems with imaging, localization, tonality and intelligibility via its contribution to early reflections very near the ears.
 

·
Registered
Joined
·
151 Posts
Discussion Starter · #8 ·
Wow, a very descriptive answer... thank you! And yes, the only measurements I took were freq. response. I didn't even check the waterfalls.

I'm not quite sure what this part of your message means:

"For these issues you will drive one speaker at a time (with the mic remaining in the same listening position) and generate an impulse response from which you will convolve the Envelope Time Curve (the ETC)."

Perhaps I should just look in REW's help files but I've yet to even use the IR function in REW, let alone "convolve the ETC" from it. Is there a link I should check out to learn more? I'm also a little uncertain about when I should measure with one speaker at a time and with both speakers.

It actually did dawn on me that I am using a high-back chair and that the head portion would directly reflect signals "back" towards my ears. And lol, treating the chair seems silly, though even a soft wooly blanket might help with some HF frequencies, yes? Also, me sitting in the chair would reflect audio waves in much different directions than the empty and quite firm leather chair.

And finally:
"it is pretty safe to predict that it <the chair> is a primary source of problems in the space."

So I need to do a lot more testing other than freq. response to see this? As I mentioned, I've done a lot of freq response measurements with/without the chair and see no difference outside the 400 Hz-1.5 kHz range... but only using freq response. I guess it's time to learn how to do these other REW tests!

Thx for ALL the responses!
 

·
Banned
Joined
·
516 Posts
A couple of general comments. The frequency response is really not very useful. I know, I know… everyone has seen this and worshiped it as if it were magic since they were small, but few can really tell you much about what it specifically 'says'. For good reason. It doesn't. It basically either tells you the magnitude versus frequency of the direct signal (sans summation – called superposition – of indirect signals), or it is (more likely) indicative of the destructive interference patterns created by the superposition of direct and indirect signals - often called comb filtering after the pattern it creates in the frequency response.

But unfortunately the frequency response provides absolutely no information about what/where the various superposed indirect signals might be. And to add insult to injury, you don’t need a measurement to reasonably predict that you have the superposition of direct and indirect energies in a small bounded acoustical space that result in the spatially distributed exaggeration (sum) and cancellation of energies at various frequencies and positions within the room – commonly referred to as comb filtering for the pattern it creates in a frequency response made at that particular location.
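
To put some rough numbers on why that pattern lands where it does, here is a minimal Python sketch (the 1.4 ft path difference is an assumed example, not a measurement of your room) showing where the nulls fall for a single reflection arriving just after the direct sound:

Code:
# Hedged sketch: null frequencies produced by one reflection whose path is
# 'extra_path_ft' longer than the direct path (classic comb filtering).
SPEED_OF_SOUND_FT_PER_S = 1130.0  # approximate speed of sound at room temperature

def comb_null_frequencies(extra_path_ft, f_max_hz=2000.0):
    # Nulls fall at odd multiples of half the inverse delay: f = (2k + 1) / (2 * dt)
    dt = extra_path_ft / SPEED_OF_SOUND_FT_PER_S  # extra travel time in seconds
    nulls = []
    k = 0
    while (2 * k + 1) / (2 * dt) <= f_max_hz:
        nulls.append(round((2 * k + 1) / (2 * dt), 1))
        k += 1
    return nulls

# A reflection traveling ~1.4 ft farther than the direct sound (hypothetical)
# puts its first nulls right in the 400 Hz - 1.5 kHz band mentioned above:
print(comb_null_frequencies(1.4))  # -> [403.6, 1210.7]

Change the extra path length (move the mic, or move/remove the chair) and every null shifts - which is consistent with the peak/null swapping you saw between the with-chair and without-chair measurements.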


So we need to address this issue of the summation of direct with indirect energies in two steps.
The first is to deal with energies whose wavelengths are longer than the room dimensions are large. These are called modal issues, and they result from wavelengths longer than the room dimensions being reflected from the massive room boundaries in such a manner that they establish standing waves – room modes – manifested as regions of high and low pressures where these low resonant frequencies are emphasized (anti-nodes or peaks) and minimized (nodes or nulls). This region ranges primarily from about 0 Hz to ~250 Hz.
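
For a rough feel of where these modal frequencies sit, here is a minimal sketch with invented room dimensions (not your actual room); the axial modes of a rectangular room follow f = n x c / (2 x L):

Code:
# Hedged sketch: axial room-mode frequencies for a rectangular room.
# The dimensions below are hypothetical placeholders, not the poster's room.
SPEED_OF_SOUND_FT_PER_S = 1130.0

def axial_modes(dimension_ft, f_limit_hz=250.0):
    # Axial modes along one dimension: f_n = n * c / (2 * L)
    modes = []
    n = 1
    while n * SPEED_OF_SOUND_FT_PER_S / (2.0 * dimension_ft) <= f_limit_hz:
        modes.append(round(n * SPEED_OF_SOUND_FT_PER_S / (2.0 * dimension_ft), 1))
        n += 1
    return modes

for name, dim_ft in [("length", 13.0), ("width", 10.0), ("height", 8.0)]:
    print(name, axial_modes(dim_ft))
# length [43.5, 86.9, 130.4, 173.8, 217.3]
# width [56.5, 113.0, 169.5, 226.0]
# height [70.6, 141.2, 211.9]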

For this, with the mic positioned in the listening position where one’s head would be, we can drive both speakers and derive the frequency response. From this frequency response we can convolve the waterfall response that will better indicate the persistent resonant frequencies. In the waterfall you look for the predominant ridges that extend outward from the primary plot.

In order to resolve the modal issues, you have a variety of choices. Most common is adjusting the speaker and listener positions such that the peak and null regions are minimized or avoided. To the degree that that cannot be done, you can use equalization to help reduce the modal peaks (it will not mitigate the modal nulls). And you can also use velocity based porous absorption, and/or pressure based tuned resonant absorbers to minimize the excess resonant energy.


Above ~250 Hz, we deal with a slightly different manifestation of the reflected energy. Here the wavelengths of sound are smaller than the room dimensions, and as a result are reflected in a focused pattern. This behavior is similar to a ray and is referred to as being specular, where, generally speaking, the angle of incidence with a boundary is equal to the angle of reflection.

In order to evaluate this behavior we need a tool capable of ‘seeing’ each specular source as a discrete entity. This tool is called the Envelope Time Curve, or ETC. And we shift our focus from the frequency domain to the time domain.

In order to do this, with the mic remaining in the listening position, we drive only one speaker at a time and we generate a frequency response. We then select and convolve the impulse response from that view, and from the impulse response we select and convolve the ETC response. We do this for each speaker source. In a stereo system this would mean for both left and right speakers.

This response shows the relative gain and arrival time of each specular energy wavelet. From this we can rather quickly determine the distance traveled and the direction traveled, and from this the precise point of incidence with a boundary.
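
The arithmetic behind that step is simply multiplication by the speed of sound. A minimal sketch with made-up arrival times (not your data):

Code:
# Hedged sketch: convert ETC arrival times into travel distances.
# All times below are hypothetical placeholders, not measured values.
SPEED_OF_SOUND_FT_PER_MS = 1.13  # ~1130 ft/s

def path_length_ft(arrival_ms):
    # Total distance traveled by energy arriving 'arrival_ms' after the stimulus left the speaker
    return arrival_ms * SPEED_OF_SOUND_FT_PER_MS

direct_ms = 3.0       # example direct-sound time of flight
reflection_ms = 6.5   # example indirect arrival
extra_ft = path_length_ft(reflection_ms) - path_length_ft(direct_ms)
print(round(path_length_ft(reflection_ms), 2), round(extra_ft, 2))
# -> roughly 7.3 ft of total travel, about 4.0 ft longer than the direct path

A string cut to that total path length, with its ends held at the speaker and the mic capsule, sweeps out the possible bounce points - which is the idea behind the 'string method' of walking a reflection back to its boundary.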

And then, in conjunction with a selected acoustical response model, we are able to examine the ETC and determine the relative behaviors of the various direct and indirect energy sources. From this we can then determine which sources are detrimental to the total response, and treat the various sources appropriately in order to achieve the desired total response with absorption, reflection/redirection, or diffusion. Equalization will not work here.

It is this process that creates the desired psycho-acoustic response and also that resolves the problems that appear in the form of a comb filtering pattern in a frequency response.


...Also, it is unlikely that a "soft wooly blanket" will be sufficient to solve the problem with reflections from the high chair back. But you are welcome to try, as the ETC will allow you to compare before and after treatment scenarios. The easiest solution may be to consider finding another chair more appropriate to the application.

Good luck.
 

·
Registered
Joined
·
151 Posts
Discussion Starter · #10 ·
Cool, I'm gonna try this.

One question... I calibrated my setup to 75 dB with BOTH speakers. Do I need to turn each speaker up to reach 75 dB individually when measuring?

THX!
 

·
Banned
Joined
·
516 Posts
For our purposes there is no need to calibrate for gain.

You would need to generate a test stimulus of high enough gain to allow for adequate signal to noise - but not so loud that you fry your tweeters. How is that for some latitude?:D

I would establish your gain levels - starting low at first, no sense in blowing anything up! - and increase them until you obtain a well-defined response - typically in the ~85-90 dB SPL range. That should hopefully provide at least ~60 dB S/N, adequate to obtain sufficient detail of both the early-arriving signals as well as any later-arriving indirect signals.
 

·
Registered
Joined
·
151 Posts
Discussion Starter · #12 ·
Ok, so I set up my mic in my listening position and ran response tests on each individual speaker. I clicked IR, ETC, and Window in the graph. This is how they look. Please let me know if I'm displaying them correctly... I found info on how to display frequency response and waterfalls, but not IR.

IR L 78 090711.jpg

IR R 78 090711.jpg

I still haven't had much time to look into how to evaluate these graphs, but I read something about seeing where the peaks are, using that data to measure and cut a piece of string, and then determining where those "wavelets" are bouncing from using the string. I could be totally wrong, but that's the closest I've come so far to figuring out what to do with this data.

Any ideas/thoughts?

THANKS SAC!!!
 

·
Banned
Joined
·
516 Posts
May I ask a favor and request that you post the .mdat file... This will allow us to open and massage the response as well and save a bit of time.

That said, you would do well to window it a bit...:D

1.74 s x 1130 ft/s = ~1966 feet of travel. :ponder:
Now I am sure you have a nice room, but I think (...just call it a 'feeling') that none of the reflections travel ~.4 miles....:bigsmile:

So, to start with, we might zoom in a bit so that we can better see what is happening while encompassing all of the behavior in our room. You can do this by accessing the "Limits" preferences panel in the upper right...

You might window the X axis time scale from about -2 ms to about 40 ms. As sound travels ~1.13 ft/ms, that would allow you to see signals arriving within a travel distance of ~45 feet, more than enough for most rooms ~20 feet or so deep. It can be windowed down from there as necessary to close in on any detail. Note that on this time scale we want to zoom into what now appears to be about the first 1/2" of the response.
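
The same conversion can be run in reverse to pick a window; a quick sketch (the 45-foot figure is just the example above, not a property of your room):

Code:
# Hedged sketch: pick an ETC time window from the longest reflection path of interest.
SPEED_OF_SOUND_FT_PER_MS = 1.13  # ~1130 ft/s

def window_ms(max_path_ft):
    # Time window needed to display reflections that travel up to 'max_path_ft'
    return max_path_ft / SPEED_OF_SOUND_FT_PER_MS

print(round(window_ms(45.0), 1))  # -> 39.8, i.e. the ~40 ms suggested above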

The Y axis gain scale can be set to display from about 0 to -50 dBfs to start, and can be adjusted thereafter as appropriate.
This should give us a pretty good overview of the space - without including reflections from down the street. We'll let them deal with their own treatment!:D

Ultimately, as your space is rather small, and many of the reflections are from surfaces close to the measurement mic (e.g: the chair), you may end up windowing the display quite a bit more with respect to time.

(BTW, pardon my attempts at light hearted humor. No offense is meant, and I certainly hope that nothing is interpreted in that manner! I am simply trying to have some fun in this process!)
 

·
Registered
Joined
·
151 Posts
Discussion Starter · #14 · (Edited)
Thanks for all the help. I had some trouble uploading the .mdats... it would just keep loading and then say "failed to upload...". Here are the JPEGs, though, at the limit specifications you suggested (I hope lol). I will try the .mdats again in a few minutes or by the end of the day.

left ir 40ms.jpg

right ir 40ms.jpg

And no, I don't find your humor to be offensive at all!! I'm enjoying it, glad you're having fun, and REALLY appreciate your help!! :R:dumbcrazy:

EDIT*** I've tried the .mdat again and again and it always takes forever and then says "failed to upload...". Is there something I did wrong in the saving process? I have a pretty decent internet connection and they're below the 5 MB file size limit... hmmmmm. Any ideas?
 

·
Banned
Joined
·
516 Posts



Before proceeding be SURE to have marked the mic capsule position in 3Space with a technique such as a plumb bob that remains in place until ALL treatment is over (just pull it out of the way and fasten it). And remember, if you move the speakers or mic, ALL of the measurements must be repeated and re-analyzed!

Total time of flight (TOF) – in order to know the total distance traveled from the test signal source (speaker) to the mic, we need to know the TOF.

First a couple of caveats of which one must be aware when first approaching the process of interpreting the ETC in REW:

First let me express two small concerns about REW; two small warts on an otherwise wonderfully powerful and capable package at one heck of a price!!!

The first concern revolves around the fact that in acoustics we use the dB SPL scale. I know of no equivalency between dB SPL references measured in dynes/cm^2 and ‘decibels relative to full scale’, commonly abbreviated dBfs, which measures decibel amplitude levels in digital systems such as pulse-code modulation which have a defined maximum available peak level and where 0 dBFS is assigned to the maximum possible digital level. Both are useful within their own appropriate spheres of influence. REW would benefit significantly from the ability to be able to selectively toggle between the two gain scales and to have one or the other to be selectively designated as the default based upon use.

The option to select a gain scale and, by default, to assign a general reference to the SPL scale (which could then optionally be calibrated in the rare instance someone needed that) would be extremely useful for acoustic applications. For our purposes, we don't need to know the absolute gain scale, as our purposes are defined in terms of relative levels. In other words, we only care about how loud or soft one signal is relative to another…

The other concern is that the default time assignment axis and direct signal view does not correspond to the actual arrival time of the source stimulus. In short, the arrival time displayed on the X axis time display does not correlate directly to the actual stimulus signal arrival time – the TOF.

If this capability were provided, (assuming one has performed the NECESSARY HW propagation/latency loopback in order to account for the latency overhead that is imposed by the hardware system in order to achieve accurate time display), one could then optionally translate the X axis values and assign T=0 for any point as desired, should they need such niche capability. This feature, while nice is a special purpose function, useful for niche processing purposes, such as for determining the time delta between different signal arrivals useful for time offset delay line settings between drivers or array elements for signal alignment purposes.

Please, John!!! Would it be possible to make these three small adjustments? They would render REW much friendlier by default for acoustics use and add to its already significant abilities.

(Also, it would be very useful to allow user defined settings defined in the “Limits” panel to be toggled as a default view.)

So, let's take a look at one work-around for the situation, sufficient for us to make sense of the critical total Time of Flight (TOF) calculation.

So as far as determining the time, distance and path of travel for each specular signal, we have two choices, dependent primarily upon which method of analysis one is comfortable with.

Initially I would suggest doing a couple of 'string' analyses, if for no other purpose than visualization and familiarization with the concept. Thereafter, for general use, I would strongly suggest employing the blocking method, as it is relatively quick and easy – and it can generally be done by just one person.





I have emailed you a comprehensive procedure for evaluating the reflections and identifying and precisely determining the paths and the locations of boundary incidence that determine where specific treatments must be placed.

Contact me if any of the steps are confusing. And we can talk by voice via the relayed info so that you can quickly get this done with a minimum of frustration and a maximum of understanding, and the ability to actually see and understand the results. Also, to this end, you can email me the .mdat files and I will be glad to convolve them and help you quickly zero in on the issues with a few examples.


The process is very simple and straightforward once you can visualize what is going on – sometimes a bit difficult to do when hearing this process described in words instead of a simple demo.




Attached is the procedure for the string method. Note that for our purposes with REW, since we do not know the actual time of flight (the time before T=0 that it takes for the direct-arriving stimulus to travel from the source speaker to the mic), we must carefully and accurately measure this distance from the acoustic center of the source (for our purposes, the geometric center of the speaker baffle – NOT the tweeter!) to the mic capsule, as precisely as possible!

This distance must be added to each arrival time (converted to distance) in order to determine each arrival's TOF. Once that TOF is determined, the general method may be followed.
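
A minimal sketch of that bookkeeping (every number below is an assumed example, not taken from your .mdat files):

Code:
# Hedged sketch: without a loopback, t=0 on the ETC is the direct arrival, so the
# physically measured speaker-to-mic distance is added to each converted arrival time.
SPEED_OF_SOUND_FT_PER_MS = 1.13

def total_path_ft(arrival_after_direct_ms, direct_distance_ft):
    # Total source-to-mic path length (the TOF distance) of an indirect arrival
    return direct_distance_ft + arrival_after_direct_ms * SPEED_OF_SOUND_FT_PER_MS

direct_ft = 3.5  # example baffle-center-to-capsule distance measured with a tape
for arrival_ms in (1.2, 4.6, 9.1):  # example ETC spikes, times relative to the direct sound
    print(arrival_ms, "ms ->", round(total_path_ft(arrival_ms, direct_ft), 1), "ft total path")

Each of those total path lengths is then the length of string used for the string method.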

As the blocking method is an empirical method and uses the display only to help us determine general guideline info, the TOF is not needed. This method is used when we simply desire to determine the vector path and point of boundary incidence in order to determine where treatment should be placed, if treatment is desired, for the corresponding indirect signal. In other words, if we successfully block a signal's specular pathway, the spike corresponding to that energy's gain will be diminished.
 

·
Premium Member
Joined
·
6,711 Posts
The first concern revolves around the fact that in acoustics we use the dB SPL scale. I know of no equivalency between dB SPL references measured in dynes/cm^2 and ‘decibels relative to full scale’, commonly abbreviated dBfs, which measures decibel amplitude levels in digital systems such as pulse-code modulation which have a defined maximum available peak level and where 0 dBFS is assigned to the maximum possible digital level. Both are useful within their own appropriate spheres of influence. REW would benefit significantly from the ability to be able to selectively toggle between the two gain scales and to have one or the other to be selectively designated as the default based upon use.
dB FS and dB SPL differ only by an offset, i.e. a 10 dB change on the dB FS scale is a 10 dB change on the dB SPL scale. I'm not sure that adding the offset to provide an SPL value brings a benefit in the interpretation of the data.
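
To illustrate with a minimal sketch (the calibration offset below is an assumed example value, not anything built into REW): once a single reference level is known, converting between the scales is just adding a constant.

Code:
# Hedged sketch: dB FS and dB SPL differ only by a constant calibration offset.
CAL_OFFSET_DB = 105.0  # assumed: 0 dBFS at the mic input corresponds to 105 dB SPL

def dbfs_to_dbspl(level_dbfs):
    return level_dbfs + CAL_OFFSET_DB

# A 10 dB change on one scale is a 10 dB change on the other:
print(dbfs_to_dbspl(-20.0) - dbfs_to_dbspl(-30.0))  # -> 10.0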

The other concern is that the default time assignment axis and direct signal view does not correspond to the actual arrival time of the source stimulus. In short, the arrival time displayed on the X axis time display does not correlate directly to the actual stimulus signal arrival time – the TOF.
If the Analysis preferences box to use a loopback as a timing reference is checked all times are shown relative to the arrival of the loopback signal, so in the typical case where the loopback is connected at the soundcard input/output the times include the acoustic time of flight and any processing delays in equipment connected after that point.

If this capability were provided, (assuming one has performed the NECESSARY HW propagation/latency loopback in order to account for the latency overhead that is imposed by the hardware system in order to achieve accurate time display), one could then optionally translate the X axis values and assign T=0 for any point as desired, should they need such niche capability. This feature, while nice is a special purpose function, useful for niche processing purposes, such as for determining the time delta between different signal arrivals useful for time offset delay line settings between drivers or array for signal alignment purposes.
The impulse graph includes controls to shift the t=0 reference point by a user-selectable amount. Time and amplitude deltas can also be measured from the graph by pressing the ctrl key and the right mouse button and dragging.
 

·
Banned
Joined
·
516 Posts
Thank you so VERY MUCH John for that info about the TOF/loopback display and the additional time delta displays!!!! :T

Now to make all aware of this!!!! :olddude:
 

·
Banned
Joined
·
516 Posts
Thanks for posting the .mdat files.

Also, have you noted John's very helpful post regarding the use of the loopback for hardware propagation delay correction and then selecting the 'Using loopback' option in the Analysis preferences?

The use of this option will enable a precise determination of the direct readout of each Time Of Flight (TOF) value for each energy arrival.

Henceforth this should be considered a prerequisite for any sort of analysis of indirect signals.

That said, attached you will find a bunch of snapshots of various energy arrivals marked by cursor, with the arrival times listed on the X (time) axis.

Per the earlier posted procedures (and the ones sent via email), I would suggest doing a few string method resolutions in order to identify the flight path, which will necessarily provide the direction of travel of each specular path, and also the point of boundary incidence where treatment would be applied. The type of treatment applied would depend upon the nature of the reflection (arrival time and gain) in context with the overall acoustical response model you wish to achieve.

All relatively abstract thus far, but that pretty much sums up 'the process'.

(From a practical perspective... frankly, my suggestion would be to forgo using data provided partly by the ETC and partly by trying to physically measure the distance from speaker source to mic. That simply introduces too much error, and quite a bit of 'mistake'! I would seriously suggest repeating the two easily made impulse/ETC responses using the hardware loopback, and then directly reading the arrival times from the ETC response, taking advantage of the increased accuracy and precision afforded by the measurement platform.)

After you have done a couple string path determinations in order to get a good visual grasp of the concept, I would then jump to the blocking technique.

In order to get this down, and to ACCURATELY get a grasp of the tricks of this process, I would work on blocking and identifying a major, very distinct reflection such as the one at ~7.73 ms, and then, after you successfully walk it back to the boundary, verify the path agreement with the string method. This practice will ensure that you are properly interpreting the graphical results and that you are getting maximum value for your effort.

Each time you do this, mark and LABEL(!) each boundary incidence point with a piece of blue painter's tape, uniquely annotated on a printout of the ETC with the unique reflection number.

Regarding the very early energy arrivals:

Determining the precise point of origin with the blocking method will be a bit difficult here, as multiple points will potentially be clustered in a very small region. Additionally, some may be off reflective surfaces such as the chair back, or off the work surface. But some can be a result of diffractive irregularities in the boundary plane near the source speaker. You might try employing the string method for these. With a bit of experience and an awareness of the physical characteristics that cause diffraction, experience-based pattern recognition will kick in and greatly speed the process of identifying these causal sources. So it does become easier as you learn what the various causes of the anomalous behavior look like.

How is that for a start?
Again, if you step through this and experience any confusion, feel free to PM me. And if you have YM or Skype and a headset, or as a last resort, a North American telephone number, I will be glad to interactively try to lend a hand and make the process as painless as possible.

I do believe that you will quickly get the hang of these methods. And you will see that the verbal description is much more awkward and confusing than the actual process.

Next time, after you have an opportunity to complete an inventory of the various incident points for each of the 'significant' indirect energy paths, we can begin to analyze and assess them for optimal methods of treatment, and get an idea as to how the plot can help us determine options for the types of treatment to consider relative to the acoustical response goal you desire to achieve in the space.

 

·
Registered
Joined
·
151 Posts
Discussion Starter · #20 ·
Hey everyone... sorry about the lack of response and inability to focus on this. My family owns a restaurant and we lost our head chef, which means I've had to step in and work two jobs. I appreciate everyone's assistance and will dedicate my full focus and time to this when it's available!!!

THX!!!! Be back soon!
 