V5.20 Beta 27 Vector Averaging Question, possible bug report

G29

Member
Thread Starter
Joined
Jun 20, 2019
Messages
79
I was using Vector Averaging for the first time to learn the feature and noticed something odd.

If I vector average 3 copies of the same measurement with the FDW turned on, the vector average's Spectral Decay is much faster than that of the duplicated original.

The 160ms slice/plane cut went from 52.3 dB @ 9.3 kHz in the duplicate to -57.4 dB @ 9.3 kHz in the vector average. I was expecting it to remain similar to the original duplicates.

Is this to be expected? It looks like the 160ms cut is now closer to the 300ms slice/plane cut.

UPDATE: The Waterfall Plot rolloffs are also much faster with the vector average than the original duplicates.

This is the original measurement with FDW that was duplicated 3 times.

[Attached image: Me3bOMY.jpg]


Here is the vector average of the 3 duplicates showing the faster decay.

[Attached image: 19K4CCB.jpg]


TIA
 

John Mulcahy

REW Author
Joined
Apr 3, 2017
Messages
7,297
You aren't comparing like with like. Waterfalls and decays are generated by moving a (conventional, not frequency dependent) window along the captured impulse response. Applying a frequency dependent window to the measurement doesn't change the original impulse response, so it doesn't have any effect on waterfalls or decays (you can see that by applying/removing the FDW and regenerating the waterfall).

Trace arithmetic behaves a little differently, since it operates on windowed data - the impulse responses being vector averaged are not the original impulse responses that are used when generating waterfalls from the original data. If a waterfall of a frequency-dependent windowed set of data is desired then trace arithmetic can be used to make a suitable response for that, which is in effect what you have done.
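In outline, the waterfall/decay generation works along these lines (an illustrative Python sketch only, not REW's actual code; the function name, the Hann window and the parameter values are assumptions made for the example):

```python
# Rough illustration of how waterfall slices are built: a fixed-length,
# conventional window is moved along the captured impulse response and the
# spectrum of each windowed segment forms one time slice. Not REW's code;
# "ir", the window shape and the parameters are assumptions for the example.
import numpy as np

def waterfall_slices(ir, fs, window_ms=300.0, step_ms=10.0, n_slices=30):
    """Return one magnitude spectrum (in dB) per time slice.

    Each slice windows the ORIGINAL impulse response from a later start
    point. No frequency dependent window is involved, which is why an FDW
    applied for the SPL/phase views leaves the waterfall unchanged.
    """
    win_len = int(fs * window_ms / 1000.0)
    step = int(fs * step_ms / 1000.0)
    window = np.hanning(win_len)            # conventional, frequency independent
    slices = []
    for k in range(n_slices):
        seg = ir[k * step : k * step + win_len]
        if len(seg) < win_len:              # zero pad past the end of the response
            seg = np.pad(seg, (0, win_len - len(seg)))
        spectrum = np.fft.rfft(seg * window)
        slices.append(20 * np.log10(np.abs(spectrum) + 1e-12))
    return slices
```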
 

G29

Member
Thread Starter
Joined
Jun 20, 2019
Messages
79
John Mulcahy said:
You aren't comparing like with like. Waterfalls and decays are generated by moving a (conventional, not frequency dependent) window along the captured impulse response. [...]

John,

You lost me up there.

Here are the steps I followed:
  1. Take a single channel measurement with FDW previously having been enabled in "Preferences/Analysis".
  2. Save the measurement off to the hard drive.
  3. Load the same measurement 3 times (3 identical duplicates).
  4. Use the "ALL SPL" window to "Time Align" and then "Vector Average" the 3 duplicate measurements.
  5. Finally, look at the plots (including Decay and Waterfall) of the duplicates and the vector average. The duplicates are identical, but the vector average is different (specifically the Decay and Waterfall, which appear to decay twice as fast as the original).
I was expecting that averaging 3 copies of the same measurement would give back the same measurement?
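As a sanity check on that expectation, here is a tiny numpy sketch (purely illustrative, nothing to do with REW's internals; the "response" array is just made-up data) showing that a vector (complex) average of identical responses gives back the same response:

```python
# Sanity check of the expectation above: a vector (complex) average of three
# identical responses is just that response again. Illustrative numpy only;
# "response" is made-up data, not an REW measurement.
import numpy as np

rng = np.random.default_rng(0)
response = rng.normal(size=512) + 1j * rng.normal(size=512)  # some complex frequency response
copies = [response.copy() for _ in range(3)]                 # three identical "measurements"

vector_average = np.mean(copies, axis=0)                     # complex (vector) average
print(np.allclose(vector_average, response))                 # prints True
```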

NOTE: The original measurement was done at 48 kHz.

UPDATE: I created a small test .mdat file with the 3 duplicates and the vector average that decays at twice the rate.
 

Attachments

  • Duplicate Vector Average Deltas.mdat
    3 MB

John Mulcahy

REW Author
Joined
Apr 3, 2017
Messages
7,297
Here is the relevant help entry from the trace arithmetic notes:
  • The currently applied impulse response window settings are used for each trace. The result uses the same window settings as trace A unless the operation was Merge B to A, in which case the window settings for trace B (the low frequency portion) are used. Any frequency-dependent settings are excluded, applying an FDW to the result would amount to applying the window twice, as it is already applied to the data used to produce the result.
It is the windows which complicate matters, and the frequency dependent window in particular. Averaging the result of applying a window to a measurement is not averaging the original measurement.

A conventional window is typically used to exclude portions of the measurement impulse response which are not of interest, such as regions that have decayed below the measurement noise floor or regions before the response starts. Little is lost by excluding them. A frequency dependent window uses a progressively narrower section of the impulse response to generate the result as frequency increases. It provides a way of viewing SPL (magnitude) and phase while increasingly excluding the influence of high frequency reflections. The windowed result is a view which intentionally excludes portions of the measurement.

A waterfall plot of the average of FDW results no longer shows what is happening in the response over time, as the FDW has removed much of the response at higher frequencies, so there is no longer anything there. The same would apply if a narrow conventional window was applied: there would be nothing in the result outside that window, so a waterfall plot would no longer show anything there.
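To make that concrete, here is a rough sketch of a frequency dependent window (illustrative Python only, not REW's implementation; the Hann taper, the 15 cycle width and the function name are assumptions). The span of the window around the direct sound shrinks as frequency rises, so later arrivals are progressively excluded from the high frequency result:

```python
# Illustrative frequency dependent window: at each analysis frequency the
# impulse response is windowed over a fixed number of cycles, so the window
# gets narrower (in time) as frequency increases and late reflections fall
# outside it. Not REW's code; names and parameters are assumptions.
import numpy as np

def fdw_magnitude(ir, fs, freqs, cycles=15, peak_index=0):
    """Magnitude (dB) at each frequency using a window `cycles` cycles wide,
    centred on the impulse response peak at sample `peak_index`."""
    n = len(ir)
    t = (np.arange(n) - peak_index) / fs             # time relative to the direct sound
    out = []
    for f in freqs:
        half_width = cycles / (2.0 * f)              # seconds; shrinks as f rises
        w = np.zeros(n)
        inside = np.abs(t) <= half_width
        w[inside] = 0.5 + 0.5 * np.cos(np.pi * t[inside] / half_width)  # Hann taper
        spectrum = np.fft.rfft(ir * w)
        k = int(round(f * n / fs))                   # FFT bin nearest to f
        out.append(20 * np.log10(np.abs(spectrum[k]) + 1e-12))
    return np.array(out)
```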
 