FSAF (Fast subband adaptive filtering) measurement

Wouldn't a dense multitone signal, like this 48 tones/octave one generated in REW, be "broadband" enough?

[Attached image: Unbenannt.png]
 
I'm a fan of M-Noise as a general noise stimulus that resembles the spectrum of typical audio, a lot better than pink noise anyway. I think Stoneeh is trying to compare the result of FSAF to a multitone IMD type of measurement, so he needs to keep the magnitude spectrum and crest factor similar or it's not a very equal comparison.
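To make the comparison concrete, here is a minimal Python/NumPy sketch of building a dense multitone at 48 tones per octave and checking its crest factor. This is not REW's own generator; the 20 Hz to 20 kHz range, the duration, and the random phases are my assumptions.

```python
# Illustrative sketch only (not REW's generator): a 48 tones/octave multitone
# with random phases, plus its crest factor.
import numpy as np

fs = 48000                      # sample rate, Hz (assumption)
dur = 2.0                       # seconds (assumption)
f_lo, f_hi = 20.0, 20000.0      # frequency range (assumption)
tones_per_octave = 48

octaves = np.log2(f_hi / f_lo)
freqs = f_lo * 2.0 ** (np.arange(int(octaves * tones_per_octave) + 1) / tones_per_octave)

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, freqs.size)   # random phases keep the crest factor moderate

t = np.arange(int(fs * dur)) / fs
x = np.zeros(t.size)
for f, ph in zip(freqs, phases):                  # sum all tones
    x += np.sin(2 * np.pi * f * t + ph)
x /= np.max(np.abs(x))                            # normalize to full scale

crest_db = 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))
print(f"{freqs.size} tones, crest factor about {crest_db:.1f} dB")
```

With fixed random phases the crest factor stays in a noise-like range, which is the property one would want to match when comparing against a noise stimulus.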
 
The only reason for the existence of simplified tests like sine sweeps and multi-tone IMD tests is their low computational load. Their interpretation in terms of music is ... debatable. Many self-proclaimed experts claim that by looking at FR, harmonics, and spinoramas, they can predict how a loudspeaker will sound in a given room. Others call it snake oil.

FSAF comes from the opposite direction. It lets you separate the distortion from the original, on the music you actually listen to. Then you are free to come up with simplified tests that are consistent with your perception of the distortions that are meaningful to you (because each person's hearing is unique). Not vice versa.
 
What level of explanation do you need? PhD or five-year-old?

The "original" does not refer to master tapes or acoustic fields in the recording studio. It refers to the digital recordings available to the end-user. It is not a problem to equalize your acoustic system to a picture-perfect, ruler-flat FR (say, 1m on the axis, integrated over a sphere by an omnidirectional mic). This signal is supposed to be a linear-time-invariant (LTI) transformation of the digital recordings - but it is not. The differences from the LTI are what FSAF sees as distortions. Does this explanation suffice?
 
I guess 5 years old is my level, since @dcibel made me understand lol
thanks to both



Now this was very useful, thanks.
 
John, if the FSAF measurement is done from a file, what does the TD+N distortion graph show: the sum of all distortions plus noise? If the signal received by the microphone differs from the file, how can I highlight that difference on the graph? Or is the TD+N graph itself the difference graph?
 
The distortion graph for FSAF is the spectrum of the residual. So it's total distortion + noise for the duration of the audio, regardless of what the input audio is. For more detailed analysis, listen to the residual audio, and/or use "load FSAF residual" and then view it in the spectrogram.
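As a rough numeric sketch of what that graph summarizes (assuming `measured` and `residual` arrays like those in the earlier sketch; the Welch parameters are arbitrary and this is not REW's exact computation):

```python
# Spectrum of the residual plus a single TD+N figure over the whole clip.
import numpy as np
from scipy.signal import welch

def residual_spectrum_and_tdn(measured, residual, fs=48000):
    """Residual spectrum and total distortion + noise relative to the mic signal."""
    f, psd = welch(residual, fs=fs, nperseg=8192)                     # residual spectrum
    td_n_db = 10 * np.log10(np.sum(residual ** 2) / np.sum(measured ** 2))
    return f, psd, td_n_db
```

The single dB number is the energy ratio of residual to captured signal over the full duration, which matches the description above: one total distortion + noise figure for the whole piece of audio, with the spectrum and spectrogram available for closer inspection.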
 