I am setting up a 3-way, 2-channel stereo system using Audiolense XO. I have generated two sets of correction filters from the same measurement and target: one with the "Minimum Phase Xover" option checked, and one with it unchecked, which I presume implies a linear-phase crossover. The simulated frequency responses are similar. However, when listening to them, the version without "Minimum Phase Xover" works correctly only at sample rates that are multiples of 48 kHz, while the version with "Minimum Phase Xover" checked works correctly only at sample rates that are multiples of 44.1 kHz.
Playback uses the Audiolense convolver 1.6, hence only one .alc file. When it's not working correctly, you can hear that the phases are all wrong: there is no center image for vocals, some of the bass gets cancelled, etc.
In the filter generation process, I selected all sample rates, i.e. 44.1, 48, 88.2, 96, 176.4 and 192 kHz. The original measurement was done at 192 kHz in Audiolense.
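For what it's worth, here is a rough sketch of why I suspect a rate mismatch: if a FIR correction filter designed for one sample rate is convolved at another, every feature of its response scales by the rate ratio, so the crossover corner (and the phase behaviour around it) lands in the wrong place. The cutoff and tap count below are made-up illustration numbers, not my actual crossover settings:

```python
import numpy as np
from scipy.signal import firwin

fs_design = 44100   # rate the crossover filter was generated for
fs_play = 48000     # rate the convolver actually runs at
fc = 2000.0         # hypothetical crossover frequency, Hz

# Linear-phase low-pass leg of a crossover, designed at fs_design.
taps = firwin(1023, fc, fs=fs_design)

# If this same tap set is convolved at fs_play, the whole response
# scales by fs_play / fs_design, so the effective corner moves.
effective_fc = fc * fs_play / fs_design
print(round(effective_fc, 1))  # 2176.9
```

So even a small 44.1-vs-48 mismatch shifts a 2 kHz crossover by almost 180 Hz, which would explain crossover-region cancellation, though it doesn't explain why each filter set prefers a different rate family.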
Is there any additional setting I need to change in order to generate both the linear-phase and the minimum-phase XO correctly?