## Frequency Domain Signal Processing

John Edwards

**DHMarinov**

**john.edwards** (Speaker)

Hi,

Thank you for your nice comments.

Yes, this would be entirely possible but the important thing is that the FFT length >= N+M-1, as per the slides.

There is potentially one problem, however. If this is a live real-time application (e.g. music recording), then the additional latency of the FFT block processing might be too much.
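
A minimal NumPy sketch of that length constraint (function and variable names are illustrative, not from the talk): both sequences are zero-padded so the FFT length covers the full linear-convolution length N + M - 1; anything shorter and the circular convolution wraps around.

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via the FFT.  The FFT length must be
    >= N + M - 1, otherwise circular wrap-around corrupts the result."""
    n = len(x) + len(h) - 1           # minimum valid FFT length
    nfft = 1 << (n - 1).bit_length()  # round up to a power of two for speed
    X = np.fft.rfft(x, nfft)
    H = np.fft.rfft(h, nfft)
    y = np.fft.irfft(X * H, nfft)
    return y[:n]                      # trim the power-of-two padding

x = np.random.randn(1000)
h = np.ones(32) / 32                  # simple moving-average filter
assert np.allclose(fft_convolve(x, h), np.convolve(x, h))
```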

Best regards,

John

**lason**

Loved it ultipro

**john.edwards** (Speaker)

Thanks lason, much appreciated.

John

**HardRealTime**

Great talk! Forgive my inexperience, but I have a question about the frequency domain interpolation. Perhaps the answer is in the spatial resolution formula, but I don't understand it. How many additional points would be added to the time domain using this interpolation technique?

**john.edwards** (Speaker)

Hi,

The number of interpolation points is theoretically unlimited; it just depends on how much zero padding you apply.

In practice, this is a law of diminishing returns, but up to 8x or 16x will be fine.
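
As an illustrative NumPy sketch (my own function names, not from the talk): zero-padding the spectrum by a factor F and inverse-transforming yields F times as many time-domain points, i.e. band-limited interpolation. For an even-length frame the Nyquist bin must be split between the positive- and negative-frequency halves.

```python
import numpy as np

def zero_pad_interpolate(x, factor):
    """Interpolate a frame by zero-padding its spectrum, then inverse FFT.
    Assumes len(x) is even and the frame is treated as one period."""
    N = len(x)
    X = np.fft.fft(x)
    M = N * factor
    Y = np.zeros(M, dtype=complex)
    h = N // 2
    Y[:h] = X[:h]                 # positive frequencies
    Y[h] = X[h] / 2               # split the Nyquist bin between halves
    Y[M - h] = X[h] / 2
    Y[M - h + 1:] = X[h + 1:]     # negative frequencies
    return np.fft.ifft(Y) * factor  # rescale for the longer transform

# A bin-centred cosine is recovered exactly at the interpolated instants
n = np.arange(16)
x = np.cos(2 * np.pi * 3 * n / 16)
y = zero_pad_interpolate(x, 4)    # 16 samples in, 64 samples out
```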

**stefg**

Thank you very much,

Very interesting and directly in line with what I'm doing, you reminded me that I need to know more about the CIC filters.

**john.edwards** (Speaker)

Thank you, much appreciated.

**john.edwards** (Speaker)

I have added some more answers to the live discussion thread : https://www.dsponlineconference.com/meeting/Live_Discussion_Frequency_Domain_Signal_Processing

**Brewster**

Hi John, You mentioned defaulting to B-H windows. In high resolution systems (like > 120 dB dynamic range) there is (might be?) the concern of "high" sidelobes. So now I see Dolph (& related) windows being used to get sidelobes down to almost arbitrarily low levels. OTOH there's no such thing as a free lunch - what are your thoughts on window functions for FFT in this type of high dynamic range use?

**john.edwards** (Speaker)

Hi Brewster,

You are correct. In general signal processing applications (voice etc.) I tend to use the Hanning window, which has a good trade-off between mainlobe spreading and sidelobe magnitude. For higher resolution applications like radar, sonar and ultrasound I find the Blackman-Harris gives a bit more mainlobe spreading but lower sidelobes. I agree that the Dolph window can provide even better sidelobe performance, at the cost of mainlobe spreading.

The best thing to do is to test each window against a specific application requirement and see which fits best.
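
For a rough comparison, here is an illustrative NumPy sketch (the 4-term Blackman-Harris coefficients are the standard published ones) that measures each window's peak sidelobe level from a heavily zero-padded spectrum:

```python
import numpy as np

def blackman_harris(N):
    """4-term Blackman-Harris window, standard coefficients."""
    n = np.arange(N)
    a = [0.35875, 0.48829, 0.14128, 0.01168]
    t = 2 * np.pi * n / (N - 1)
    return a[0] - a[1] * np.cos(t) + a[2] * np.cos(2 * t) - a[3] * np.cos(3 * t)

def peak_sidelobe_db(w, nfft=1 << 16):
    """Worst sidelobe level relative to the mainlobe peak, in dB."""
    W = np.abs(np.fft.rfft(w, nfft))
    W /= W.max()
    i = 1
    while i + 1 < len(W) and W[i + 1] < W[i]:
        i += 1                        # walk down the mainlobe to its first null
    return 20 * np.log10(W[i:].max())

print(peak_sidelobe_db(np.hanning(512)))      # Hanning: roughly -31.5 dB
print(peak_sidelobe_db(blackman_harris(512))) # Blackman-Harris: roughly -92 dB
```

The trade-off shows up directly: the Blackman-Harris buys its much lower sidelobes with a mainlobe about twice as wide as the Hanning's.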

**ChrisBore**

This is the FFT N·log2(N) efficiency?

**john.edwards** (Speaker)

Absolutely correct, Chris

**christophe.blouet**

Thanks for this presentation, very inspiring. I'm sure I'll make use of some of the techniques presented.

**john.edwards** (Speaker)

Thanks Christophe, much appreciated.

**AllenDowney**

Very informative talk. Thank you!

**john.edwards** (Speaker)

Thanks Allen, much appreciated.

Good luck with your talk, I'm looking forward to it.

**Danilo**

Great overview.

**john.edwards** (Speaker)

Thanks Danilo, much appreciated.

**ChrisBore**

Fascinating, thank you

**john.edwards** (Speaker)

Thanks Chris, much appreciated.

Yours was excellent too :-)

**Brewster**

Hi John, I know your talk was more on 'processing' via FFT but I figure I'll ask this anyway... I still feel unsure when I'm faced with an application where estimating the exact peak and the exact frequency are needed, or when there are frequencies that are close to each other. Do you have a good pointer to an "idiot's guide to FFT use" that might provide a good summary of which technique(s) are best for this type of application?

**john.edwards** (Speaker)

Hi Brewster,

Good question. Yes, a common method for achieving this is to use interpolation in the frequency domain to improve the peak detection. This is the technique used in the ultrasound application I presented.

Another thing to note: in the scenario where the frequencies are close and also have similar magnitudes, it is best not to window the input data, because windowing spreads the mainlobe energy and so can be detrimental in these scenarios.
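
One common form of this interpolation, sketched here with illustrative names (the ultrasound application in the talk may use a different variant, e.g. zero padding), fits a parabola through the log magnitudes of the peak bin and its two neighbours to estimate a fractional-bin frequency:

```python
import numpy as np

def refined_peak_hz(x, fs):
    """Refine a spectral peak beyond one-bin resolution by fitting a
    parabola through the log magnitudes of the peak bin and its neighbours."""
    X = np.abs(np.fft.rfft(x))
    k = int(np.argmax(X))
    a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)  # fractional-bin offset, |delta| <= 0.5
    return (k + delta) * fs / len(x)

fs = 8000.0
n = np.arange(1024)
f_true = 1237.3                              # deliberately falls between FFT bins
x = np.sin(2 * np.pi * f_true * n / fs) * np.hanning(1024)
est = refined_peak_hz(x, fs)                 # much closer than the bin spacing fs/N
```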

**AaronEdwards**

Thanks John. As someone who works with Time Domain processing a lot in my day to day work, this presented a lot of good trade-offs and was very informative!

**john.edwards** (Speaker)

Thanks Aaron, much appreciated.

**woodpecker**

Hi John, A great talk - thanks! My feeling is that far too much AI is working on raw data, whereas even a little DSP pre-processing would reduce the MIPS by orders of magnitude.

I look forward to more of your informative lectures.

**john.edwards** (Speaker)

Thanks very much for your comments.

Yes, I totally agree. I have been doing a lot of work on Machine Learning, using the frequency domain to improve the performance of CNNs. I have a paper that will be presented to the TinyML Foundation shortly, so please keep an eye out. If Stephane runs this event next year, I will gladly present more details then.

**jonbram**

John, Great presentation, but one question. In the implementation of filtering in the frequency domain, does the choice of windowing function in the time domain affect the filter response of the filtering in the frequency domain?

**john.edwards** (Speaker)

Excellent question. If you are using one of the overlap methods, then the iFFT on the output unrolls the windowing effects from the input, so you do not need to use a window.
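
As an illustrative sketch of one overlap method (overlap-add, with block size and names of my own choosing): each input block is zero-padded to the FFT length rather than windowed, and the overlapping block outputs sum exactly to the linear convolution.

```python
import numpy as np

def overlap_add_filter(x, h, block=256):
    """Frequency-domain FIR filtering via overlap-add.  Each block is
    zero-padded to >= block + len(h) - 1, so no analysis window is needed."""
    nfft = 1
    while nfft < block + len(h) - 1:
        nfft *= 2                      # power-of-two FFT length, no wrap-around
    H = np.fft.rfft(h, nfft)
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]   # last block may be shorter; rfft pads it
        yseg = np.fft.irfft(np.fft.rfft(seg, nfft) * H, nfft)
        end = min(start + nfft, len(y))
        y[start:end] += yseg[:end - start]  # overlapping tails add up exactly
    return y

x = np.random.randn(5000)
h = np.random.randn(64)
assert np.allclose(overlap_add_filter(x, h), np.convolve(x, h))
```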

**jonbram**

Thanks.

Hello John,

The presentation was great! I have a question regarding the application of FFT as a filter.

I am trying to implement a convolutional reverb - this is basically a very long FIR filter whose coefficients are the impulse response of a physical environment. Unlike the filtering aspect of most FIR filters, this application also depends on the propagation delay of the filter (i.e. it is a desirable property).

Do you think it is possible to implement the convolutional reverb using FFTs while preserving the exact same response? If so, how would you do it?