SPTK: Ideal Filters

Ideal filters have rectangular or unit-step-like transfer functions and so are not physical. But they permit much insight into the analysis and design of real-world linear systems.

Previous SPTK Post: Convolution       Next SPTK Post: The Moving-Average Filter

We continue with our non-CSP signal-processing tool-kit series with this post on ideal filtering. Ideal filters are those filters with transfer functions that are rectangular, step-function-like, or combinations of rectangles and step functions.

[Jump straight to ‘Significance of Ideal Filters in CSP’ below.]

A filter is a designed linear time-invariant system. The goals of filtering are:

  1. Remove unwanted signal and/or noise energy while preserving wanted signal energy, or
  2. Restore a signal to a prescribed condition.

Filters are categorized in terms of their frequency response function. We refer to a frequency component with frequency f_0 as being passed by the filter if the corresponding value of the transfer function H(f_0) is non-zero, and refer to it as being rejected otherwise. Those filters that pass low frequencies but do not pass high frequencies are lowpass filters, those that pass high frequencies but do not pass low frequencies are highpass filters, and those that pass frequencies only in some middle range are bandpass filters. Less common are those that pass all frequencies except those in some narrow range, which are notch filters (or bandstop filters), and those that pass all frequencies but adjust their phases, which are called all-pass filters.

Ideal Lowpass Filters

A lowpass filter is one that passes only low-frequency sine-wave components in the input signal, and rejects or significantly attenuates all others. This is most easily viewed in the frequency domain through the filter’s frequency response (transfer function), as shown in Figure 1.

Figure 1. (a) Ideal lowpass filter (LPF) transfer function, and (b) a more realistic lowpass filter transfer function. The ideal filter has non-physical infinitely sharp transitions between the passband ([-f_c, f_c]) and the stop bands ((-\infty, -f_c] and [f_c, \infty)).

For the ideal LPF, the frequency interval [-f_c, f_c] is the passband, the frequency f_c is called the cutoff frequency, the height of the passband is the gain G, the passband width, or just bandwidth, of the filter is 2f_c, and the two frequency intervals (-\infty, -f_c] and [f_c, \infty) are the stopbands.

Let’s also look at the impulse-response function for the ideal LPF, which is simply the inverse transform of the frequency-response function,

\displaystyle h_{LPF}(t) = {\cal F}^{-1} \left[ H_{LPF}(f)\right] \hfill (1)

\displaystyle = {\cal F}^{-1} \left[ G\, \mbox{\rm rect}(f/2f_c) \right] \hfill (2)

\displaystyle = 2f_cG \, \mbox{\rm sinc}(2f_ct). \hfill (3)

The impulse-response function for the ideal LPF is shown in Figure 2. Recall that a causal linear time-invariant (LTI) system (filter) must have an impulse response that is zero for t < 0, which is clearly not true of the function in Figure 2. Therefore, the ideal LPF is not realizable, which means constructable or buildable. You can’t actually make it in the real world, because part of the response to the input impulse at t=0 occurs before the impulse is applied to the system.
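Equations (1)-(3) are easy to check numerically. The sketch below (with arbitrary grid sizes and cutoff) samples the rectangular transfer function on a dense frequency grid, inverse-transforms it with the FFT, and compares the result to 2f_cG\,\mbox{\rm sinc}(2f_ct).

```python
import numpy as np

# Numerical sketch of Eqs. (1)-(3): inverse-transform samples of the
# rectangular LPF transfer function and compare to 2*fc*G*sinc(2*fc*t).
# The gain, cutoff, and grid sizes here are arbitrary choices.
G, fc = 1.0, 10.0                      # passband gain and cutoff frequency (Hz)
fs, N = 1024.0, 4096                   # time-sampling rate (Hz) and grid size
f = np.fft.fftfreq(N, 1.0 / fs)        # frequency grid, spacing fs/N = 0.25 Hz
H = np.where(np.abs(f) < fc, G, 0.0)   # ideal rectangular LPF transfer function
H[np.isclose(np.abs(f), fc)] = G / 2   # half-weight the band edges (trapezoid rule)
h_num = np.fft.ifft(H).real * fs       # scaled IFFT approximates the inverse FT integral
t = np.arange(N) / fs
t[N // 2:] -= N / fs                   # wrap the second half to negative times
h_ref = 2 * fc * G * np.sinc(2 * fc * t)   # np.sinc(x) = sin(pi*x)/(pi*x)
err = np.max(np.abs(h_num - h_ref))    # small except for sinc-tail aliasing
```

The peak value of the numerical impulse response lands at 2f_cG, as (3) predicts, and the remaining discrepancy comes from the slowly decaying sinc tails wrapping around the finite grid.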

But suppose we are willing to tolerate a delay between the time we apply an input signal to the ideal LPF and the time we begin to see an output–perhaps a large delay.

Figure 2. The impulse-response function for an ideal lowpass filter. The shape of the impulse response is a \mbox{\rm sinc} function, which is the Fourier transform of the rectangular-shaped transfer function in Figure 1 (a).

Then we can simply delay the impulse response so that most of its energy lies to the right of t = 0, as shown in Figure 3. The impulse-response function for the delayed ideal LPF is then

\displaystyle h^\prime_{LPF}(t) = h_{LPF}(t - T_d), \hfill (4)

which is illustrated in Figure 3 for T_d = 1/f_c.

Figure 3. Impulse-response function for a time-delayed ideal lowpass filter. This is simply the impulse response in Figure 2 delayed in time so that most of its energy lies to the right of t=0.

For the delayed-output filter, the energy in the impulse response for t<0 can be made as small as desired by choosing T_d large enough. Therefore, a noncausal ideal LPF can be converted to a causal one, approximately, by the use of a large delay. What is the transfer function (frequency response) for the delayed-output impulse response?

\displaystyle H^\prime_{LPF}(f) = \int_{-\infty}^\infty h^\prime_{LPF}(t) e^{-i2\pi f t}\, dt \hfill (5)

\displaystyle = \int_{-\infty}^\infty h_{LPF}(t-T_d) e^{-i2\pi f t}\, dt \hfill (6)

\displaystyle = e^{-i2\pi f T_d} \int_{-\infty}^\infty h_{LPF} (t) e^{-i2\pi f t} \, dt \hfill (7)

\displaystyle = e^{-i 2 \pi f T_d} H_{LPF}(f). \hfill (8)

So we see that the magnitudes of H_{LPF}(f) and H^\prime_{LPF}(f) are the same, but their phases differ. By construction, H_{LPF}(f) is real, and so it has a phase of zero. Therefore, the phase of the delayed-output system is

\displaystyle \angle H^\prime_{LPF}(f) = \angle e^{-i 2 \pi f T_d} = -2\pi f T_d \hfill (9)

which is a linear function of frequency f. Therefore, linear transfer-function phase is equivalent to a constant delay in time between the input and the output of the system.
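This equivalence is also easy to verify numerically: the sketch below (delay and frequency grid are arbitrary) fits a line to the unwrapped phase of e^{-i2\pi f T_d} and recovers the slope -2\pi T_d from (9).

```python
import numpy as np

# Sketch check of Eq. (9): a time delay of T_d multiplies the transfer
# function by exp(-i*2*pi*f*T_d), whose phase is linear in f with slope
# -2*pi*T_d. The delay and frequency grid below are arbitrary.
T_d = 0.01                                   # delay (s)
f = np.linspace(-50.0, 50.0, 101)            # frequency grid (Hz)
H_delay = np.exp(-1j * 2 * np.pi * f * T_d)  # phase factor of the delayed filter
phase = np.unwrap(np.angle(H_delay))         # unwrapped phase vs frequency
slope = np.polyfit(f, phase, 1)[0]           # should equal -2*pi*T_d
```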

Another way to bridge the gap between ideal and practical filters is to take a cue from the ideal LPF impulse response: it is largely concentrated near t=0. So let’s create a causal impulse response function with all of its energy near t=0. Does the resulting filter correspond to a lowpass type of frequency response? The impulse response is shown in Figure 4. To find its frequency response, we need to Fourier transform it.

Figure 4. The impulse-response function for an ideal moving-average filter, which is a kind of lowpass filter. Because the impulse response is zero for t<0, this filter is practical and can be built, unlike the non-causal filter represented by Figure 2.
Figure 5. The magnitude of the transfer function for the moving-average filter shown in Figure 4.

\displaystyle H(f) = \int_{-\infty}^\infty h(t) e^{-i 2 \pi f t}\, dt \hfill (10)

\displaystyle = KT_c\, \mbox{\rm sinc}(fT_c) e^{-i\pi f T_c}. \hfill (11)

Figure 6. A step-function input signal u(v).
Figure 7. Two different time-reversed and shifted step-function input signals.

Looking at the magnitude of H(f) in (11) (Figure 5), we see that the answer is yes, this is indeed a lowpass type of filter. This filter is called the moving-average (MA) filter because it can be interpreted as integrating the previous T_c seconds' worth of the input-signal values and multiplying that result by K, which can be set to 1/T_c to form a true average,

Figure 8. The moving-average impulse-response function as a function of the dummy variable v.

\displaystyle y(t) = \frac{1}{T_c} \int_{t-T_c}^{t} x(v) \, dv. \hfill (12)
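A discrete-time sketch of (12) is a length-M boxcar whose taps sum to one (the sampling rate, window length, and input below are arbitrary choices); the moving average of a constant input is that same constant once the window is fully inside the data.

```python
import numpy as np

# Discrete-time sketch of the moving average in Eq. (12): each output is the
# average of the most recent T_c seconds of input. Rates/lengths are arbitrary.
fs = 100.0                    # sampling rate (Hz)
T_c = 0.2                     # averaging window (s)
M = int(T_c * fs)             # number of taps spanning T_c (20 here)
h = np.ones(M) / M            # K = 1/T_c, with the sample spacing folded in
x = np.ones(100)              # constant input; its moving average is also 1
y = np.convolve(x, h)         # filter output ('full' convolution)
steady = y[M - 1 : len(x)]    # outputs where the window lies fully inside x
```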

What is the step response for the MA filter? The step-function input signal is shown in Figure 6. We need to convolve it with the MA impulse response, which we can do with the aid of the plots shown in Figures 7 and 8,

\displaystyle y(t) = \int_{-\infty}^\infty h(v) u(t-v)\, dv \hfill (13)

\displaystyle = \int_{-\infty}^{t} h(v)\, dv \hfill (14)

\displaystyle = \int_{0}^{t} h(v)\, dv. \hfill (15)

Figure 9. The unit-step response for the moving-average filter with impulse-response function shown in Figure 8.

For t<0, we have y(t) = 0. For t \ge 0 and t \leq T_c we have

\displaystyle y(t) = \int_0^t K \, dv = Kt, \hfill (16)

and for t > T_c we have

\displaystyle y(t) = \int_0^{T_c} K \, dv = KT_c. \hfill (17) 

The full step response is shown in Figure 9. Try to interpret it in terms of a running, or ‘moving,’ average operation on the step-function input.
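The ramp-then-hold shape of Figure 9 also falls out of a discrete-time convolution; the sketch below (sample counts and gain are arbitrary) reproduces the ramp Kt of (16) followed by the plateau KT_c of (17).

```python
import numpy as np

# Sketch of the step-response calculation in Eqs. (13)-(17): convolving a
# unit step with a length-M (T_c-second) rectangular MA impulse response
# gives a linear ramp, then a constant. Values here are arbitrary.
M, K = 20, 1.0                     # window length (samples) and gain K
h = K * np.ones(M)                 # h(t) = K on [0, T_c]
u = np.ones(50)                    # unit-step input (samples for t >= 0)
y = np.convolve(u, h)[:len(u)]     # step response
ramp = y[:M]                       # should climb linearly: K, 2K, ..., M*K
flat = y[M:]                       # should hold at K*M (i.e., K*T_c)
```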

Ideal Highpass Filters

A highpass filter (HPF) is one that passes (allows passage through the system with non-zero amplitude) only sine waves with relatively large (high) input frequencies and rejects (assigns a zero or very small amplitude to) all other frequencies. In the frequency domain, we have the transfer function shown in Figure 10(a), and we see that it is the complement of the ideal LPF transfer function: where one is zero, the other is not, where one is non-zero, the other is zero.

The HPF cutoff frequency is f_c, the passband gain is G, the filter bandwidth (width of the passband) is infinite, and the stopband is [-f_c, f_c].

The HPF transfer function can be written in terms of the LPF transfer function as

\displaystyle H_{HPF}(f) = G - H_{LPF}(f) \hfill (18)

Figure 10. (a) Ideal highpass filter (HPF) transfer function, and (b) a more realistic highpass filter transfer function. The ideal filter has non-physical infinitely sharp transitions between the stopband ([-f_c, f_c]) and the pass bands ((-\infty, -f_c] and [f_c, \infty)).

We can use (18) to easily derive the ideal HPF impulse-response function,

\displaystyle h_{HPF}(t) = {\cal F}^{-1} \left[ H_{HPF}(f) \right] = {\cal F}^{-1} \left[ G - H_{LPF}(f) \right] \hfill (19)

\displaystyle = G\delta(t) - 2Gf_c\, \mbox{\rm sinc}(2f_ct), \hfill (20)

which is shown in Figure 11.

Figure 11. The impulse-response function for the ideal highpass filter with transfer function shown in Figure 10. Note the presence of an impulse function at t=0 and the inverted lowpass filter impulse-response function.

Not surprisingly, the ideal HPF is also not causal. We could introduce a delay to create a good approximation to the ideal HPF, as we did for the ideal LPF.

Ideal Bandpass Filters

A bandpass filter (BPF) is one that passes only those input frequency components that fall inside a specific band of frequencies, such as [f_1, f_2] = [f_0-f_c, f_0+f_c]. An ideal BPF has rectangular passbands, as shown in Figure 12(a).

The transfer function of an ideal BPF can be expressed in terms of just a single ideal LPF transfer function or by combining an ideal LPF transfer function with an ideal HPF transfer function. Let’s go through those two options next.

Figure 12. (a) Ideal bandpass filter (BPF) transfer function, and (b) a more realistic bandpass filter transfer function. The ideal filter has non-physical infinitely sharp transitions between the passbands ([-f_0-f_c, -f_0+f_c] and [f_0-f_c, f_0+f_c]) and the stopbands ((-\infty, -f_0-f_c], [-f_0+f_c, f_0-f_c], and [f_0+f_c, \infty)).

Ideal BPF in Terms of Ideal LPF

Recall that the Fourier transform of the ‘modulated’ signal x(t)e^{i2\pi f_0 t} is the shifted transform of the signal x(t) itself

\displaystyle x(t) \Longleftrightarrow X(f) \hfill (21)

\displaystyle \Rightarrow x(t)e^{i2\pi f_0 t} \Longleftrightarrow X(f-f_0). \hfill (22)

We can form the ideal BPF transfer function H_{BPF}(f) by adding together two frequency-shifted ideal LPF transfer functions, as shown in Figure 13,

\displaystyle H_{BPF}(f) = H_{LPF}(f+f_0) + H_{LPF}(f-f_0). \hfill (23)

The impulse response is therefore given by

\displaystyle h_{BPF}(t) = h_{LPF}(t)e^{-i2\pi f_0 t} + h_{LPF}(t) e^{i 2 \pi f_0 t} \hfill (24)

\displaystyle = h_{LPF}(t) \left[ e^{-i 2 \pi f_0 t} + e^{i 2 \pi f_0 t} \right] \hfill (25)

\displaystyle = 2h_{LPF}(t) \cos(2\pi f_0 t). \hfill (26)

Figure 13. Illustration of the creation of the ideal-BPF passbands from a frequency-shifted ideal lowpass filter passband.
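The modulation route to the ideal BPF can be checked numerically. This sketch (sampling rate, cutoff, and center frequency are arbitrary) transforms the modulated impulse response of (26) and confirms that the result is flat across the two bands of (23) and near zero well outside them.

```python
import numpy as np

# Sketch check of Eq. (26): modulating the ideal-LPF impulse response by
# 2*cos(2*pi*f0*t) shifts the rectangular transfer function to the two bands
# centered at +/- f0, per Eq. (23). Parameter values below are arbitrary.
fs, N = 1024.0, 8192
fc, f0, G = 5.0, 50.0, 1.0
t = np.arange(N) / fs
t[N // 2:] -= N / fs                              # wrap to a symmetric time grid
h_lpf = 2 * fc * G * np.sinc(2 * fc * t)          # Eq. (3)
h_bpf = 2 * h_lpf * np.cos(2 * np.pi * f0 * t)    # Eq. (26)
f = np.fft.fftfreq(N, 1.0 / fs)
H = np.fft.fft(h_bpf) / fs                        # approximate continuous FT
in_band = np.abs(np.abs(f) - f0) < 0.8 * fc       # interior of the two passbands
out_band = np.abs(np.abs(f) - f0) > 2 * fc        # well into the stopbands
gain_in = np.mean(np.abs(H[in_band]))             # should be close to G
leak_out = np.max(np.abs(H[out_band]))            # should be close to zero
```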

Ideal BPF in Terms of an Ideal LPF and an Ideal HPF

To form the ideal BPF, we multiply the transfer functions for a particular LPF and HPF, as shown in Figure 14. The cutoff frequencies for the two component filters are such that when the transfer functions are multiplied, only two frequency intervals of the result correspond to nonzero values of frequency response. (Here the LPF has gain G and the HPF has unity gain, so that the product has passband gain G.) The frequency response is simply

\displaystyle H_{BPF}(f) = H_1(f) H_2(f). \hfill (27)

Using the convolution theorem, we can express the corresponding impulse response as the convolution of the impulse responses for the two component filters,

\displaystyle h_{BPF}(t) = h_1(t) \otimes h_2(t) \hfill (28)

\displaystyle = 2(f_0+f_c)G \, \mbox{\rm sinc}(2(f_0+f_c)t) \otimes \left[ \delta(t) - 2(f_0-f_c)\, \mbox{\rm sinc}(2(f_0-f_c)t) \right] \hfill (29)

\displaystyle = 4f_cG\, \mbox{\rm sinc}(2f_ct) \cos(2\pi f_0 t), \hfill (30)

where we used the previous results instead of directly evaluating the formula (thankfully!).

Figure 14. Illustration of the creation of the ideal bandpass filter transfer function through the multiplication of an ideal lowpass transfer function by an ideal highpass transfer function: H_{BPF}(f) = H_1(f)H_2(f).

Ideal Bandstop Filters

An ideal bandstop filter (BSF) is one that rejects frequency components in some band of frequencies centered at f_0 with width 2f_c, and passes all others–the complement of the BPF. When the stopband width 2f_c is very small compared to the center frequency f_0, this filter is also called an ideal notch filter.

Figure 15. Ideal bandstop filter (BSF) transfer function. The ideal filter has non-physical infinitely sharp transitions between the stopbands ([-f_0-f_c, -f_0+f_c] and [f_0-f_c, f_0+f_c]) and the passbands ((-\infty, -f_0-f_c], [-f_0+f_c, f_0-f_c], and [f_0+f_c, \infty)).

Using similar analysis methods as for the previous ideal filter types, we can write expressions for the ideal BSF transfer function and impulse response as

\displaystyle H_{BSF}(f) = G - H_{BPF}(f) \hfill (31)

\displaystyle h_{BSF}(t) = G\delta(t) - 2h_{LPF}(t)\cos(2 \pi f_0 t). \hfill (32)

Ideal Allpass Filters

Finally, the ideal allpass filter (APF) passes all frequency components in the input with a common change to their amplitudes and no change to their phases, as shown in Figure 16.

\displaystyle H_{APF}(f) = G. \hfill (33)

Figure 16. The transfer function for the ideal allpass filter.

The impulse response is just an impulse,

\displaystyle h_{APF}(t) = {\cal F}^{-1} \left[ H_{APF}(f) \right] \hfill (34)

\displaystyle = {\cal F}^{-1} \left[ G \right] = G \delta (t). \hfill (35)

A slightly more general APF includes a delay, so that the impulse response function is

\displaystyle h_{APF}(t) = G \delta(t-D).

Representation in Terms of a Prototype LPF

To drive home the relationships between the various kinds of ideal filters we’ve defined and analyzed in this post, let’s unify them by representing them in terms of a single frequency-response function: a prototype lowpass filter. The prototype LPF is shown in Figure 17. It is merely a rectangle with unit height and unit width. Let’s see how we can use this single function to represent all the different ideal filters (except the APF).

Figure 17. A prototype ideal lowpass filter transfer function. The passband has width 1 Hz and the passband gain is unity.

An ideal LPF with cutoff frequency f_c and unity gain is a frequency-scaled version of our prototype filter L(f),

\displaystyle H_L(f) = L(f/2f_c) = \mbox{\rm rect}(f/2f_c). \hfill (36)

If the gain of the LPF is G, we simply multiply the frequency-scaled prototype by G,

\displaystyle H_L(f) = G L(f/2f_c). \hfill (37)

An ideal highpass filter with cutoff f_c is just a constant minus the ideal lowpass filter,

\displaystyle H_H(f) = G - H_L(f) = G [1 - L(f/2f_c)]. \hfill (38)

Similarly, any ideal BPF is expressed as

\displaystyle H_{BP}(f) = H_L(f-f_0) + H_L(f+f_0) \hfill (39)

\displaystyle = G \left[ L\left(\frac{f-f_0}{2f_c}\right) + L\left(\frac{f+f_0}{2f_c}\right) \right], \hfill (40)

and any ideal bandstop filter is

\displaystyle H_{BS} (f) = G - H_{BP}(f) \hfill (41)

\displaystyle = G \left[ 1 - L\left(\frac{f-f_0}{2f_c}\right) - L\left(\frac{f+f_0}{2f_c}\right) \right]. \hfill (42)

Let B = 2f_c, which is the width of the LPF passband, the width of each passband for the BPF, and the width of the stopband for the HPF. Then we can construct the following table of transfer functions in terms of L(f).

Filter Name | Transfer Function
Ideal LPF   | H_L(f) = G L(f/B)
Ideal HPF   | H_H(f) = G[1 - L(f/B)]
Ideal BPF   | H_{BP}(f) = G[L((f-f_0)/B) + L((f+f_0)/B)]
Ideal BSF   | H_{BS}(f) = G[1 - L((f-f_0)/B) - L((f+f_0)/B)]
Table 1. Ideal filters expressed as a function of a single prototype lowpass filter frequency response L(f) = \mbox{\rm rect}(f).
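Table 1 translates directly into code. The sketch below (parameter values are illustrative) builds all four ideal responses from a single prototype L(f) = rect(f) and checks the complementarity relations H_L + H_H = G and H_{BP} + H_{BS} = G.

```python
import numpy as np

# Sketch of Table 1: every ideal filter built from one prototype lowpass
# response L(f) = rect(f). The parameter choices below are illustrative.
def L(f):
    """Prototype ideal LPF: rect(f), i.e., 1 for |f| < 1/2 and 0 elsewhere."""
    return np.where(np.abs(f) < 0.5, 1.0, 0.0)

def H_LP(f, G, B):      return G * L(f / B)                                  # ideal LPF
def H_HP(f, G, B):      return G * (1.0 - L(f / B))                          # ideal HPF
def H_BP(f, G, B, f0):  return G * (L((f - f0) / B) + L((f + f0) / B))       # ideal BPF
def H_BS(f, G, B, f0):  return G * (1.0 - L((f - f0) / B) - L((f + f0) / B)) # ideal BSF

f = np.linspace(-100.0, 100.0, 2001)   # evaluation grid (Hz)
G, B, f0 = 2.0, 10.0, 40.0             # gain, bandwidth, and center frequency
```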

Examples

Clock Extraction

Suppose we have a periodic waveform s(t), with period T, and we want to extract from it a sinusoidal clocking signal for use in periodic switching of some piece of electronic equipment. One way to extract the desired sine wave is to use an ideal filter. Let’s see how.

The generic periodic signal is shown in Figure 18.

Figure 18. Two periods of a generic periodic signal with period T.

First we want to examine the periodic signal in the frequency domain. This will help us visualize the transfer function of an ideal filter that will extract the desired spectral component of s(t). That is, we’ll be able to see which spectral components we want to retain (pass through our filter) and which we want to discard (reject by our filter). Keep in mind that we can design a complete ideal filter by simply drawing rectangles and step functions.

Any periodic signal can be represented as a Fourier series, so we can immediately write an expression for s(t),

\displaystyle s(t) = \sum_{k=-\infty}^\infty c_k e^{i 2 \pi (k/T) t}, \hfill (43)

from which it is easy to compute the Fourier transform of s(t), since we know the Fourier transform of each of the complex sine-wave components in (43) is an impulse,

\displaystyle S(f) = \sum_{-\infty}^\infty c_k {\cal F} \left[ e^{i 2 \pi (k/T) t} \right] \hfill (44)

\displaystyle = \sum_{-\infty}^\infty c_k \delta(f - k/T). \hfill (45)

A graph of the Fourier transform (45) is shown in Figure 19. The Fourier-series coefficients c_k are generally complex-valued, but in the figure we represent them as real numbers, so sometimes the impulses are drawn downwards.

Figure 19. A schematic representation of the Fourier transform (45), which is the transform of the Fourier-series representation of the generic periodic waveform in Figure 18. Each bold arrow represents an impulse function.

The desired sinusoidal component has frequency equal to the fundamental frequency in the Fourier series, f_0 = 1/T. To get a real-valued sine-wave at the output of our filter, we’ll need to pass the components with frequencies \pm 1/T. Our ideal filter must not pass any other frequency components, which places a constraint on its passband width: the neighboring components at 0 and \pm 2/T are each 1/T away from the passband centers at \pm 1/T, so each passband must be narrower than 2/T. Figure 20 illustrates the placement of the desired ideal bandpass filter transfer function.

Figure 20. Illustration of an ideal bandpass filter transfer function that is capable of selecting only the desired sine-wave component when the input is the periodic signal shown in Figure 18.

The time-domain output of the filter is the convolution of the input with the impulse response, but it is preferable to work with the frequency-domain expressions at first. In that case, the output of the filter is the multiplication of the Fourier transform of the input with the transfer function,

\displaystyle Y(f) = S(f) H(f) \hfill (46)

\displaystyle = \left[ \sum_{k=-\infty}^\infty c_k \delta(f-k/T) \right] \left[ \mbox{\rm rect}\left(\frac{f-f_0}{f_0}\right) + \mbox{\rm rect}\left(\frac{f+f_0}{f_0}\right) \right] \hfill (47)

\displaystyle = c_1 \delta(f-1/T) + c_{-1}\delta(f+1/T) \hfill (48)

\displaystyle \Rightarrow Y(f) = c_1\delta(f-f_0) + c_{-1}\delta(f+f_0). \hfill (49)

Assuming s(t) is real-valued, we know that c_1 = c_{-1}^*. Inverse Fourier transforming Y(f) leads to

\displaystyle y(t) = {\cal F}^{-1} \left[ Y(f) \right] \hfill (50)

\displaystyle = \int_{-\infty}^\infty c_1\delta(f-f_0) e^{i 2 \pi f t}\, df + \int_{-\infty}^\infty c_{-1}\delta(f+f_0) e^{i2 \pi f t}\, df \hfill (51)

\displaystyle = c_1 e^{i 2\pi f_0 t} + c_{-1}e^{-i2\pi f_0 t} \hfill (52)

\displaystyle = c_1 e^{i 2\pi f_0 t} + c_1^*e^{-i2\pi f_0 t} \hfill (53)

\displaystyle = 2 |c_1| \cos(2 \pi f_0 t + \phi_1), \hfill (54)

where \angle c_1 = \phi_1.
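The whole clock-extraction chain, (43) through (54), can be sketched numerically. Here the ideal BPF is implemented by zeroing FFT bins, and the waveform, period, and sampling rate are arbitrary choices; the filtered output should match 2|c_1|\cos(2\pi f_0 t + \phi_1) built directly from the extracted coefficient c_1.

```python
import numpy as np

# Sketch of the clock-extraction example: pass a periodic signal through an
# ideal BPF (implemented by zeroing FFT bins) so only the k = +/-1 Fourier
# components at f0 = 1/T survive. Waveform and rates are arbitrary choices.
fs, T = 1000.0, 0.05                    # sample rate (Hz) and period (s); f0 = 20 Hz
N = 2000                                # exactly 40 periods, so f0 lands on an FFT bin
t = np.arange(N) / fs
s = np.sign(np.sin(2 * np.pi * t / T)) + 0.3 * np.cos(4 * np.pi * t / T)  # periodic, harmonic-rich
S = np.fft.fft(s)
f = np.fft.fftfreq(N, 1.0 / fs)
H = (np.abs(np.abs(f) - 1.0 / T) < 1e-6).astype(float)   # ideal BPF passing only +/- f0
y = np.fft.ifft(S * H).real             # extracted clock signal
c1 = S[np.argmin(np.abs(f - 1.0 / T))] / N               # Fourier coefficient c_1
y_ref = 2 * np.abs(c1) * np.cos(2 * np.pi * t / T + np.angle(c1))  # Eq. (54)
```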

Sonar Filtering

Suppose we want to receive and process a reflected sonar waveform. One problem is that the processing equipment and the environment add noise and potentially other signals to the desired signal. Our range and velocity estimates, obtained from the received waveform, are degraded in accuracy if excess noise is processed along with the reflected signal. So we would like to filter the received data to reject as much of the noise and interference as possible while maintaining the integrity and energy of the desired sonar waveform.

The situation is illustrated in Figure 21.

Figure 21. Illustration of a simplified sonar processing situation. From top to bottom: the Fourier transform of the received data, the Fourier transform of the noise-free sonar signal, and the Fourier transform of the noise.

\displaystyle x(t) = s(t) + n(t) \hfill (55)

\displaystyle X(f) \Longleftrightarrow x(t) \hfill (56)

\displaystyle S(f) \Longleftrightarrow s(t) \hfill (57)

\displaystyle N(f) \Longleftrightarrow n(t) \hfill (58)

\displaystyle X(f) = S(f) + N(f). \hfill (59)

An ideal bandpass filter with center frequency f_0 and bandwidth B will pass all of the desired signal energy, and will pass only that portion of the noise and interference that lies within the signal’s band of frequencies.

Figure 22. An ideal bandpass filter that can extract the sonar signal s(t) from the received data x(t) in Figure 21.

Equalization and Distortion

When a signal passes through an undesired linear time-invariant system that adversely affects the signal properties, from the point of view of the intended use of the signal, we say it has experienced linear distortion, or just distortion. In general, distortion occurs when the system output is not simply a scaled and time-shifted version of the input. That is, distortionless filtering corresponds to

\displaystyle y(t) = K x(t-D) = x(t) \otimes K\delta(t-D). \hfill (60)

When (60) is violated, distortion is present, and we can consider designing a second linear time-invariant system that undoes the distortion. This operation is commonly called equalization. The situation is shown in Figure 23, which shows the serial connection of two LTI systems: one that represents the distortion h_d(t) and a second that represents the equalizer h_e(t).

Figure 23. Linear distortion is common in audio signal processing and in RF communication systems. An equalizer is used to attempt to undo the deleterious effects of the linear distortion.

How should we design the equalizer, which is represented in the time domain by its impulse response h_e(t) and in the frequency domain by the transfer function H_e(f) = {\cal F} [ h_e(t) ]?

We know that the serial connection of the two LTI systems is equivalent to an LTI system that has transfer function equal to the product of the two individual transfer functions,

\displaystyle H_{equiv}(f) = H_d(f)H_e(f). \hfill (61)

If the equivalent system is an allpass filter with linear phase, then the distortion is removed, because the overall effect is simply a scaling and a delay. For simplicity, let’s assume that the equivalent system has gain G=1 and zero delay, so that it has impulse response h_{equiv}(t) = \delta(t) and frequency response H_{equiv}(f) = 1. Then our equalizer design equation is

\displaystyle H_d(f) H_e(f) = 1, \hfill (62)

which appears to be much easier to solve than the corresponding time-domain expression

\displaystyle h_d(t) \otimes h_e(t) = \delta(t). \hfill (63)

We have

\displaystyle H_e(f) = \frac{1}{H_d(f)}, \hfill (64)

provided that H_d(f) \neq 0.

A practical approach to implementing (64) is to approximate H_e(f) by a set of ideal filters spaced evenly throughout the band of frequencies occupied by a typical input signal. Let’s illustrate this with pictures.

In Figure 24 we show the magnitude of H_d(f) for some generic equalization-problem setting.

Figure 24. Frequency-response magnitudes for a distorting filter H_d(f) and the ideal equalizing filter H_e(f). Here H_e(f) = 1/H_d(f).

We want to approximate H_e(f) by a piecewise-constant function. Each piece of the function will represent a different ideal filter. This approximation is illustrated in Figure 25.

Figure 25. Piecewise-constant approximation to the smoothly varying ideal equalizer transfer function. The equalization corresponding to f_1 can be done using an ideal LPF; ideal BPFs correspond to the remaining center frequencies f_j. Each filter has a passband width of W Hz.

Let each frequency interval have width W and center frequency f_j = jF/N + W/2, where N is the number of elements in the piecewise-constant approximation–the number of subbands that we will adjust.

An individual filter can be expressed mathematically as

\displaystyle H_j(f) = |H_e(f_j)| \left[ \mbox{\rm rect}((f-f_j)/W) + \mbox{\rm rect}((f+f_j)/W) \right]. \hfill (65)

The phase is typically important as well, and we can simply specify the phase of the ideal filter as the negative of the phase of the distorting filter at the center point of the filter’s passband. This phase-sensitive version of the equalizing filter is given by

\displaystyle H_j(f) = H_d^{-1}(f_j) \left[ \mbox{\rm rect}((f-f_j)/W) + \mbox{\rm rect} ((f+f_j)/W) \right]. \hfill (66)

Can this approximation lead to adequate equalization? How small does W = F/N need to be? Equivalently, how large does N need to be?
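One way to get a feel for these questions is a numerical sketch. The distorting response H_d below is a made-up smooth function, and the band edge F and subband count N are hypothetical values; we approximate H_e = 1/H_d by its value at each subband center frequency f_j and measure how far the product H_d(f)H_e(f) strays from the ideal value of 1.

```python
import numpy as np

# Sketch of the piecewise-constant equalizer of Eq. (66): within each width-W
# subband centered at f_j, use the single value 1/H_d(f_j). The distorting
# response H_d and the band/subband values below are made-up examples.
def H_d(f):
    """Hypothetical smooth distorting transfer function (nonzero everywhere)."""
    return 1.0 / (1.0 + 1j * f / 50.0)

F, N_sub = 100.0, 25                       # band edge (Hz) and number of subbands
W = F / N_sub                              # subband width W = F/N
f = np.linspace(0.0, F, 1001, endpoint=False)
j = np.floor(f / W).astype(int)            # subband index for each frequency
f_centers = j * W + W / 2                  # center frequencies f_j = j*W + W/2
H_e_pw = 1.0 / H_d(f_centers)              # piecewise-constant equalizer values
residual = np.abs(H_d(f) * H_e_pw - 1.0)   # deviation from the ideal product of 1
```

As N grows (W shrinks), the residual shrinks roughly in proportion to W for a smooth H_d, which is one way to answer “how large does N need to be.”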

When the equalization is done by a human operator for audio signals, the equalizer is the familiar graphic equalizer found in many home and professional audio systems. In RF communication systems, the equalization must be done rapidly and automatically, and this represents a substantial area of active research and system design in electrical engineering.

Multipath Channels

When transmitting a radio wave over a wireless channel, the propagation of the wave cannot be completely controlled. It may be reflected, diffracted, refracted, etc. on the way to being received at the intended system. A common situation involves multiple reflected waves, which results in the sum of several scaled and delayed versions of the wave arriving at the intended system receiver. This is called multipath propagation, and is illustrated in Figure 26.

Figure 26. Illustration of multipath propagation.

Let the transmitted signal be denoted by x(t) and the received signal by y(t). From the geometry of the propagation situation, all reflected waves will travel over longer spatial paths than a direct wave, and so they each experience larger propagation delays relative to the direct wave. If the reflections are ideal (no amplitude or phase shift is experienced), then each propagation path results in a delayed version of the input signal x(t-d_k). A better model includes the possibility of the reflections changing the amplitude and/or phase of the signal, so that the propagation along the kth path results in a_k x(t-d_k), which we recognize as the output of an allpass filter with linear phase.

The received signal is then the sum of all of the reflections and the direct path, if any, which can be written as

\displaystyle y(t) = \sum_{k=1}^{N_r} a_k x(t-d_k). \hfill (67)

In (67), each element of the sum is called a multipath ray, or just ray, and the number of rays is N_r. Many multipath propagation channels can be modeled with N_r as small as two or three.

Notice that each ray can be expressed as the convolution of the input signal x(t) with the impulse response of an APF,

\displaystyle x(t) \otimes [a_k \delta(t-d_k) ] = a_k x(t-d_k). \hfill (68)

This implies that the total received signal y(t) is the convolution of the input signal x(t) with the sum of the individual-ray APF impulse-response functions,

\displaystyle y(t) = x(t) \otimes \left[ \sum_{k=1}^{N_r} a_k \delta(t-d_k) \right], \hfill (69)

where the multipath channel impulse response is given by

\displaystyle h(t) = \sum_{k=1}^{N_r} a_k \delta (t-d_k). \hfill (70)

A multipath channel is often characterized by the gross properties of the delays \{d_k\}. In particular, the typical (expected) range of delays is referred to as the delay spread, and plays a key role in determining whether the channel will adversely affect a particular communication signal. The delay spread is illustrated in Figure 27.

Figure 27. The multipath channel and its delay spread.

Let’s look at a two-ray multipath channel to get a feeling for multipath’s effects on a communication signal. Our simple model is

\displaystyle H_1(f) = a_1 + a_2e^{-i 2 \pi f d_2}, \hfill (71)

where we assume d_1 = 0. This model simplifies to

\displaystyle H_1(f) = a_1 [1 + b e^{-i2\pi f d_2} ], \hfill (72)

where b = a_2/a_1.

The squared magnitude of this transfer function is

\displaystyle |H_1(f)|^2 = |a_1|^2 \left| 1 + b e^{-i2\pi f d_2} \right|^2 \hfill (73)

\displaystyle = |a_1|^2 [(1 + |b|\cos(2\pi f d_2 - \phi_b))^2 + |b|^2\sin^2(2\pi f d_2 - \phi_b)]. \hfill (74)

For further simplicity, let a_1 = a_2 = 1. Then

\displaystyle |H_1(f)|^2 = (1 + \cos(2\pi f d_2))^2 + \sin^2(2\pi f d_2) \hfill (75)

\displaystyle = 2(1 + \cos(2\pi f d_2)). \hfill (76)

Let’s plot (76) for d_2 equal to 1 millisecond in Figure 28.

Figure 28. Transfer-function magnitude for a simple two-ray multipath model with delay spread of 1 ms.

The logarithmic plot of the transfer-function magnitude in Figure 28 reveals that the transfer function contains a zero–a frequency for which H_1(f) is zero–and it is also small for nearby frequencies. Since the Fourier transform of the received signal is the multiplication of the multipath-channel transfer function by the Fourier transform of the input signal, any frequency components of the input signal near this zero are severely attenuated, and the signal is distorted. These zeros in propagation-channel transfer functions are often referred to as nulls or fades.

The multipath-channel analysis allows us to identify a fundamental tradeoff between signal bandwidth and channel delay spread. Consider the signal whose Fourier transform is shown as the green line in Figure 28. This signal has no significant energy for frequencies near the fade, and so the multipath channel does not adversely affect it. However, the red signal has much wider bandwidth (its Fourier transform extends further from zero in frequency), and has significant energy in the vicinity of the fade. The red signal will therefore experience distortion from the multipath fade.

In general, to avoid multipath distortion, the input-signal bandwidth needs to be smaller than the reciprocal of the delay spread. Since narrow bandwidth corresponds to long time duration, this can be interpreted as saying that the input signal requires sending long symbols, where long is with reference to the delay spread. If the temporal length of the symbols is much longer than the delay spread, then each symbol is only affected in a minor way. If the symbols are short in time (large signal bandwidth), then the delays in the channel will cause adjacent symbols to interfere with each other, in turn causing the receiver to make symbol errors.
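The two-ray fade structure of (76) is easy to verify numerically. The sketch below locates the nulls for d_2 = 1 ms; per (76), |H_1(f)|^2 = 0 whenever f d_2 is an odd multiple of 1/2, i.e., at 500 Hz, 1500 Hz, and so on.

```python
import numpy as np

# Sketch of the two-ray model of Eqs. (71)-(76) with a1 = a2 = 1 and
# d2 = 1 ms: |H1(f)|^2 = 2*(1 + cos(2*pi*f*d2)), with fades where f*d2
# is an odd multiple of 1/2. The grid extent below is arbitrary.
d2 = 1e-3                                      # delay spread (s)
f = np.linspace(0.0, 3000.0, 30001)            # 0.1 Hz frequency grid
H2 = 2.0 * (1.0 + np.cos(2 * np.pi * f * d2))  # squared magnitude, Eq. (76)
null_freqs = f[np.isclose(H2, 0.0, atol=1e-9)] # fade locations on the grid
```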

We take a first look at non-ideal filters in this SPTK post on linear time-invariant systems that are described by simple first- and second-order differential equations.

Significance of Ideal Filters in CSP

Ideal filters are sometimes used in digital signal processing because it is easy to create the discrete-time/discrete-frequency transfer functions (rectangles!) and easy and cheap to apply them in the frequency domain using the FFT and multiplication (convolution theorem).
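That frequency-domain recipe looks like this in practice. The sketch below (tone frequencies and cutoff are arbitrary) builds the rectangular transfer function directly on the FFT grid, multiplies, and inverts, so an ideal LPF strips a high-frequency tone from a two-tone input.

```python
import numpy as np

# Sketch of FFT-based ideal filtering: multiply the FFT of the data by a
# rectangular transfer function and invert. Here an ideal LPF with a 50 Hz
# cutoff removes a 200 Hz tone while passing a 20 Hz tone (arbitrary values).
fs, N = 1024.0, 1024                 # sample rate (Hz) and block size (1 Hz bins)
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 20 * t) + np.cos(2 * np.pi * 200 * t)   # two-tone input
f = np.fft.fftfreq(N, 1.0 / fs)
H = (np.abs(f) <= 50.0).astype(float)            # ideal rectangular LPF on the FFT grid
y = np.fft.ifft(np.fft.fft(x) * H).real          # filtered output: the 20 Hz tone
```

Because both tones fall exactly on FFT bins here, the output matches the 20 Hz tone essentially to machine precision; with bin-straddling components, leakage softens the “ideal” behavior.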

The moving-average filter is typically used in the frequency-smoothing method of spectral correlation function estimation due to its simplicity, effectiveness, and low cost. We’ll look at the moving-average filter next time in the SPTK series.


Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.
