SPTK: The Matched Filter

Matchmaker, Matchmaker,
Make me a match,
Find me a find,
Catch me a catch!
–“Matchmaker” from Fiddler on the Roof

Previous SPTK Post: The Characteristic Function Next SPTK Post: Wavelets

In this post, we take a look at a special linear time-invariant system called the matched filter. It is used to detect the presence of a known signal. In practice, it is often applied to the detection of a periodically repeated known portion of a communication signal, such as a channel-estimation frame or frequency-correction burst. It is also widely used in the detection of radar pulses, where the matched-filtering operation is called pulse compression. Filtering, which implies a convolution operation, and correlation are nearly the same thing, so applying a matched filter is sometimes referred to as applying a correlator.

Jump straight to the Significance of the Matched Filter in CSP.

Introduction

In many signal-processing situations it is desired to detect the presence of a known signal in noise–is the signal there or not? The crucial word in that last sentence is known. When we say known, we mean that we can write down the exact signal over some interval of length T. If the data that we process does not contain, over some interval of length T, our signal, then the signal is not actually known. This can happen if the transmitted communication or radar signal does indeed contain the known signal, but somewhere between the transmitter and the receiver the signal becomes corrupted by linear or nonlinear effects such as multipath distortion, frequency shifting, the general Doppler effect, receiver impairments, sampling-rate error, etc.

We admit that we don’t know the starting point of the known signal-to-be-detected, and we can accept that it is subject to additive stationary noise. Moreover, we can accept an unknown complex scale factor to account for unknown signal power and unknown carrier phase.

Let s(t) denote the perfectly known signal of interest, and let x(t) denote the data to be processed by our detector. The length of s(t) is T and the length of x(t) is T_d \ge T. We can represent our data by

\displaystyle x(t) = As(t-t_d)\mbox{\rm rect}\left(\frac{t-t_d}{T}\right) + n(t), \ \ \ t \in [0, T_d], \hfill (1)

where \displaystyle \mbox{\rm rect}(t) is the unit-height and unit-width rectangle function centered at t=0. I’m writing it this way, which I’ll simplify later, to emphasize the point that the matched filter will not only be a good detector of s(t), but it will also serve to estimate t_d, thereby localizing the occurrence of our known signal in time.

All linear time-invariant filters are completely characterized by their impulse-response function, which is the output of the filter for an impulse input applied at time t=0. All other possible outputs–no matter the input–can be calculated from the input and the impulse-response function.

Denoting a generic impulse-response function with the usual symbol of h(t) and the filter input by x(t), the filter output is given by the convolution of the impulse response with the input,

\displaystyle y(t) = \int_{-\infty}^\infty x(u) h(t-u)\, du \hfill (2)

\displaystyle = \int_{-\infty}^\infty x(t-u)h(u)\, du. \hfill (3)

Also recall that the frequency response of a linear time-invariant system is the Fourier transform of the impulse response, and characterizes the ability of the filter to modify (pass undisturbed, attenuate, amplify, phase-shift) individual frequency components of the input signal. The frequency response is often called the transfer function,

\displaystyle H(f) = {\cal{F}} \left[ h(t) \right] = \int_{-\infty}^\infty h(t) e^{-i2\pi f t}\, dt. \hfill (4)
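Before deriving the matched filter, here is a minimal numerical sketch in Python (NumPy) of relationships (2)-(4); the 10-tap moving-average filter and the two tone frequencies are arbitrary illustrative choices, not anything from the post.

import numpy as np

# Minimal sketch of (2)-(4): filter a two-tone input with a simple
# (hypothetical) 10-tap moving-average filter, then check its frequency
# response. All parameter values are illustrative.
fs = 1000.0                               # sampling rate in Hz (arbitrary)
t = np.arange(0, 1.0, 1.0/fs)
x = np.cos(2*np.pi*5*t) + np.cos(2*np.pi*200*t)
h = np.ones(10) / 10.0                    # impulse response of the filter

# (2)-(3): the filter output is the convolution of the input with h(t)
y = np.convolve(x, h)[:len(x)]

# (4): the frequency response is the Fourier transform of h(t)
H = np.fft.fft(h, 4096)
f = np.fft.fftfreq(4096, 1.0/fs)
print(abs(H[np.argmin(abs(f - 5.0))]))    # near 1: the 5 Hz tone passes
print(abs(H[np.argmin(abs(f - 200.0))]))  # near 0: the 200 Hz tone is attenuated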

Derivation of the Matched Filter

Let’s consider the simplified continuous-time data model of x(t)

\displaystyle x(t) = \left\{ \begin{array}{ll} s(t) + n(t) & \mbox{\rm \ on\ } H_1 \\ n(t) & \mbox{\rm \ on\ } H_0 \end{array} \right. .\hfill (5)

The known signal is s(t) and the noise-plus-interference is n(t). The Fourier transform of the signal is S(f) and the power spectrum of the random noise/interference is S_n(f), which is assumed to never be zero within the support of S(f). This last assumption will certainly hold if there is a component to n(t) that is white noise, and we will specialize our final result to the case where n(t) is indeed nothing but white Gaussian noise.

Our question is this: What linear time-invariant system (filter) is optimal for presence detection of s(t)? We seek the impulse-response function of a filter such that when the filter output exceeds some threshold, we declare the presence of the signal in some “best” way. As with most optimality-based signal-processing approaches, there are different options for defining “optimal.” What sense of optimality will we use here?

The output of our filter is denoted by y(t),

\displaystyle y(t) = x(t) \otimes h(t) = \int_{-\infty}^\infty x(u) h(t-u)\, du. \hfill (6)

Let’s consider the output at time t=t_0 > 0, y(t_0). We want this number to have a large magnitude on hypothesis H_1 and a small one on H_0. And we want that to be true on the average. Thinking of the tractability of optimization problems that involve quadratic functions of data, we can consider forming an SNR of sorts. The mean-square value of y(t_0) (remember s(t) is non-random, but n(t) is a random process here, so that x(t) is random),

\displaystyle E\left[ |y(t_0)|^2 \right] = \left\{ \begin{array}{ll} E\left[ |h(t) \otimes (s(t)+n(t))|_{t=t_0}^2 \right], & H_1 \\ \ & \ \\ E\left[|h(t)\otimes n(t)|_{t=t_0}^2 \right], & H_0. \end{array} \right. \hfill (7)

We can define the SNR S by the ratio of the mean-square values of y(t_0) on the two hypotheses,

\displaystyle S = \frac{E\left[ \left| y(t_0) \right|_{H_1}^2 \right]}{E\left[ \left| y(t_0) \right|_{H_0}^2 \right]}. \hfill (8)

If the signal is absent (H_0), then the denominator of S favors making y(t_0) small. If the signal is present (H_1), the numerator favors making y(t_0) large. So maximizing the SNR measure S is a way to find the “best” filter for detecting the presence of s(t). Let’s start with the denominator of S.

SNR Denominator

Let’s attempt to straightforwardly evaluate the expectation. Unless otherwise specified, all integrals have limits of \pm \infty.

\displaystyle \left|y(t_0)\right|_{H_0}^2 = \left[ \vphantom{\int} y(t_0) y^*(t_0) \right]_{H_0} \hfill (9)

\displaystyle = \left[ \int h(u) n(t_0-u)\, du \left[ \int h(v) n(t_0-v)\, dv\right]^* \right] \hfill (10)

\displaystyle = \int\int h(u)h^*(v) n(t_0-u) n^*(t_0-v) \, du dv. \hfill (11)

Applying the expectation yields a relatively simple expression,

\displaystyle E\left[ \vphantom{\int} \left|y(t_0)\right|_{H_0}^2 \right] = \int \int h(u)h^*(v) E\left[ n(t_0-u) n^*(t_0-v) \right] \, du dv. \hfill (12)

The expectation is the autocorrelation of the noise process,

\displaystyle R_n(t_1, t_2) = E\left[ n(t_1)n^*(t_2) \right], \hfill (13)

and assuming that the noise is stationary of order two, the autocorrelation is a function of the difference between the two times,

\displaystyle R_n(t_1, t_2) = R_n(t_1 - t_2), \hfill (14)

so that our expectation is given by

\displaystyle E\left[ n(t_0-u) n^*(t_0-v) \right] = R_n(v-u). \hfill (15)

Our SNR denominator then becomes a bit more familiar,

\displaystyle E\left[ \vphantom{\int} \left|y(t_0)\right|_{H_0}^2 \right] = \int\int h(u)h^*(v) R_n(v-u) \, du dv. \hfill (16)

Let’s express this in the frequency domain, realizing that the noise autocorrelation R_n(\tau) is the inverse Fourier transform of the noise spectral density S_n(f),

\displaystyle R_n(\tau) = {\cal{F}}^{-1} \left [ S_n(f) \right] = \int S_n(f) e^{i 2 \pi f \tau} \, df. \hfill (17)

This basic relationship allows us to express the noise autocorrelation appearing in (16) as

\displaystyle R_n(v-u) = \int S_n(f) e^{i 2 \pi f (v-u)} \, df, \hfill (18)

and so the SNR denominator is

\displaystyle E\left[ \vphantom{\int} \left|y(t_0)\right|_{H_0}^2 \right] = \int\int h(u)h^*(v) \int S_n(f) e^{i2\pi f (v-u)} \, df \, du dv \hfill (19)

\displaystyle = \int S_n(f) \left[ \int h(u) e^{-i2\pi f u}\, du \int h^*(v) e^{i2\pi f v}\, dv \right] \, df \hfill (20)

\displaystyle = \int S_n(f) A(f) B(f) \, df. \hfill (21)

Now A(f) = H(f), and B(f) is easy to evaluate too,

\displaystyle B(f) = \left[ \int h(v) e^{-i2\pi f v} \, dv \right]^* = H^*(f). \hfill (22)

The denominator is then

\displaystyle E\left[ \vphantom{\int} \left|y(t_0)\right|_{H_0}^2 \right] = \int S_n(f) |H(f)|^2 \, df. \hfill (23)

We could have gotten there a little quicker by using results from other SPTK posts, but this was a good refresher, and a good warm-up for the numerator.

SNR Numerator

The numerator is a bit more complex than the denominator because it involves both the signal and the noise/interference. Let’s start by writing it out,

\displaystyle E\left[ \vphantom{\int} \left|y(t_0)\right|_{H_1}^2 \right] = E \left[ \int h(u) (s(t_0-u)+n(t_0-u)) \, du \int h^*(v)(s^*(t_0-v) + n^*(t_0-v)) \, dv \right]. \hfill (24)

Now the signal, being known, is non-random, and let’s assume that the mean value of the noise/interference is zero for all t,

\displaystyle E[n(t)] = 0, \ \ \ \forall t. \hfill (25)

(Are you keeping track of the assumptions we’re making about n(t)?) Armed with this information, we can expand the product in (24) and apply some of the expectations,

\displaystyle E\left[ \vphantom{\int} \left|y(t_0)\right|_{H_1}^2 \right] = \int\int h(u)h^*(v) s(t_0-u)s^*(t_0-v) \, dudv

\displaystyle + \int\int h(u)s(t_0-u)E[n^*(t_0-v)] h^*(v) \, dudv

\displaystyle + \int\int h(u) E[n(t_0-u)] h^*(v) s^*(t_0-v) \, dudv

\displaystyle + \int\int h(u)h^*(v) E[n(t_0-u)n^*(t_0-v)] \, dudv. \hfill (26)

The second and third terms are zero due to our assumption of a zero-mean n(t). The fourth term is identical to the denominator, which we know is given by (23). So our numerator work focuses on the first term, which we’ll call C.

\displaystyle C = \int h(u) s(t_0-u)\, du \int h^*(v) s^*(t_0-v) \, dv = C_1 C_2. \hfill (27)

Let’s recall Parseval’s relation, which for integrals says that the integral of a product of functions is equal to the integral of the product of their transforms,

\displaystyle \int a(t) b^*(t) \, dt = \int A(f) B^*(f) \, df. \hfill (28)

Let’s apply this to C_1 and C_2 in turn. First we need to put the integral into the proper a(t)b^*(t) form:

\displaystyle C_1 = \int h(u) s(t_0-u) \, du = \int h(u) \left[ s^*(t_0-u) \right]^* \, du, \hfill (29)

which means we need the Fourier transform of h(u) and of s^*(t_0-u). The former is easy: h(u) \Longleftrightarrow H(f). The latter requires some care,

\displaystyle {\cal F} \left[ s^*(t_0 -u) \right] = \int s^*(t_0-u)e^{-i2\pi f u} \, du = \left[ \int s(t_0-u) e^{i2\pi f u} \, du \right]^*. \hfill (30)

If we substitute v=t_0-u for the variable of integration (so that du = -dv and the limits of integration swap, which cancels the minus sign), we obtain

\displaystyle {\cal F} \left[ s^*(t_0 -u) \right] = \left[ \int s(v) e^{i2\pi f (t_0-v)} \, dv \right]^* \hfill (31)

\displaystyle = e^{-i2\pi f t_0} \left[ \int s(v) e^{-i2\pi f v}\, dv \right]^* \hfill (32)

\displaystyle = e^{-i2\pi f t_0} S^*(f). \hfill (33)

Now we have a frequency-domain expression for C_1,

\displaystyle C_1 = \int H(f) \left[ e^{-i2\pi f t_0} S^*(f) \right]^* \, df \hfill (34)

\displaystyle = \int H(f) S(f) e^{i 2 \pi f t_0} \, df. \hfill (35)

The term C_2 is closely related to C_1:

\displaystyle C_2 = \int h^*(v) s^*(t_0-v) \, dv \hfill (36)

\displaystyle = \left[ \int h(v) s(t_0-v) \, dv \right]^* = C_1^*. \hfill (37)

Finally, then, we have our term C,

\displaystyle C = C_1C_2 = \left| \int H(f) S(f) e^{i2\pi f t_0} \, df \right|^2, \hfill (38)

which means we have a full frequency-domain expression for the SNR S,

\displaystyle S = \frac{E\left[ \left| y(t_0) \right|_{H_1}^2 \right]}{E\left[ \left| y(t_0) \right|_{H_0}^2 \right]} = \frac{\left| \int H(f) S(f) e^{i2 \pi f t_0} \, df \right|^2 + \int S_n(f) |H(f)|^2 \, df}{\int S_n(f) |H(f)|^2\, df} \hfill (39)

\displaystyle = 1 + \frac{\left| \int H(f) S(f) e^{i2\pi f t_0} \, df \right|^2}{\int S_n(f) |H(f)|^2\, df}.  \hfill (40)

The Matched Filter and the Maximum SNR

The task now is to find the particular H(f) that maximizes the SNR S expressed in (40). Often this kind of maximization involving integrals is difficult and solution techniques are complex. But here we have a bit of a shortcut: the Cauchy-Schwarz inequality.

First, notice that the second term in (40) cannot be negative, so that maximizing S can be done by maximizing that term itself. Our optimization problem is then

\displaystyle \max_{H(f)} \frac{\left| \int H(f) S(f) e^{i2\pi f t_0} \, df \right|^2}{\int S_n(f) |H(f)|^2\, df}. \hfill (41)

Here is where the inequality comes to our rescue. The general form of the inequality (for integrals) is

\displaystyle \left| \int A(f) B^*(f) \, df \right|^2 \leq \int \left| A(f) \right|^2 \, df \, \int \left| B(f) \right|^2 \, df, \hfill (42)

with equality achieved when A(f) is proportional to B(f) (A(f) = cB(f)).

Let’s reexpress the quotient to make application of the inequality fruitful. In particular, reexpress the numerator so that after application of the inequality, the denominator cancels. We have the reexpression

\displaystyle \left| \int H(f) S(f) e^{i2\pi f t_0} \, df \right|^2 = \left| \int \left(H(f) \sqrt{S_n(f)}\right) \left(\frac{S(f)}{\sqrt{S_n(f)}} e^{i2\pi f t_0}\right) \, df \right|^2, \hfill (43)

provided that \displaystyle S_n(f) is never equal to zero (remember?).

Now we can identify A(f) and B(f) in (42):

\displaystyle A(f) = H(f) \sqrt{S_n(f)} \hfill (44)

\displaystyle B(f) = \left[ \frac{S(f)}{\sqrt{S_n(f)}} e^{i2\pi f t_0} \right]^*. \hfill (45)

Applying the inequality is now straightforward,

\displaystyle \left| \int H(f) S(f) e^{i2\pi f t_0} \, df \right|^2 = \left| \int \left[ H(f) \sqrt{S_n(f)} \right] \left[ \frac{S^*(f)}{\sqrt{S_n(f)}}e^{-i2\pi ft_0} \right]^* \, df \right|^2 \hfill (46)

\displaystyle \leq \int S_n(f) \left| H(f) \right|^2 \, df \int \frac{|S(f)|^2}{S_n(f)} \, df, \hfill (47)

with equality when

\displaystyle H(f) \sqrt{S_n(f)} = c \frac{S^*(f) e^{-i 2 \pi f t_0}}{\sqrt{S_n(f)}}, \hfill (48)

or

\displaystyle H(f) = c \frac{S^*(f) e^{-i 2 \pi f t_0}}{S_n(f)}, \hfill (49)

which is the matched filter.

It is easy to compute the maximum value of the SNR S as

\displaystyle S = 1 + \int \frac{|S(f)|^2}{S_n(f)} \, df. \hfill (50)

In the following, we’ll simplify our development and take the proportionality constant c to be unity. So the general matched filter–the linear time-invariant system that maximizes output SNR for the detection of a known (non-random) signal in random stationary noise–is given by

\displaystyle H(f) = \frac{S^*(f) e^{-i 2 \pi f t_0}}{S_n(f)}. \hfill (51)

The matched filter is matched to the known signal s(t) because of the numerator of H(f)–the input frequency components corresponding to those of the signal s(t) are weighted by the strengths of those components–stronger parts of s(t) count more.

The matched filter is also matched to the noise/interference through the denominator of H(f)–for those frequency components of the noise that are strong, the filter magnitude is small, and for noise frequency components that are weak, the filter magnitude is large. This effectively suppresses the stronger aspects of the noise, and is similar to the concept of spectral whitening we have encountered previously.

To find an explicit expression for the matched-filter impulse-response function h(t), we’d need to inverse Fourier transform H(f) in (51). We could try to invoke the convolution theorem, identifying the two transforms in the product as S(f) and 1/S_n(f), but we’d make little progress toward an enlightening expression. We could also try different choices for S_n(f), and that would probably be fruitful in proportion to the realism and utility of the particular choices.
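That said, here is a rough sketch in Python of how one might build the general matched filter (51) on a discrete frequency grid; the known signal, the colored-noise spectrum, and all parameter values are arbitrary illustrative choices.

import numpy as np

# Sketch of the general matched filter (51) on a discrete frequency grid.
# The known signal and the colored-noise PSD are illustrative choices.
rng = np.random.default_rng(1)
N = 4096
f = np.fft.fftfreq(N)                         # frequencies in cycles/sample

s = np.zeros(N, dtype=complex)
s[:128] = np.exp(1j*2*np.pi*rng.random(128))  # known 128-sample signal
S = np.fft.fft(s)

S_n = 1.0 + 4.0/(1.0 + (f/0.1)**2)            # colored-noise PSD, never zero
t0 = 128                                      # chosen peak time (samples)
H = np.conj(S) * np.exp(-1j*2*np.pi*f*t0) / S_n   # matched filter (51), c = 1

# Data: the known signal plus colored Gaussian noise with PSD ~ S_n(f)
w = (rng.standard_normal(N) + 1j*rng.standard_normal(N)) / np.sqrt(2.0)
noise = np.fft.ifft(np.sqrt(S_n) * np.fft.fft(w))
x = s + noise
y = np.fft.ifft(np.fft.fft(x) * H)            # filter output
print(np.argmax(np.abs(y)))                   # peak near t0 = 128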

We can also consider the common case of white Gaussian noise, where S_n(f) = N_0, a constant over all frequencies. In this case, the matched-filter transfer function is given by

\displaystyle H(f) = \frac{S^*(f) e^{-i 2 \pi f t_0}}{N_0}. \hfill (52)

In this case, we can easily perform the inverse transform,

\displaystyle h(t) = \frac{1}{N_0} \int S^*(f) e^{-i2\pi f t_0} e^{i2\pi f t} \, df \hfill (53)

\displaystyle = \frac{1}{N_0} \int S^*(f) e^{i2 \pi f (t-t_0)} \, df \hfill (54)

\displaystyle = \frac{1}{N_0} \left[ \int S(f) e^{i2\pi f (t_0-t)} \, df \right]^* \hfill (55)

\displaystyle = \frac{1}{N_0} s^*(t_0-t). \hfill (56)

The matched-filter impulse response for known signal s(t) is the conjugated and time-reversed signal itself, shifted to the chosen peak time t_0.

Recalling the expression for the filter output y(t), we have the following sequence of equations that show the connection of matched-filtering to correlation,

\displaystyle y(t) = h(t) \otimes x(t) \hfill (57)

\displaystyle = \int h(t-u) x(u)\, du \hfill (58)

\displaystyle = \frac{1}{N_0} \int s^*(t_0-t+u) x(u)\, du \hfill (59)

\displaystyle = \frac{1}{N_0} \int x(u) s^*(u -t + t_0) \, du. \hfill (60)

Neglecting the noise component of x(u) leads to

\displaystyle y(t) = \frac{1}{N_0} \int s(u) s^*(u-t+t_0) \, du, \hfill (61)

which is recognized as (proportional to) the autocorrelation of the signal s(t) evaluated at lag (delay) t-t_0, so that the output magnitude peaks at t=t_0. Therefore, matched filtering is equivalent to correlation in the white-noise case.
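Here is a minimal discrete-time sketch in Python of (56)-(61): the filter is just the conjugated, time-reversed known signal, and the output magnitude peaks at the signal’s location in the data. The signal, delay, and noise level are illustrative choices.

import numpy as np

# White-noise matched filter as a correlator: h = conj(time-reversed s),
# taking t_0 = 0 and dropping the 1/N_0 scale. Values are illustrative.
rng = np.random.default_rng(0)
T = 64
s = np.exp(1j*2*np.pi*rng.random(T))       # known unit-modulus signal
t_d = 300                                  # delay unknown to the detector
n = (rng.standard_normal(1000) + 1j*rng.standard_normal(1000)) / np.sqrt(2.0)
x = 0.5*n
x[t_d:t_d+T] += s                          # known signal buried in the data

h = np.conj(s[::-1])                       # matched-filter impulse response
y = np.convolve(x, h)                      # matched filtering = correlation
print(np.argmax(np.abs(y)) - (T - 1))      # estimate of t_d (expect 300)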

Degradation of “Known”

If your knowledge of the signal is imperfect, or if the transmitted signal undergoes a distortion or transformation that significantly changes it, then the impulse response of the matched filter will not actually be matched to the signal in the data. I stress that this can be the case even if your knowledge of the signal as it exits its transmit antenna is perfect. The propagation channel and your receiving equipment can both impart significant distortion or mathematical changes to the signal in the data you actually process. Let’s look at a couple examples.

Frequency Offset

A frequency offset (or carrier-frequency offset, CFO) occurs when a signal is received but is not perfectly downconverted to zero frequency. The matched filter assumes that you have, indeed, perfectly downconverted your signal, and so there is a mismatch. In terms of the matched-filter output (61), which neglects the noise term, we have the actual filter output of

\displaystyle y(t) = \frac{1}{N_0} \int \left[s(u)e^{i2\pi f_0 u + i \phi_0} \right] s^*(u-t+t_0) \, du \hfill (62)

\displaystyle = \frac{1}{N_0} \int s(u)s^*(u-t+t_0) e^{i2\pi f_0 u + i \phi_0} \, du, \hfill (63)

which looks like the non-conjugate cyclic autocorrelation for s(u). However, the CFO f_0 is typically small relative to the smallest cycle frequency for s(u), so that this output is simply a degraded version of the desired matched-filter output. We’ll see a numerical example below.

To compensate, you could multiply the processed data by a complex sine wave, compute the matched-filter maximum, and repeat for a set of densely spaced sine-wave frequencies. Presumably the sine-wave frequency in this search grid that is closest to the actual unknown CFO f_0 would yield a maximum near the ideal matched-filter output. However, this extra brute-force CFO-compensation search will be more computationally expensive than a simple application of the matched filter, rendering it less attractive as a detection method.
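Here is a small numerical sketch in Python of both the degradation and the brute-force compensation search just described; noise is omitted and all parameter values are illustrative.

import numpy as np

# CFO degradation and brute-force compensation search; noise omitted,
# all parameter values illustrative.
rng = np.random.default_rng(2)
T, f0 = 512, 0.0015                        # signal length; CFO in cycles/sample
u = np.arange(T)
s = np.exp(1j*2*np.pi*rng.random(T))       # known signal
x = s * np.exp(1j*2*np.pi*f0*u)            # received signal with CFO

h = np.conj(s[::-1])                       # filter matched to the ideal s
print(np.max(np.abs(np.convolve(s, h))))   # ~ T = 512 (no CFO)
print(np.max(np.abs(np.convolve(x, h))))   # much smaller (CFO mismatch)

# Search a dense grid of candidate CFOs; the best derotation restores the peak
cands = np.linspace(-0.003, 0.003, 61)
peaks = [np.max(np.abs(np.convolve(x*np.exp(-1j*2*np.pi*fc*u), h)))
         for fc in cands]
print(cands[np.argmax(peaks)])             # ~ f0 = 0.0015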

Sampling-Rate Offset

The mathematics of the matched filter are presented in continuous time in this post, but when we apply a matched filter–or any filter–we are most often working in discrete time. A sampling-rate offset happens when you ask a system to provide samples at rate f_s = 1/T_s, but the samples are actually produced at a different rate f_s / \eta .

Instead of obtaining the sampled signal x(0), x(T_s), x(2T_s), \ldots, x(kT_s) we obtain x(0), x(\eta T_s), x(2\eta T_s), \ldots, x(k\eta T_s). Suppose we warp the time axis to obtain

\displaystyle z(t) = x(\eta t), \hfill (64)

and then sample z(t) at sample rate f_s. Then we’d obtain samples z(0), z(T_s), z(2T_s), \ldots, z(kT_s) which are equal to x(0), x(\eta T_s), x(2\eta T_s), \ldots, x(k\eta T_s). Therefore, in continuous time, the sampling-rate offset is represented by a time-warped signal. Our matched-filter expression (61) becomes

\displaystyle y(t) = \frac{1}{N_0} \int s(\eta u) s^*(u-t+t_0) \, du. \hfill (65)

For values of \eta that are not close to unity, you can see that this new cross correlation will be smaller than the desired correlation in (61). A numerical example is shown below.
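A quick sketch in Python of this effect, using a polyphase resampler to emulate the time warp; the value \eta = 101/100 matches the ATSC example shown later, and the other values are illustrative.

import numpy as np
from scipy.signal import resample_poly

# Sampling-rate offset: the data is effectively s(eta*t) while the filter
# remains matched to s(t). Here eta = 101/100; values are illustrative.
rng = np.random.default_rng(3)
T = 1024
s = np.exp(1j*2*np.pi*rng.random(T))        # known signal at the nominal rate
z = resample_poly(s, 100, 101)              # emulate z(t) = s(eta*t), eta = 101/100

h = np.conj(s[::-1])                        # filter matched to the nominal s
print(np.max(np.abs(np.convolve(s, h))))    # ~ T = 1024 (no SRO)
print(np.max(np.abs(np.convolve(z, h))))    # substantially smaller (SRO)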

Linear Distortion

If the signal undergoes significant linear distortion en route to the receiver, or in the receiver, the known signal component is no longer fully known unless the impulse response for the distortive channel is also known. That is, if the channel has impulse-response function h_c(t), then the received signal is

\displaystyle s_r(t) = h_c(t) \otimes s(t). \hfill (66)

If we use the matched filter for s(t) to detect this distorted signal, the filter is not actually matched to the distorted signal, and is therefore no longer optimal. The decrease in the matched-filter peak will depend on the details of the distortion impulse-response function. A numerical example is provided below.
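A small sketch in Python of this effect, using a hypothetical two-ray multipath channel normalized to unit energy so that the peak loss comes from dispersion rather than a power change:

import numpy as np

# Linear distortion: filter matched to the clean s(t) applied to a signal
# that passed through a hypothetical two-ray channel. Values illustrative.
rng = np.random.default_rng(4)
T = 512
s = np.exp(1j*2*np.pi*rng.random(T))         # known signal
h_c = np.zeros(8, dtype=complex)
h_c[0], h_c[5] = 1.0, 0.8j                   # direct path plus delayed echo
h_c /= np.linalg.norm(h_c)                   # unit-energy channel
s_r = np.convolve(s, h_c)                    # received, distorted signal (66)

h = np.conj(s[::-1])                         # filter matched to the clean s
print(np.max(np.abs(np.convolve(s,   h))))   # ~ T = 512
print(np.max(np.abs(np.convolve(s_r, h))))   # ~ 0.78*T: reduced peak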

Examples

Let’s illustrate the power and the weaknesses of the matched-filter detector using signals of interest to us here at the CSP Blog. We start with broadcast TV.

ATSC DTV

The 8VSB/16VSB version of the North American broadcast digital television signal has significant built-in reception aids. It features a pilot tone at the left edge of its band, which accounts for about 10% of the total signal power, and also a periodically repeated known (to all DTV receivers) synchronization sequence (My Papers [16]). The sequence is 832 symbols long and repeats every 24 ms.

We can illustrate matched filtering by constructing the matched-filter impulse response from the modulated repeated sequence and then convolving that impulse response with a lengthy version of the signal, which contains several instances of the sequence. The result is shown in Figure 1, where the prominent matched-filter peaks are indeed separated by 24 ms. The action of the matched filter here can inform the receiver about exactly when each instance of the sequence appears. The receiver can then use the extracted sequences for timing recovery or channel estimation, if the propagation channel is not so severe that it erases the matched-filter peaks (see below).

Figure 1. PSD and matched-filter output for a simulated ATSC DTV signal. The signal standard specifies that an 832-symbol sequence, known to all DTV receivers, be transmitted every 24 ms.
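Here is a minimal sketch in Python in the spirit of Figure 1 (it does not model the 8VSB signal itself): a known sequence repeats with a fixed period inside noisy data, and a single matched-filter pass produces one peak per repetition. All values are illustrative.

import numpy as np

# Periodically repeated known sequence detected by one matched-filter pass.
# A generic stand-in for the ATSC sync sequence; values are illustrative.
rng = np.random.default_rng(5)
L, period, reps = 832, 4000, 5
seq = np.sign(rng.standard_normal(L))        # known +/-1 sequence
x = 0.5*rng.standard_normal(period*reps)     # background noise/data
for k in range(reps):
    x[k*period:k*period + L] += seq          # one instance per period

y = np.abs(np.convolve(x, seq[::-1]))        # matched filter (real signal)
peaks = np.sort(np.argsort(y)[-reps:]) - (L - 1)
print(peaks)                                 # ~ [0, 4000, 8000, 12000, 16000]
print(np.diff(peaks))                        # spacing ~ period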

The existence of a small carrier-frequency offset (CFO) in the received data that is not accounted for in the matched-filter impulse response can severely degrade the matched-filter output. Figure 2 shows the PSD and matched-filter output for a received signal that has a CFO of about 0.0015f_s. Visually compare the PSDs in Figures 1 and 2–this is a small CFO. However, the peaks have vanished, indicating that the match between the normal matched-filter impulse response and the actual received signal is poor. In the radar-signal context, such CFOs are actually caused by the Doppler effect, and estimating them is the key to estimating the relative velocity between the radar and the target. But usually in communications, such CFOs are a nuisance.

Figure 2. PSD and matched-filter output for a simulated ATSC DTV signal. The signal standard specifies that an 832-symbol sequence, known to all DTV receivers, be transmitted every 24 ms. Here the received signal was imperfectly frequency shifted to complex baseband (zero frequency) and so possesses a small carrier-frequency offset, which destroys the matched-filter output peaks.

Next consider the case in which the receiver’s analog-to-digital converter delivers a sampling rate that is not as commanded. This is typically referred to as a sampling-rate offset (SRO). Figure 3 shows the PSD and matched-filter output for a signal with a non-zero SRO. To be clear, as in the nominal and CFO cases above, the matched-filter impulse response is for the ideal signal. Here the actual sampling rate is 101/100 times the desired rate.

Figure 3. PSD and matched-filter output for a simulated ATSC DTV signal. The signal standard specifies that an 832-symbol sequence, known to all DTV receivers, be transmitted every 24 ms. Here the sampling rate at the receiver is not as specified and so the matched-filter impulse response is not sampled at the same rate as the processed data, resulting in a substantial degradation in the matched-filter peaks.

Finally, consider the case where a frequency-selective propagation channel exists between the ATSC DTV transmitter and the receiver. This is common–see my post on ATSC DTV for more real-world examples. Figure 4 shows the filtered signal and the corresponding matched-filter output.

Figure 4. PSD and matched-filter output for a simulated ATSC DTV signal. The signal standard specifies that an 832-symbol sequence, known to all DTV receivers, be transmitted every 24 ms. Here the signal has passed through a frequency-selective propagation channel, which distorts the known component and degrades the matched-filter peaks.

A Sine Wave

This example will be somewhat tangential–I attempt to relate the matched-filtering concept to more general estimation ideas.

Consider the problem of estimating the frequency of a noisy sine wave with unknown complex amplitude A = A_0 e^{i\phi_0}, with A_0 = |A|, and unknown frequency f_0,

\displaystyle x(t) = Ae^{i 2 \pi f_0 t} + n(t). \hfill (67)

Assume that we have some samples of x(t), and that the magnitude, frequency, and phase parameters are real-valued. A classic paper from the 70s (The Literature [R201]) shows that an optimal estimator of f_0 can be had by maximizing the (square-root of the) periodogram,

\displaystyle \hat{f}_0 = \mbox{\rm argmax}_f \left| X_T(f)\right| \hfill (68)

\displaystyle = \mbox{\rm argmax}_f \left| \sum_{t=0}^{N-1} x(t)e^{-i2\pi f t} \right|, \hfill (69)

where again, I remind you, the spectral frequency f is continuous (a real number).

To implement (69) would require a search over all real numbers, or perhaps over all real numbers in some bounded set such as [f_1, f_2] with f_2 > f_1 real numbers. Not gonna happen!

But notice that the sum in (69) is essentially a matched filter for a sine wave with frequency f (proof left as an exercise for the reader). So the optimal estimator (according to [R201]) consists of trying all possible sine-wave matched filters, and picking as the frequency estimate the frequency that produces the largest matched-filter output. Satisfying!

Now the sum in (69) is just the discrete-time continuous-frequency Fourier transform. When restricted to an evenly spaced set of N frequencies, it becomes the discrete Fourier transform, and the DFT, in turn, is efficiently computed by the fast Fourier transform (FFT). This is fast, and indeed remains optimal if the sine-wave frequency happens to be equal to one of the FFT frequencies. But that is almost never the case.

The full estimator in [R201] starts with maximizing the DFT/FFT version of (69), which produces an initial coarse estimate constrained to be one of the FFT frequencies. Then a restricted-domain brute-force fine search is used, based on restricting the frequencies used in (69) to lie near the coarse estimate.
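Here is a sketch in Python of that two-stage estimator: coarse maximization over the FFT bins, then a dense local search of (69) near the winning bin. The signal parameters are illustrative.

import numpy as np

# Coarse-plus-fine periodogram maximization for sine-wave frequency
# estimation; parameter values are illustrative.
rng = np.random.default_rng(6)
N, f0, phi0 = 1024, 0.12345, 0.7             # true frequency in cycles/sample
n = np.arange(N)
noise = (rng.standard_normal(N) + 1j*rng.standard_normal(N)) / np.sqrt(2.0)
x = np.exp(1j*(2*np.pi*f0*n + phi0)) + 0.5*noise

# Coarse stage: the FFT bin with the largest magnitude
X = np.fft.fft(x)
f_coarse = np.fft.fftfreq(N)[np.argmax(np.abs(X))]

# Fine stage: dense search of |sum_t x(t) exp(-i 2 pi f t)| near the bin
f_grid = f_coarse + np.linspace(-1.0/N, 1.0/N, 201)
obj = [np.abs(np.sum(x*np.exp(-1j*2*np.pi*fc*n))) for fc in f_grid]
print(f_grid[np.argmax(obj)])                # close to f0 = 0.12345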

Linear FM Pulsed Radar

In pulsed radar systems, the radar transmitter sends out a sequence of high-power pulses with significant gaps between them. During the gaps, the radar receiver can listen for echoes.

A typical pulse is the linear FM pulse, commonly called a chirp. The radar signal itself can be modeled as a carrier-modulated pulse train,

\displaystyle r(t) = \sum_k p_r(t - kT_{pri} - t_0) e^{i2\pi f_c t + i\phi}, \hfill (70)

where each pulse is a chirp,

\displaystyle p_r(t) = A e^{i 2\pi B t^2} \mbox{\rm rect}\left( \frac{t-T_{pulse}/2}{T_{pulse}} \right). \hfill (71)

In such radar signals, the pulses are repeated every T_{pri} seconds, which is the pulse-repetition interval, and have widths T_{pulse} seconds, with T_{pulse} \leq T_{pri}.

Because the radar’s target is typically quite far away, and the radar signal has to travel to and from the target, the reflected radar echo at the receiver is usually quite weak. But here comes the power of the matched filter! The chirp has a very narrow autocorrelation function, and you can control the pulse width as well as the rate at which the chirp frequency varies (through B). Using the known transmitted chirp, we see in Figure 5 that the echoes are easily detected even when the radar pulses are buried in receiver noise.

Figure 5. Illustration of the power of matched filtering for radar (where the matched filtering is often referred to as pulse compression).
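Here is a minimal baseband sketch in Python of pulse compression for the chirp (71); the pulse length, chirp-rate parameter, echo delays, and echo strength are all illustrative choices.

import numpy as np

# Pulse compression: matched filtering of weak chirp echoes in noise.
# Baseband model (f_c = 0); all parameter values are illustrative.
rng = np.random.default_rng(7)
Tp = 256
B = 1.0/(4.0*Tp)                             # chirp rate: sweep ~half the band
t = np.arange(Tp)
p = np.exp(1j*2*np.pi*B*t**2)                # chirp pulse as in (71), A = 1

n = (rng.standard_normal(4096) + 1j*rng.standard_normal(4096)) / np.sqrt(2.0)
x = n.copy()
for delay in (500, 1500, 2900):              # echoes at three round-trip delays
    x[delay:delay + Tp] += 0.5*p             # per-sample SNR of -6 dB

y = np.abs(np.convolve(x, np.conj(p[::-1]))) # pulse compression
print(np.where(y > 0.7*y.max())[0] - (Tp - 1))  # indices near 500, 1500, 2900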

Significance of the Matched Filter in CSP

I’d like to emphasize the connection between matched filtering and correlation (see (61)). Once you realize that filtering is strongly connected to cross correlation, you begin to interpret correlation as filtering. And correlation is king in CSP–the autocorrelation, the cyclic autocorrelation, the power spectrum, and spectral correlation are all forms of correlation, and so we can think of them as filtering.

More specifically, consider CSP algorithms such as the single-cycle detector. Such detectors can be derived by posing optimization problems, such as ‘find the quadratic functional of a signal that produces the maximum-SNR sine wave with frequency \alpha.’ Recall one expression for the cycle detector is given by

\displaystyle Y_{OSD} \propto \left| \int \hat{S}_x^\alpha (f) \left[ S_s^\alpha(f) \right]^* \, df \right|,

where \displaystyle \hat{S}_x^\alpha(f) is an estimate of the spectral correlation function for x(t) = s(t) + n(t) + i(t), and \displaystyle S_s^\alpha(f) is the theoretical (known!) spectral correlation function for s(t). It is reasonable to model the estimate as

\displaystyle \hat{S}_x^\alpha (f) = S_s^\alpha(f) + w(f),

where w(f) is measurement noise. Then the integral inside the cycle detector is

\displaystyle \int \left[ S_s^\alpha(f) + w(f) \right] \left[ S_s^\alpha(f) \right]^* \, df,

which looks a lot like a frequency-domain matched filter for a known signal S_s^\alpha(f).
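Here is a toy numerical sketch in Python of that interpretation, using the additive-measurement-noise model above; the “theoretical” spectral correlation function here is an arbitrary stand-in shape, not a real SCF.

import numpy as np

# Cycle detector as a frequency-domain matched filter: correlate a noisy
# SCF "estimate" (modeled as theory plus noise, as in the text) with the
# known theoretical SCF. The SCF shape below is an arbitrary stand-in.
rng = np.random.default_rng(8)
f = np.linspace(-0.5, 0.5, 513)
df = f[1] - f[0]
S_s = np.exp(-(f/0.1)**2) * np.exp(1j*np.pi*f)   # stand-in theoretical SCF
w = 0.3*(rng.standard_normal(f.size) + 1j*rng.standard_normal(f.size))

Y_present = np.abs(np.sum((S_s + w)*np.conj(S_s))*df)  # signal present
Y_absent  = np.abs(np.sum(w*np.conj(S_s))*df)          # signal absent
print(Y_present, Y_absent)                             # present >> absent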

A maximum-likelihood analysis of a weak-signal two-sensor detector in My Papers [4] reveals a plethora of matched-filter-like structures all working together to perform detection.

Template-matching styles of image or signal classification can also be interpreted as correlations and matched filtering. For example, the spectral-correlation-matching and cyclic-cumulant-matching subalgorithms in My Papers [25,26,28] have at their heart a correlation between a theoretical probabilistic parameter and a noisy measurement thereof.

As usual, comments, questions, and my errors are welcome in the Comments section below.

Previous SPTK Post: The Characteristic Function Next SPTK Post: Wavelets


2 thoughts on “SPTK: The Matched Filter”

  1. Another outstanding article Chad. Thank you!

    Equation (26) line 3 is missing h(u) inside the double integral.

    In your ATSC example, you chose T_d strictly greater than T basically through periodization (every 24ms). What can we say about the performance of the matched filter when T_d = T versus T_d > T ?

    Regards,
    Mansoor

    1. I think the peak SNR represented by (50) does not change if T = T_d, which condition just means that the known signal is completely contained in the processed data, and that the processed data is exactly as long as the known signal.

      When T_d > T, as you note, you get the situation I’ve depicted in the ATSC examples: lots of low-magnitude matched-filter outputs and peaks wherever the known signal appears.

      For T_d < T, only a portion of the known signal can be contained in the processed data of length T_d. In this case, I think you can consider that the matched filter for the full known signal is equal to the matched filter for the partial known signal concatenated with an irrelevant sequence, which I believe renders it suboptimal, strictly speaking. As an example, if the ATSC DTV signal is delayed so that the first instance of the known signal component is truncated, the output of the matched filter (matched to the full signal component) is decreased in magnitude in proportion to the fraction of the known signal that does not appear. Here only half of the known component appears at the very beginning of the data record.
