SPTK: Digital Filters

A look at general linear time-invariant filtering in the discrete-time domain.

Previous SPTK Post: The Z Transform   Next SPTK Post: IQ Data

Linear shift-invariant systems are often called digital filters when they are designed objects, as opposed to found objects, which are really models of systems occurring in the natural world. A basic goal of digital filtering is to perform the same kind of function as an analog filter does, but after sampling rather than before. In some cases, the digitally filtered signal is then converted back to an analog signal. These ideas are illustrated in Figure 1.

Figure 1. A typical role for a linear shift-invariant system, or digital filter, in signal processing.

What is a good starting point for a general look at digital filters? Analog filters are typically characterized by differential equations. As before, we can start with these systems and discretize them to find general discrete-time equations relating linear shift-invariant system inputs to outputs. We will see something intuitively pleasing: the output at any sampling time is a linear combination of the current input and previous values of the input and output.

Jump straight to the Significance of Digital Filters in CSP

General Digital Filters

A general linear Nth-order continuous-time system with input x(t) and output y(t) is described by the differential equation (see also the SPTK post on the Z transform)

\displaystyle \sum_{j=0}^N B_j \frac{d^j}{dt^j} y(t) + \sum_{l=0}^N A_l \frac{d^l}{dt^l} x(t) = 0, \hfill (1)

where \frac{d^l}{dt^l} x(t) = x^{(l)}(t) denotes the lth derivative of x(t) with respect to time t, and the zeroth derivatives are equal to the signals themselves,

\displaystyle \frac{d^0}{dt^0} y(t) = y(t) \ \ \ \ \frac{d^0}{dt^0} x(t) = x(t). \hfill (2)

We know how to discretize \frac{d}{dt} x(t) if T_s is sufficiently small because we recall the definition of the derivative. So we have the approximation

\displaystyle \frac{d}{dt} x(t) \approx \frac{x(kT_s) - x((k-1)T_s)}{T_s}, \ \ \ \ t = kT_s, \hfill (3)

but how can we discretize the higher-order derivatives? Let’s take a close look at the second derivative to get started. In continuous time, the derivative is closely approximated by

\displaystyle \frac{d}{dt} x(t) \approx \frac{x(t) - x(t-\Delta)}{\Delta} \hfill (4)

if \Delta is sufficiently small. The second derivative is then approximated by

\displaystyle \frac{d^2}{dt^2} x(t) = \frac{d}{dt} \left[\frac{d}{dt} x(t) \right] \approx \frac{d}{dt} \left[\frac{x(t)}{\Delta}\right] - \frac{d}{dt} \left[\frac{x(t-\Delta)}{\Delta}\right] \hfill (5)

\displaystyle = \left[ \frac{x(t) - x(t-\Delta)}{\Delta^2}\right] - \left[\frac{x(t-\Delta) - x(t-2\Delta)}{\Delta^2}\right] \hfill (6)

\displaystyle = \frac{1}{\Delta^2} \left[ x(t) - 2x(t-\Delta) + x(t-2\Delta)\right]. \hfill (7)

The sampled version of x^{\prime\prime}(t) contains samples of x(t) at times kT_s, (k-1)T_s, and (k-2)T_s. In general, the Nth derivative of x(t) will contain some combination of the values x(kT_s), x((k-1)T_s), \ldots, x((k-N)T_s), and the same is true for the sampled derivatives involving y(t). Therefore we can write the general Nth-order difference equation that corresponds to the general Nth-order differential equation as

\displaystyle \sum_{j=0}^N b_j y((k-j)T_s) + \sum_{l=0}^N a_l x((k-l)T_s) = 0. \hfill (8)

It is customary to extract y(kT_s) from the sum over j and normalize the equation by b_0 to obtain the form

\displaystyle y(kT_s) = \sum_{l=0}^N a_lx((k-l)T_s) - \sum_{j=1}^N b_j y((k-j)T_s), \hfill (9)

where the coefficients in (9) are not the same as those in (8) unless b_0 happens to be unity. This form, only slightly modified, is what MATLAB implements for you with its filter.m (see Figure 2).

Figure 2. The top of the help message for MATLAB’s filter.m. This form is slightly more general than (9) because the maximum delays for the input signal x(k) and the output signal y(k) are denoted by distinct variables. However, this is easily accommodated by (9) by selecting N=max(na, nb) and then setting the appropriate number of either b(n) or a(n) equal to zero.
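To make (9) concrete, here is a minimal Python sketch of the normalized difference equation (the function name and test values are mine, not part of the post). One caution for MATLAB users: filter.m attaches its b coefficients to the input and its a coefficients to the output, which is the reverse of the notation used here.

```python
def difference_filter(a, b, x):
    """Direct implementation of the normalized difference equation (9):
    y[k] = sum_l a[l] x[k-l] - sum_{j>=1} b[j] y[k-j], with b[0] = 1."""
    y = []
    for k in range(len(x)):
        acc = sum(a[l] * x[k - l] for l in range(len(a)) if k - l >= 0)
        acc -= sum(b[j] * y[k - j] for j in range(1, len(b)) if k - j >= 0)
        y.append(acc)
    return y

# A one-pole recursive example: y[k] = x[k] + 0.5 y[k-1].
# The impulse response decays geometrically: 1, 0.5, 0.25, ...
impulse = [1.0, 0.0, 0.0, 0.0, 0.0]
print(difference_filter([1.0], [1.0, -0.5], impulse))
```

Setting the feedback coefficients b[1], ..., b[N] to zero reduces this loop to the FIR special case (10) below.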

For the special case for which b_j = 0 for j=1, \ldots, N we have

\displaystyle y(kT_s) = \sum_{l=0}^N a_l x((k-l)T_s), \hfill (10)

and the output at time kT_s is seen to be a weighted sum of the current input value and the previous N input values. This kind of digital filter is called non-recursive or finite-impulse response (FIR). The latter appellation is appropriate because the filter’s impulse response has a finite extent in time,

\displaystyle h(kT_s) = y(kT_s) \mbox{\rm \ if \ } x(kT_s) = \delta(kT_s) \hfill

\displaystyle = \sum_{l=0}^N a_l \delta(kT_s - lT_s) \hfill

\displaystyle = a_k \sum_{l=0}^N \delta(kT_s - lT_s). \hfill (11)

Since \delta(kT_s) = 0 unless k=0, in which case it is equal to one, we have

\displaystyle h(kT_s) = \left\{ \begin{array}{ll} a_k, & 0 \leq k \leq N \\ 0, & \mbox{\rm otherwise} \end{array} \right. \hfill (12)

A concrete example of a FIR filter is the moving-average filter.

We can diagram the FIR filter in several ways. In the first method, the successive delays are simply denoted by blocks containing the instruction “Delay T_s”, as in Figure 3.

Figure 3. One way to draw a block diagram for a generic finite-impulse-response (FIR) filter.

Another common way to diagram digital filters is to use the symbol z^{-1} to denote a delay by a single sample. This arises from the Z-transform delay theorem (The Z Transform equation (61)), summarized here as

\displaystyle x(kT_s - T_s) \Longleftrightarrow X(z) z^{-1}. \hfill (13)

The resulting diagrammed FIR filter is shown in Figure 4.

Figure 4. An alternate way of diagramming a finite-impulse-response digital filter. This method takes advantage of the Z-transform delay theorem which tells us that the transform of a signal delayed by one sample is the transform of the undelayed signal multiplied by z^{-1}.

The visualization of the FIR filter in Figure 4 leads to an interpretation of the filter action in terms of tapping off the signal after each delay. The delays all appear in a straight line in the diagram, leading to an alternative name for the structure: a tapped-delay line.

When at least one of the feedback coefficients b_j, j \geq 1, is nonzero, the filter is called recursive or infinite-impulse response (IIR). It is recursive because the current output depends on previous outputs as well as on current and previous inputs, unlike the non-recursive FIR filter, whose output depends only on the current and previous inputs. The general IIR filter is diagrammed using Z-transform delay elements in Figure 5.

Figure 5. A general infinite-impulse-response (recursive) filter diagram. Unlike the non-recursive finite-impulse-response filter, which has output that depends only on past inputs, the output of the recursive filter depends on both past inputs and past outputs.

Let’s take a look at an IIR impulse response for a simple special case in which a_0 \neq 0, a_l = 0 for l > 0, b_1 \neq 0, and b_j = 0 for j >1:

\displaystyle y(kT_s) = a_0 x(kT_s) - b_1 y(kT_s - T_s). \hfill (14)

The output y(kT_s) is equal to the impulse response function for the filter, h(kT_s), when the input is an impulse function, x(kT_s) = \delta(kT_s). Let’s step through a few values of k for this situation.

\mathbf{k=0}

\displaystyle y(0) = a_0 x(0) - b_1 y(-T_s) = h(0) = a_0 \hfill (15)

\mathbf{k=1}

\displaystyle y(T_s) = a_0 x(T_s) - b_1 y(0) = h(T_s) = -b_1 a_0 \hfill (16)

\mathbf{k=2}

\displaystyle y(2T_s) = a_0x(2T_s) - b_1y(T_s) = h(2T_s) = -b_1(-b_1a_0) = b_1^2 a_0. \hfill (17)

For general k then, we see the pattern:

\displaystyle h(kT_s) = \left[ (-1)^k b_1^k a_0 \right] u(k). \hfill (18)

Since we assumed that b_1 \neq 0 and a_0 \neq 0, h(kT_s) will never be zero for any k \ge 0. Therefore, the length (support) of h(kT_s) is indeed infinite.
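The pattern (18) is easy to check numerically. Here is a short Python sketch (function and variable names are mine) that runs the recursion (14) with an impulse input and compares the result against the closed form.

```python
def iir_impulse_response(a0, b1, num_samples):
    """Impulse response of y[k] = a0 x[k] - b1 y[k-1] (equation (14)),
    computed by direct recursion with x[k] = delta[k]."""
    h, prev = [], 0.0
    for k in range(num_samples):
        x_k = 1.0 if k == 0 else 0.0
        y_k = a0 * x_k - b1 * prev
        h.append(y_k)
        prev = y_k
    return h

a0, b1 = 2.0, 0.5
h = iir_impulse_response(a0, b1, 6)
closed_form = [((-1) ** k) * (b1 ** k) * a0 for k in range(6)]  # equation (18)
print(h)
print(closed_form)
```

The recursion and the closed form agree sample for sample, and neither ever reaches exactly zero, illustrating the infinite support of h(kT_s).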

We now have our basic digital-filter computational structures, FIR and IIR, and a strong connection between digital filters and analog systems that are governed by differential equations (for example, lumped-parameter electronic circuits and many mechanical systems). In this setting, the filter analysis problem is the determination of the transfer function and impulse-response function for a particular given filter. The filter synthesis problem is to choose the values of N, \{a_l\}, and \{b_j\} to achieve a structure that produces a given transfer function or impulse response.

Frequency Response for Digital Filters

Our discretized analog system resulted in the input-output relation given by

\displaystyle y(kT_s) = \sum_{l=0}^N a_l x((k-l)T_s) - \sum_{j=1}^N b_j y((k-j)T_s). \hfill (19)

Does this represent a linear shift-invariant system? To check for linearity, we must compare the outputs for (1) inputs x_1(kT_s) and x_2(kT_s) with (2) the output for Ax_1(kT_s) + Bx_2(kT_s). For time-invariance, we have to compare the outputs for (1) input x(kT_s) with (2) input x((k-D)T_s). I encourage you to try this proof.

Another way to look at it is to simply compute the Z transform of (19) and see if it can be rewritten as Y(z) = H(z) X(z), in which case we know that input x(kT_s) is related to output y(kT_s) by convolution in general.

Let’s proceed by assuming that (19) represents a linear shift-invariant system. Then we can confidently use our normal discrete-time linear-system tools and characterizations, such as impulse response, transfer function, and transforms.

The Z transform (one-sided here) is

\displaystyle X(z) = \sum_{k=0}^\infty x(kT_s) z^{-k}. \hfill (20)

Applying the transform to (19):

\displaystyle Y(z) = \sum_{k=0}^\infty \left[ \sum_{l=0}^N a_l x((k-l)T_s) \right] z^{-k} - \sum_{k=0}^\infty \left[ \sum_{j=1}^N b_j y((k-j)T_s) \right] z^{-k} \hfill (21)

\displaystyle  = \sum_{k=0}^\infty  \sum_{l=0}^N a_l \left[ x((k-l)T_s)  z^{-k} \right] - \sum_{k=0}^\infty  \sum_{j=1}^N b_j \left[ y((k-j)T_s)  z^{-k} \right]. \hfill (22)

Since x((k-D)T_s) \Longleftrightarrow X(z)z^{-D}, this transform simplifies to

\displaystyle Y(z) = \sum_{l=0}^N a_l X(z) z^{-l} - \sum_{j=1}^N b_j Y(z) z^{-j}. \hfill (23)

Rearranging this equation to find Y(z)/X(z) leads to

\displaystyle Y(z)/X(z) = H(z) = \frac{\displaystyle \sum_{l=0}^N a_l z^{-l}}{\displaystyle 1 + \sum_{j=1}^N b_j z^{-j}}. \hfill (24)

Therefore x(kT_s) and y(kT_s) are indeed related by a convolution since Y(z) = H(z) X(z). H(z) in (24) is called the discrete-time transfer function or the digital-filter transfer function. Its inverse Z transform is the discrete-time impulse response for the filter.

The digital-filter frequency response is simply the transfer function evaluated on the unit circle in the z plane, which means we substitute z = e^{i 2 \pi f T_s} into (24),

\displaystyle H(f) = \left. H(z)\right|_{z=e^{i 2 \pi f T_s}} = \left[ \frac{\displaystyle \sum_{l=0}^N a_l e^{-i 2 \pi f l T_s}}{\displaystyle 1 + \sum_{j=1}^N b_j e^{-i 2 \pi f j T_s}} \right]. \hfill (25)

The frequency response for a digital filter is periodic in frequency f. This is simply due to the periodicity of the complex sine wave,

\displaystyle e^{-i 2 \pi (f + f_p)l T_s} = e^{-i2 \pi f l T_s} e^{-i 2 \pi f_p l T_s} = e^{-i 2\pi f l T_s},

if f_p l T_s = q, where q is any integer. This periodicity requirement is met with f_p = 1/T_s. So, the frequency response of a digital filter is periodic with period equal to the sampling rate.
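This periodicity can be verified numerically. The following Python sketch (function and coefficient values are mine, chosen arbitrarily for illustration) evaluates (25) directly and confirms that H(f + f_s) = H(f).

```python
import cmath

def freq_response(a, b, f, Ts):
    """Evaluate H(f) from (25): a are the feed-forward coefficients,
    b the feedback coefficients with b[0] = 1 implied."""
    num = sum(a[l] * cmath.exp(-2j * cmath.pi * f * l * Ts) for l in range(len(a)))
    den = 1 + sum(b[j] * cmath.exp(-2j * cmath.pi * f * j * Ts)
                  for j in range(1, len(b)))
    return num / den

Ts = 1e-3                # 1 kHz sampling rate
a = [0.5, 0.3, 0.2]      # arbitrary illustrative coefficients
b = [1.0, -0.4]
fs = 1 / Ts
f = 137.0                # arbitrary frequency in Hz
# The response repeats with period fs = 1/Ts:
print(abs(freq_response(a, b, f, Ts) - freq_response(a, b, f + fs, Ts)))
```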

Examples

The Moving-Average Filter

We’ve encountered the moving-average filter before in the SPTK series of posts. Here we view it in the context of digital filters and also extend it by modulating the impulse response, which has profound implications for the frequency response.

The discrete-time moving-average filter is our general digital-filter structure with all of the feedback (recursive) coefficients b_j = 0 and all of the non-recursive coefficients a_l equal to some constant M. The input-output relationship is then

\displaystyle y(kT_s) = \sum_{l=0}^N a_l x((k-l)T_s) \hfill (26)

\displaystyle = \sum_{l=0}^N M x((k-l)T_s). \hfill (27)

For example, if M = 1/(N+1), then the filter can be interpreted as forming the average of the current input value together with the past N input values. The Z transform of y(kT_s) follows easily from the Z-transform delay theorem and the fact that the Z transform is linear. The frequency response is that Z transform evaluated at z = e^{i 2 \pi f T_s}. You can verify that the result is

\displaystyle H(f) = \sum_{l=0}^N M e^{-i 2 \pi f l T_s} = M\sum_{l=0}^N e^{-i 2 \pi f l T_s}. \hfill (28)

This formula is, once again, an example of a geometric series. The generic series is

\displaystyle S_N = \sum_{j=0}^{N} a^j, \hfill (29)

for which a simple formula for S_N can be had by looking at the difference S_N - aS_N,

\displaystyle S_N = \frac{1-a^{N+1}}{1-a}, \hfill (30)

which is valid provided a \neq 1. Using this formula with a = e^{-i 2 \pi f T_s} leads to an expression for the frequency response

\displaystyle H(f) = M \left[ \frac{1 - e^{-i2 \pi f T_s(N+1)}}{1 - e^{-i 2 \pi f T_s}} \right]. \hfill (31)

This can be simplified by some algebra to reveal a sinc-like frequency response,

\displaystyle H(f) = M \left[ \frac{e^{i\pi fT_s(N+1)} - e^{-i\pi fT_s(N+1)}}{e^{i\pi fT_s} - e^{-i\pi f T_s}} \right] \frac{e^{-i\pi fT_s(N+1)}}{e^{-i\pi fT_s}} \hfill (32)

\displaystyle = M e^{-i\pi fT_sN} \left[ \frac{\sin(\pi f T_s(N+1))}{\sin(\pi f T_s)} \right]. \hfill (33)

Why do I say “sinc-like” here? Consider large N and small T_s, then for smallish f, the denominator \sin(\pi f T_s) \approx \pi f T_s, and we get our sinc-like behavior. But let’s plot it in Figure 6 for several values of T_s and N to get a more accurate view of this frequency-response function.

Figure 6. The frequency response function (magnitude) for a moving-average digital filter (see (33)). Note that the filter transfer-function shapes are the same for each value of N, but the physical bandwidth of the filter changes with varying T_s.

Even without the ability to nearly effortlessly plot (33), we can get a good idea of its behavior with a little more math. For f=0, we see that both the numerator and denominator of H(f) in (33) are equal to zero, so we have the limiting form 0/0 as f\rightarrow 0, which is undefined. However, we can use L’Hopital’s rule to evaluate the function at f=0. This requires that we differentiate the numerator and denominator and form the new quotient from those results:

\displaystyle \lim_{f\rightarrow 0} H(f) = \lim_{f\rightarrow 0} M\frac{(N+1) \cos(\pi f T_s(N+1))}{\cos(\pi f T_s)}, \hfill (34)

from which we easily see that H(f) \rightarrow M(N+1) as f \rightarrow 0. For our typical moving-average choice of M=1/(N+1), this means that H(0) = 1. This is pleasing because it means that if your input is a constant (a sine wave with frequency equal to zero), then the filter simply passes the signal unchanged. In the case of a constant, the average is always equal to the constant.

Other values of the frequency response (33) can be had using pencil and paper if a numerical value is chosen for N, such as N=8, and then frequencies are chosen to be “friendly” to easy \sin function evaluation, such as f_k = 1/(2^k T_s).

Overall, we see that the moving-average filter is of the lowpass variety in that the frequency response is relatively large for small frequencies |f| \ll 1/(2T_s) and relatively small for large frequencies |f| \approx 1/(2T_s).
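As a check on the algebra, the closed form (33) can be compared against the direct sum (28) in Python (function names are mine; the f = 0 branch implements the L'Hopital limit H(0) = M(N+1) derived above).

```python
import cmath

def ma_freq_response_direct(N, Ts, f):
    """Direct sum (28): H(f) = M sum_{l=0}^{N} exp(-i 2 pi f l Ts), M = 1/(N+1)."""
    M = 1.0 / (N + 1)
    return M * sum(cmath.exp(-2j * cmath.pi * f * l * Ts) for l in range(N + 1))

def ma_freq_response_closed(N, Ts, f):
    """Closed form (33): M e^{-i pi f Ts N} sin(pi f Ts (N+1)) / sin(pi f Ts)."""
    M = 1.0 / (N + 1)
    x = cmath.pi * f * Ts
    if abs(cmath.sin(x)) < 1e-15:          # L'Hopital limit at f = 0
        return complex(M * (N + 1))        # equals 1 for M = 1/(N+1)
    return M * cmath.exp(-1j * x * N) * cmath.sin(x * (N + 1)) / cmath.sin(x)

# Normalized frequencies (Ts = 1), N = 8:
for f in (0.0, 0.01, 0.1, 0.37):
    print(f, abs(ma_freq_response_closed(8, 1.0, f)))
```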

A Modulated Moving-Average Filter

The impulse-response function for the moving-average filter, a FIR filter, is particularly easy to compute by simply replacing the input x(kT_s) with an impulse \delta(kT_s),

\displaystyle h_{MA}(kT_s) = \sum_{l=0}^N a_l \delta((k-l)T_s) \hfill (35)

\displaystyle = M \sum_{l=0}^N \delta((k-l)T_s). \hfill (36)

Let’s modulate this filter by multiplying the normal moving-average impulse-response function (36) by a sequence of alternating +1 and -1 values as in

\displaystyle h(kT_s) = M\sum_{l=0}^N (-1)^l \delta((k-l)T_s). \hfill (37)

What is the frequency-response (transfer) function for this modulated filter? As usual, we need to evaluate the Z transform of h(kT_s) on the unit circle,

\displaystyle H(f) = \left. \underbrace{\sum_{k=0}^N h(kT_s) z^{-k}}_{H(z)} \right|_{z=e^{i 2 \pi f T_s}} \hfill (38)

Preview: Modulating the impulse response by a sequence of alternating \pm 1 values shifts the filter’s transfer function by half the sampling rate. Let’s see exactly how that comes about.

\displaystyle H(f) = \sum_{k=0}^\infty h(kT_s) e^{-i 2 \pi f k T_s} \hfill (39)

\displaystyle = \sum_{k=0}^N M (-1)^k e^{-i 2 \pi f k T_s}. \hfill (40)

Let’s reexpress the factor \displaystyle (-1)^k with an eye toward getting the sum into our usual geometric-series format, which is easily simplified:

\displaystyle (-1)^k = (e^{i \pi})^k = e^{i \pi k} = e^{i 2 \pi (1/2) k} = e^{i 2 \pi (\frac{1}{2T_s}) k T_s}. \hfill (41)

Continuing on with the frequency response using this new expression for (-1)^k,

\displaystyle H(f) = M \sum_{k=0}^N e^{i 2 \pi (\frac{1}{2T_s})kT_s} e^{-i 2 \pi f k T_s} \hfill (42)

\displaystyle = M \sum_{k=0}^N e^{-i2\pi (f - \frac{1}{2T_s}) k T_s} = M \sum_{k=0}^N e^{-i 2 \pi g k T_s}, \hfill (43)

where g = f - \frac{1}{2T_s}. Now we have the geometric-series expression, and we can simplify,

\displaystyle H(f) = M \left[ \frac{1 - e^{-i 2 \pi g T_s(N+1)}}{1-e^{-i2\pi g T_s}} \right] \hfill (44)

\displaystyle = M e^{-i 2\pi g T_s N} \left[ \frac{\sin \left(\pi gT_s(N+1)\right)}{\sin\left(\pi g T_s\right)} \right]. \hfill (45)

The magnitude of the frequency response, |H(f)|, is given by (substituting back in g = f - 1/(2T_s)),

\displaystyle |H(f)| = M \left| \frac{\sin(\pi [f - 1/(2T_s)]T_s (N+1))}{\sin(\pi[f-1/(2T_s)] T_s)} \right|, \hfill (46)

or \displaystyle |H(f)| = |H_{MA}(f)|, where H_{MA}(f) is the moving-average filter frequency response from (33). So the \pm 1 modulated impulse response corresponds to a frequency-shifted version of the moving-average frequency response. This is illustrated by the various frequency-response plots for this first modulated moving-average filter in Figure 7.

Figure 7. The frequency response function (magnitude) for a \pm-modulated moving-average digital filter (see (46)). Note that the filter transfer-function shapes are the same for each value of N, but the physical bandwidth of the filter changes with varying T_s. The effect of modulating the normal moving-average impulse response by an alternating sequence of +1 and -1 is to convert the lowpass characteristic of the filtering into a highpass characteristic.
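The frequency-shift relationship can also be verified numerically. This Python sketch (function names mine) computes both magnitudes from their direct sums, (28) and (40), and confirms |H(f)| = |H_{MA}(f - 1/(2T_s))|.

```python
import cmath

def ma_mag(N, Ts, f):
    """|H_MA(f)| from the direct sum (28), with M = 1/(N+1)."""
    M = 1.0 / (N + 1)
    return abs(M * sum(cmath.exp(-2j * cmath.pi * f * l * Ts) for l in range(N + 1)))

def mod_ma_mag(N, Ts, f):
    """|H(f)| for the (-1)^l-modulated impulse response (37), via the sum (40)."""
    M = 1.0 / (N + 1)
    return abs(M * sum((-1) ** k * cmath.exp(-2j * cmath.pi * f * k * Ts)
                       for k in range(N + 1)))

N, Ts = 8, 1.0
half_fs = 1 / (2 * Ts)
# Modulating by (-1)^k shifts the response by half the sampling rate:
for f in (0.05, 0.21, 0.4):
    print(f, mod_ma_mag(N, Ts, f), ma_mag(N, Ts, f - half_fs))
```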

A Second Modulated Moving-Average Filter

Let’s generalize this idea of modulating the impulse-response function by considering a final filter where the moving-average impulse response is multiplied by a complex sine wave with frequency f_0

\displaystyle h(kT_s) = e^{i 2 \pi f_0 k T_s} h_{MA}(kT_s), \hfill (47)

where |f_0| \leq \frac{1}{2T_s}. (Note that the previous example simply used f_0 = \frac{1}{2T_s}.) Once again, let’s find and plot the frequency response for this new filter.

The math is nearly identical to that we encountered for the moving-average and modulated-moving-average filters. The result is

\displaystyle H(f) = M e^{-i\pi(f-f_0)NT_s} \left[ \frac{\sin \left(\pi (f-f_0)(N+1)T_s\right)}{\sin \left( \pi(f-f_0)T_s\right)} \right]. \hfill (48)

This last filter has a complex-valued impulse response, unlike the previous two filters. This means that the frequency response will not necessarily be symmetric around f=0. (Think of that first modulated filter. Multiplying by alternating \pm 1 values is equivalent to both multiplying by a sine wave with frequency \frac{1}{2T_s} and multiplying by a sine wave with frequency \frac{-1}{2T_s}.)

Plots of the frequency-response magnitude for this final version of the moving-average filter are shown in Figure 8. Notice that the filter is of the bandpass type. So we have shown that one can create lowpass, bandpass, and highpass filters using just the moving-average impulse response and simple multiplication by a sine wave. Now, how would you create a notch or bandstop filter here?

Figure 8. The frequency response function (magnitude) for a sine-wave-modulated moving-average digital filter (see (48)) with sine-wave frequency f_0 = 0.125/T_s = f_s/8. Note that the filter transfer-function shapes are the same for each value of N, but the physical bandwidth of the filter changes with varying T_s. The effect of modulating the normal moving-average impulse response by a complex sine wave e^{i 2 \pi f_0 k T_s} is to convert the lowpass characteristic of the filtering into a bandpass characteristic.

Comments on the Bandwidth of Digital Filters

Normalized Bandwidth, Normalized Frequencies

In this situation, the sampling increment is taken to be T_s = 1, and so the sampling frequency is f_s = 1/T_s = 1. The first zero of the moving-average frequency response (H(f) in (33)) is then at

\displaystyle \frac{1}{(N+1)T_s} = \frac{1}{N+1}, 

and so a reasonable measure of the filter’s bandwidth is 2/(N+1).

Absolute Bandwidth, Absolute Frequencies

When we want to consider absolute, or physical, frequencies, we retain T_s in all expressions for times and frequencies. So for our simple moving-average filter, the approximate bandwidth is

\displaystyle \frac{2}{(N+1)T_s} = \left( \frac{2}{N+1} \right) f_s.

By choosing f_s very large, say 1 MHz, and setting N = 7, we find that the filter bandwidth is

\displaystyle \frac{2}{N+1} \times (1 \mbox{\rm \ MHz}) = 250 \mbox{\rm\ kHz}.

The response of the digital moving-average filter to an input sine wave with frequency f_0 = 125 kHz is

\displaystyle H(f = f_0 = 125 \mbox{\rm \ kHz}) = H\left(\frac{f_s}{8}\right) = H\left(\frac{1}{8T_s}\right) = 0,

and the response to a sine-wave input with frequency f_0 = 62.5 kHz is

\displaystyle H(f = f_0 = 62.5 \mbox{\rm \ kHz}) = H\left(\frac{1}{16T_s}\right) = \frac{1}{8\sin(\pi/16)}.
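These two evaluations can be reproduced with a few lines of Python (variable names mine), using the magnitude of (33) with N = 7 and f_s = 1 MHz.

```python
import math

# Absolute-frequency example: N = 7, M = 1/(N+1) = 1/8, fs = 1 MHz.
N = 7
M = 1.0 / (N + 1)
fs = 1e6
Ts = 1 / fs

def ma_mag(f):
    """|H(f)| from (33) for the moving-average filter (f in Hz, f not a multiple of fs)."""
    x = math.pi * f * Ts
    return abs(M * math.sin(x * (N + 1)) / math.sin(x))

print(ma_mag(125e3))    # first zero of the response, at fs/8
print(ma_mag(62.5e3))   # equals 1/(8 sin(pi/16))
```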

Normalized Digital-Filter Descriptions

For normalized descriptions of digital filters, where T_s = 1 is implied by the employed notation, such as

\displaystyle y(k) = \sum_{l=0}^N a_l x(k-l),

we can perform all our calculations and design and convert to absolute frequencies whenever we wish by multiplying the normalized frequencies by the sampling rate f_s and multiplying the normalized times by the sampling interval T_s.

A Continuous-Time Filtering Problem Solved with a Digital Filter

We’ve seen several examples of the frequency response for digital filters (practical discrete-time filters). We’ve seen a lowpass filter (the moving-average filter (33)), a highpass filter (the modulated moving-average filter (46)), and a bandpass filter (the sine-wave-shifted moving-average filter (48)). The frequency responses are believable in that they are approximate versions of their continuous-time filter counterparts (but are periodic).

Now let’s try to show the big picture here by starting with a continuous-time filtering problem and determining the digital filter that can be used to solve that problem. That is, we combine elements of continuous- and discrete-time signal processing in one example.

Suppose we want to build a continuous-time system that performs filtering that is lowpass in nature and has a 3-dB bandwidth of 10 kHz. One starting point for this system is to reach back to our practical filters and look at a first-order system described by the simple first-order differential equation

\displaystyle \tau_0 y^\prime(t) + y(t) = x(t). \hfill (49)

The 3-dB cutoff frequency for this system is 1/(2\pi \tau_0) Hz, so the 10-kHz specification is met by setting \tau_0 = 1/(2\pi \times 10^4). This system has transfer function

\displaystyle H(f) = \frac{1}{i 2 \pi f \tau_0 + 1}, \hfill (50)

and impulse-response function

\displaystyle h(t) = {\cal{F}}^{-1} \left[ H(f) \right] \hfill (51)

\displaystyle = \frac{1}{\tau_0} e^{-t/\tau_0} u(t). \hfill (52)

For the discrete-time system component of our overall system, we want the impulse response to equal samples of the continuous-time impulse response. Let’s call the discrete-time impulse response g(kT_s). The situation is illustrated in Figure 9.

Figure 9. Illustration of the full-system example that combines continuous-time and discrete-time (digital) filter concepts. Note that in this example, we are using the notation x^\prime to mean something different than a derivative, which is how we used it near the beginning of the post.

In this setup, x(t) and y(t) are related by h(t), and x^\prime(kT_s) and y^\prime(kT_s) are related by g(kT_s). We want

\displaystyle g(kT_s) = h(kT_s), \hfill (53)

which implies

\displaystyle g(kT_s) = \frac{1}{\tau_0} e^{-kT_s/\tau_0}u(kT_s). \hfill (54)

What is the frequency response for this digital filter? To find it, we should evaluate the Z transform of g(kT_s) on the unit circle, as usual:

\displaystyle G(z) = \sum_{k=0}^\infty g(kT_s) z^{-k} \hfill (55)

\displaystyle = \frac{1}{\tau_0} \sum_{k=0}^\infty e^{-kT_s/\tau_0} z^{-k} \hfill (56)

\displaystyle = \frac{1}{\tau_0} \sum_{k=0}^\infty a^k \ \ \ (a = e^{-T_s/\tau_0} z^{-1}) \hfill (57)

\displaystyle = \frac{1}{\tau_0} \left[ \frac{1}{1-a} \right] \ \ \ (|a| < 1) \hfill (58)

\displaystyle = \frac{1}{\tau_0} \left[ \frac{1}{1 - e^{-T_s/\tau_0}z^{-1}} \right] \ \ \ (|z| > |e^{-T_s/\tau_0}|). \hfill (59)

This region of convergence for the transform, |z| > |e^{-T_s/\tau_0}|, includes the unit circle provided that \tau_0 > 0 and T_s > 0, which is true for our example by definition. That is, we are free to constrain \tau_0 to be positive, and the sampling increment is positive by definition.

We arrive at

\displaystyle G(f) = \left. G(z) \right|_{z=e^{i 2 \pi f T_s}} \hfill (60)

\displaystyle = \frac{1}{\tau_0} \left[ \frac{1}{1-e^{-T_s/\tau_0} e^{-i2\pi f T_s}} \right]. \hfill (61)

Let’s diagram the system and plot the frequency response G(f). From the definition of the discrete-time transfer function, we have

\displaystyle G(z) = \frac{Y^\prime(z)}{X^\prime(z)} = \frac{1}{\tau_0} \left[ \frac{1}{1-e^{-T_s/\tau_0}z^{-1}} \right] \hfill (62)

\displaystyle \Longrightarrow Y^\prime(z) = X^\prime(z) \left[ \frac{1/\tau_0}{1-e^{-T_s/\tau_0}z^{-1}} \right], \hfill (63)

or

\displaystyle \underbrace{Y^\prime(z)}_{\rm Current\ Output} - \underbrace{e^{-T_s/\tau_0}z^{-1}Y^\prime(z)}_{\rm Scaled\ and\ Delayed\ Output} = \underbrace{\frac{1}{\tau_0} X^\prime (z)}_{\rm Scaled\ Input}. \hfill (64)

The block diagram for this system is shown in Figure 10. Note that the gains in the triangular multiplier blocks depend on the desired continuous-time time constant \tau_0 and the sampling increment T_s.

Figure 10. Block diagram for the discrete-time part of the system shown in Figure 9.

Notice that the digital-filter transfer function is periodic with period f_s = 1/T_s (as always). The squared magnitude of the transfer function is

\displaystyle |G(f)|^2 = \frac{1/\tau_0^2}{1 + e^{-2T_s/\tau_0} - 2e^{-T_s/\tau_0}\cos(2\pi f T_s)}. \hfill (65)

The squared magnitude is plotted in decibels in Figure 11 (the figure plots \displaystyle 10\log(|G(f)|^2)).

Figure 11. The magnitude of the digital-filter frequency response for the system shown in Figure 10.
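As a numerical sanity check (Python; the function and parameter names are mine), the squared magnitude (65) can be compared against a direct evaluation of G(z) on the unit circle from (61), here for a 10-kHz cutoff sampled at 1 MHz.

```python
import cmath
import math

tau0 = 1.0 / (2 * math.pi * 10e3)   # time constant for a 10-kHz 3-dB cutoff
Ts = 1e-6                           # 1 MHz sampling rate
r = math.exp(-Ts / tau0)            # pole radius e^{-Ts/tau0}, inside the unit circle

def G(f):
    """Frequency response (61): G(z) evaluated at z = exp(i 2 pi f Ts)."""
    return (1 / tau0) / (1 - r * cmath.exp(-2j * cmath.pi * f * Ts))

def G_sq_mag(f):
    """Squared magnitude of the frequency response, from (65)."""
    return (1 / tau0 ** 2) / (1 + r ** 2 - 2 * r * math.cos(2 * math.pi * f * Ts))

# (65) matches |G(f)|^2 computed directly from (61):
for f in (1e3, 12.3e3, 100e3):
    print(f, abs(G(f)) ** 2, G_sq_mag(f))
```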

The Significance of Digital Filters in CSP

A prominent digital filter used in CSP is not a time-domain filter but a filter applied to discrete frequencies in the frequency-smoothing method (FSM) of power-spectrum and spectral-correlation estimation. In the FSM, the sampled-frequency cyclic periodogram is convolved with a finite-impulse-response (FIR) moving-average filter in order to quell its inherent randomness and allow the estimator to approach the underlying ideal spectral correlation function.

Time-domain FIR filters are good models for real-world multipath propagation channels, and the effects of real-world propagation channels are always of interest in CSP because they affect the observable cyclostationary features of a transmitted signal.

Time-domain FIR filters are central to discrete-time discrete-frequency frequency-shift (FRESH) filters that can be used for single-sensor separation of two or more cochannel and contemporaneous cyclostationary signals.

Digital filters are central to polyphase filterbanks, as we’ve discussed previously, and such filterbanks are crucial efficient structures that enable advanced CSP techniques such as tunneling.

Recursive (infinite-impulse-response) digital filters are used much less in CSP (at least by me), but sometimes are useful in efficiently implementing long-tail “forgetting factor” kinds of filters, whose outputs mostly reflect the most recent inputs but that do not completely discount inputs from the distant past.

Let the CSP Blog readers know if you use digital filters in your CSP work in the Comments. And, as always, feel free to point out any errors you find in this post.


Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.

