SPTK: Practical Filters

We know that ideal filters are not physically possible. Here we take our first steps toward practical, meaning buildable, linear time-invariant systems.

Previous SPTK Post: The Laplace Transform | Next SPTK Post: The Z Transform

Before we translate the Laplace transform from continuous time to discrete time, deriving the Z transform, let’s take a step back and look at practical filters in continuous time. Practical here stands in opposition to ideal as in the ideal lowpass, highpass, and bandpass filters we studied earlier in the SPTK thread.

Jump straight to Significance of Practical Filters in CSP.

Review of Ideal Filters

Ideal filters are linear time-invariant systems with frequency-response (transfer) functions that are piecewise constant. That is, the transfer functions of ideal filters, H(f), are composed of one or more rectangles. Borrowing figures from the post on ideal filters, the transfer functions of the ideal lowpass, bandpass, and highpass filters are shown in the (a) subplots of Figures 1, 2, and 3 of the present post.

Figure 1. (a) Ideal lowpass filter transfer function and (b) hand-sketched practical (non-ideal) lowpass filter transfer function.
Figure 2. (a) Ideal bandpass filter transfer function and (b) hand-sketched practical (non-ideal) bandpass filter transfer function.
Figure 3. (a) Ideal highpass filter transfer function and (b) hand-sketched practical (non-ideal) highpass filter transfer function.

The ideal filters are ideal in the sense that they perfectly select and reject frequency components: a frequency component of the input signal is either passed (appears in the output) with a scale factor H(f) = G > 0 or it is perfectly rejected (absent from the output) where H(f) = 0.

However, ideal filters are unrealizable, which means that they cannot be constructed using physical elements such as resistors, capacitors, and inductors, the ‘elements’ in lumped-element systems such as simple passive circuits (no transistors). The basic reason they cannot be built in the real world is that they are non-causal. That is, the impulse-response function h(t) (the inverse transform of the transfer function H(f)) is non-zero for t < 0. Such a filter must combine inputs from the past and the future to produce the output at the present time, which is impossible for a system operating in real time.

In the physical world, we construct time-invariant systems (filters) using various elements, as mentioned above, and the resulting time-domain behavior (such as the output signal given some input signal) can be described in terms of differential equations, as we touched on in the SPTK post on the Laplace transform. The order of the differential equation, which is the order of the highest derivative appearing in the equation, determines the complexity of the system. More complex (higher-order) systems can produce more complex transfer functions, and therefore may more closely approximate ideal filters.

In this post, we’ll take a look at first- and second-order linear time-invariant systems that have input-output relations described by linear differential equations. We call such systems practical filters. It might be helpful to point out that the practical filters discussed here are quite general in that the very same equations model electrical-engineering systems like lumped circuits and mechanical systems involving elements such as springs, masses, and dashpots. I’m sure they describe other physical systems too. So we’re not just doing signal processing in the electrical-engineering context here.
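To make that generality concrete, consider the textbook series RC circuit: the input voltage v_{in}(t) drives a resistor R in series with a capacitor C, and the output v_{out}(t) is taken across the capacitor. Kirchhoff’s voltage law together with the capacitor relation i(t) = C v_{out}^\prime(t) gives

\displaystyle RC \, v_{out}^\prime (t) + v_{out}(t) = v_{in}(t),

which is exactly the first-order equation (1) below with time constant \tau = RC.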

First-Order Practical Filters

These filters are governed by the simple first-order linear differential equation

\displaystyle \tau y^\prime (t) + y(t) = x(t) \hfill (1)

where x(t) is interpreted as the filter input, y(t) is the filter output, and \tau is a constant with significance that will become apparent as we develop solutions to the equation.

As usual, we’d like to find the impulse-response function and the transfer function for this system, but to do that we should make sure that (1) really does correspond to a linear time-invariant system, for only such systems have well-defined impulse responses and transfer functions of the sort we’ve been developing and using in the SPTK series of CSP Blog posts.

Suppose the pair of signals \{x_1(t), y_1(t)\} obeys (1) with x(t) = x_1(t) and y(t) = y_1(t), and the same for \{x_2(t), y_2(t)\}. Then we have the two equations given by

\displaystyle \tau y_1^\prime(t) + y_1(t) = x_1(t) \hfill (2)

\displaystyle \tau y_2^\prime(t) + y_2(t) = x_2(t) \hfill (3)

If we simply add these two equations together, we obtain another equation,

\displaystyle \tau \left( y_1^\prime(t) + y_2^\prime(t)\right) + \left(y_1(t) + y_2(t)\right) = \left(x_1(t) + x_2(t)\right) \hfill (4)

Then because the derivative is linear, we have

\displaystyle \tau \frac{d}{dt} \left(y_1(t) + y_2(t)\right) + \left(y_1(t) + y_2(t)\right) = \left(x_1(t) + x_2(t)\right) \hfill (5)

which implies that the output y_1(t) + y_2(t) corresponds to the input x_1(t) + x_2(t): the output for the sum of two arbitrary inputs is the sum of the outputs for those inputs. Scaling works the same way: multiplying (2) through by a constant c shows that the input c\,x_1(t) produces the output c\,y_1(t). Together these two properties establish linearity of the underlying system that gives rise to the differential equation.

Since (1) is valid for every time instant, it also holds with t replaced by t + T; that is, if the input x(t) produces the output y(t), then the delayed (or advanced) input x(t + T) produces the output y(t + T), establishing time-invariance.

A typical way to proceed with solving such differential equations is through transform techniques. This simply means applying a well-defined and well-behaved transformation (operation) to both sides of the equation and then using algebra to solve for the desired quantity, say the impulse response or transfer function. The transform can be the Fourier or Laplace transform or, as we’re leading up to in this part of the SPTK sequence, the Z transform for discrete-time systems.

Let’s carefully apply the Fourier transform to each side of (1), keeping in mind our usual notation that links the time and frequency domains, such as {\cal{F}}[x(t)] = X(f) and X(f) \Longleftrightarrow x(t). The analysis looks like this

\displaystyle {\cal{F}}\left[ \tau y^\prime (t) + y(t) \right] = {\cal{F}}\left[x(t)\right] \hfill (6)

\displaystyle \tau (i2\pi f) Y(f) + Y(f) = X(f), \hfill (7)

where we took advantage of a result we derived previously, namely y^\prime(t) \Longleftrightarrow (i 2 \pi f) Y(f).

If X(f) \neq 0, we can rearrange this equation to yield an expression for the transfer function Y(f)/X(f),

\displaystyle \frac{Y(f)}{X(f)} = \frac{1}{i2\pi f \tau + 1} \hfill (8)

Recall from (13) in the Laplace Transform post that the Fourier transform of the causal decaying exponential e^{-at}u(t) is a simple rational function in f

\displaystyle e^{-at}u(t) \Longleftrightarrow \frac{1}{i2\pi f + a}, \ \ \ a > 0. \hfill (9)

We can write the transfer function in the appropriate form with a little algebra

\displaystyle H(f) = \frac{Y(f)}{X(f)} = \frac{1}{i2\pi f \tau + 1} = \frac{1}{\tau} \left[\frac{1}{i2\pi f + \tau^{-1}}\right], \hfill (10)

and therefore by inspection, we have an expression for the impulse-response function for the first-order practical filter,

\displaystyle h(t) = {\cal{F}}^{-1} [H(f)] = \frac{1}{\tau} e^{-t/\tau} u(t). \hfill (11)
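As a quick numerical sanity check, we can sample h(t) from (11), approximate its Fourier transform with a discrete Fourier transform, and compare against the closed form (10). Here is a minimal Python sketch (the CSP Blog's own computations are done in MATLAB; all names and parameter values below are my choices):

```python
import numpy as np

tau = 0.5   # time constant in seconds (arbitrary choice)
dt = 1e-3   # sampling interval, small relative to tau
t = np.arange(0, 20 * tau, dt)

# Impulse response from (11): h(t) = (1/tau) exp(-t/tau) u(t)
h = (1.0 / tau) * np.exp(-t / tau)

# Riemann-sum approximation of the Fourier transform: FFT times dt
H_num = np.fft.rfft(h) * dt
f = np.fft.rfftfreq(len(h), dt)

# Closed-form transfer function from (10)
H_true = 1.0 / (1j * 2 * np.pi * f * tau + 1.0)

# Agreement is good; the residual shrinks as dt decreases
print(np.max(np.abs(H_num - H_true)))
```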

We have found expressions for the impulse-response function and the transfer function. We’ll want to plot these and investigate their behavior as a function of \tau, but first let’s obtain one more important function: the system response to a unit-step-function input, more commonly known as the step response.

Step Response for the First-Order Practical Filter

We have an impulse response h(t) in (11) and an input of interest, x(t) = u(t). We can employ the input-output relation for a linear time-invariant system, which is that the output y(t) is the convolution of the input x(t) with the impulse response h(t),

\displaystyle y(t) = y_{sr}(t) = x(t) \otimes h(t) = \int_{-\infty}^\infty x(v) h(t-v)\, dv \hfill (12)

\displaystyle = \int_0^\infty h(t-v) \, dv \hfill (13)

\displaystyle = \int_0^\infty \frac{1}{\tau} e^{-(t-v)/\tau} u(t-v) \, dv \hfill (14)

Figure 4. Illustration of the integrand factor u(t-v) in (14) for various values of t.

This convolution can be non-zero only for t \ge 0, because for t < 0 the factor u(t-v) is zero over the entire region of integration v \in [0, \infty) (see Figure 4).

Keeping in mind that the solution to (14) involves t \ge 0, we can evaluate the integral easily,

\displaystyle y_{sr}(t) = \int_0^t \frac{1}{\tau} e^{-(t-v)/\tau} \, dv \ \ \ (t \ge 0) \hfill (15)

\displaystyle = \frac{1}{\tau} e^{-t/\tau} \int_0^t e^{v/\tau} \, dv \ \ \ (t \ge 0) \hfill (16)

\displaystyle = \frac{1}{\tau} e^{-t/\tau} \left. \frac{e^{v/\tau}}{1/\tau} \right|_{v=0}^t \ \ \ (t \ge 0) \hfill (17)

\displaystyle = e^{-t/\tau} \left[ e^{t/\tau} - e^0\right] \ \ \ (t \ge 0) \hfill (18)

\displaystyle = \left[1 - e^{-t/\tau} \right] u(t) = y_{sr}(t). \hfill (19)
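We can verify (19) numerically by convolving a sampled version of the impulse response (11) with a sampled unit step, mirroring the convolution integral (12). A short Python sketch (again, names and values are mine):

```python
import numpy as np

tau, dt = 0.5, 1e-3
t = np.arange(0, 10 * tau, dt)

h = (1.0 / tau) * np.exp(-t / tau)   # impulse response (11), sampled
x = np.ones_like(t)                  # unit-step input on t >= 0

# Discrete approximation of the convolution integral (12)
y_num = np.convolve(x, h)[: len(t)] * dt

y_true = 1.0 - np.exp(-t / tau)      # closed-form step response (19)
print(np.max(np.abs(y_num - y_true)))  # small, and shrinks with dt
```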

Notice that for any \tau > 0, as t \rightarrow \infty, the step response approaches one, and so eventually the response to the step function mirrors the step function. (What kind of filter is that?)

The Influence of \tau on h(t), H(f), and y_{sr}(t)

Transfer Function

Notice that the transfer function cannot be zero and is well-behaved for all frequencies f. This is because the magnitude of the denominator is

\displaystyle |i2\pi f \tau + 1| = \left[ (2\pi f \tau)^2 + 1 \right]^{1/2} \hfill (20)

which can’t be zero for any combination of real f and \tau. The maximum of the transfer function corresponds to the minimum of the denominator, and elementary calculus places that minimum at f=0. So the transfer function peaks at f=0. How fast does it decay as we increase |f|?

Let’s look at a related function, which is the squared magnitude of H(f) expressed in decibels

\displaystyle G(f) = 10\log_{10} (|H(f)|^2) \hfill (21)

This function simplifies to

\displaystyle G(f) = -10 \log_{10} \left( 4\pi^2 f^2 \tau^2 + 1\right) \hfill (22)

for which it is easy to fill out an approximate table of values using f = k/(2\pi\tau),

f                      \approx G(f) (dB)
0                      0
(1/2)/(2\pi\tau)       -1
1/(2\pi\tau)           -3
3/(2\pi\tau)           -10
10/(2\pi\tau)          -20
Table 1. Key values of the practical first-order filter transfer function.
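The table entries follow by substituting f = k/(2\pi\tau) into (22), which collapses to G = -10\log_{10}(k^2+1), independent of \tau. A throwaway Python check (my own sketch):

```python
import numpy as np

# G at f = k/(2*pi*tau) reduces to -10*log10(k^2 + 1) for any tau > 0
for k in [0, 0.5, 1, 3, 10]:
    print(k, -10 * np.log10(k**2 + 1))
# k = 0, 0.5, 1, 3, 10 give approximately 0, -1, -3, -10, -20 dB
```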

Plots of G(f) for three values of \tau are shown in Figure 5.

Figure 5. The squared-magnitude of the first-order transfer function for various values of \tau, expressed in decibels (G(f)). The dotted vertical lines show where the transfer function has decreased by 3 dB from its peak at f=0.

From the table or the plot, we can see that the squared magnitude of the transfer function decays to about half its peak value at the frequency f = 1/(2\pi\tau) (the 3-dB point) and to about one-tenth of its peak at the frequency f = 3/(2\pi\tau). Unlike the ideal filters, where the bandwidth of the filter can be unambiguously determined by the width of an appropriate rectangle (see Figures 1–3), practical filters have transfer functions that vary smoothly and are lump- or bump-like, in a manner highly reminiscent of the power spectra of communication signals. When we developed the sampling theorem, we encountered the problem of specifying the “maximum frequency” or the “bandwidth” of a signal with a smooth lump-like spectrum, and that led to the realization that there is no unambiguous or always-preferred measure of the bandwidth of a real-world signal.

In the case of practical filters, we have the same problem as with signals: how should we think about, specify, or constrain the bandwidth of the passband or stopband of a practical filter? We generally do the same thing as for signals. So here, for the first-order filters, we can characterize the filter in terms of its 3-dB bandwidth (see Figure 5), its 10-dB bandwidth (see Table 1), a 20-dB bandwidth, a 99% bandwidth, etc.

Since the bandwidth of the filter, however we define bandwidth, is clearly a function of \tau, we see that the \tau parameter in the original differential equation controls the bandwidth of the equivalent filter, and that this filter is a lowpass filter.
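In fact, the 3-dB bandwidth has a simple closed form: setting the squared magnitude of (10) equal to half its peak value,

\displaystyle |H(f)|^2 = \frac{1}{(2\pi f \tau)^2 + 1} = \frac{1}{2} \Longrightarrow f_{3dB} = \frac{1}{2\pi\tau},

so the 3-dB bandwidth is inversely proportional to \tau, consistent with Table 1 and Figure 5.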

Impulse Response

The impulse-response function is given by (11) and is a simple decaying exponential function. For t=K\tau, the value of the impulse response is e^{-K}/\tau = e^{-K} h(0). So by t=3\tau, the response has decayed to about one-twentieth of its peak at t =0. Figure 6 shows plots of h(t) for the same three values of \tau used in Figure 5.

Figure 6. Impulse-response functions for the first-order filter. These impulse responses correspond to the three transfer functions shown in Figure 5.

The energy of the impulse-response function on t \in (-\infty, T] is

\displaystyle E_T = \int_{-\infty}^T h^2(t) \, dt = \int_0^T \frac{1}{\tau^2} e^{-2t/\tau} \, dt \hfill (23)

\displaystyle = \frac{1}{\tau^2} \left. \frac{e^{-2t/\tau}}{-2/\tau} \right|_{t=0}^{t=T} \hfill (24)

\displaystyle = \frac{1}{2\tau} \left[ 1 - e^{-2T/\tau} \right]. \hfill (25)

The total energy is 1/(2\tau) as T \rightarrow \infty, and for T=3\tau we have

\displaystyle E_{3\tau} = \frac{1}{2\tau}[1-e^{-6}] \approx \frac{1}{2\tau} = E_{\infty}. \hfill (26)

So \tau determines the energy in the impulse response, and the function is well-approximated by restricting it to the interval t \in [0, 3\tau]. Note that even the interval [0, \tau] captures most of the energy,

\displaystyle E_\tau = \frac{1}{2\tau} [1-e^{-2}] \approx 0.86\, E_\infty \hfill (27)

We can, therefore, interpret \tau as a time constant for the system, which means that, to a good approximation, fluctuations in the input signal occurring over a time interval of about \tau seconds are combined to yield an output, but values of the input separated by much more than \tau seconds are not. This behavior is consistent with our interpretation of a lowpass filter (which the first-order system definitely is; see Figure 5) as a kind of moving-average filter.

Step Response

The step response (19) is plotted for the three \tau values of interest in Figure 7. We note that the response achieves about 2/3 of its final value of one by t = \tau (since 1 - e^{-1} \approx 0.63), and after 2\tau seconds the response is within about 14% of its final value (since e^{-2} \approx 0.135). These empirical facts simply cement the idea of \tau as a time constant controlling the temporal (and therefore spectral) behavior of the system outputs.

Figure 7. Step-response function for the first-order system described by (1) and (11). These curves show how the system output evolves with time if the input is a unit-step function u(t).

Before changing topics, let’s take a quick look at the phase of H(f). Recall that for a filter to provide a delayed version of its input, the phase of the transfer function must be a linear function over the passband(s) of the filter. (This arises from consideration of the transfer function of a pure-delay system h(t) = A\delta(t-D).) The phase is shown in Figure 8, along with vertical dotted lines that remind us of the 3-dB bandwidths of the three filters.

Figure 8. Phase of the first-order transfer function. Is the phase approximately linear over the passbands of the filters? The dotted vertical lines indicate the 3-dB bandwidths we noted in Figure 5.
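To spell out the pure-delay connection: the delay system h(t) = A\delta(t-D) has transfer function

\displaystyle H(f) = A e^{-i2\pi f D},

whose phase -2\pi D f is exactly linear in f, with slope set by the delay D. For our first-order filter, the phase is

\displaystyle \angle H(f) = -\tan^{-1}(2\pi f \tau),

which is approximately linear with slope -2\pi\tau (an effective delay of about \tau seconds) only for frequencies well inside the 3-dB bandwidth.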

Second-Order Practical Filters

Let’s take the next step forward and increase the order of the governing differential equation by one, yielding the second-order filters,

\displaystyle y^{\prime\prime}(t) + 2k\omega y^\prime(t) + \omega^2 y(t) = \omega^2 x(t), \hfill (28)

where k > 0 and \omega > 0.

I’ll leave it to you to check whether the effective system defined by input x(t) and output y(t) is linear and time-invariant. I’ll proceed as if it is.

We know that the derivatives of y(t) are related to the transform of y(t) by the following relations

\displaystyle y^{\prime\prime}(t) \Longleftrightarrow (i2\pi f)^2 Y(f) \hfill (29)

\displaystyle y^{\prime}(t) \Longleftrightarrow (i2\pi f) Y(f), \hfill (30)

so that Fourier transforming both sides of (28) is simple,

\displaystyle [(i2\pi f)^2 + 2k\omega(i2\pi f) + \omega^2]Y(f) = \omega^2 X(f) \hfill (31)

which immediately allows us to solve for the transfer function (yay transform techniques!)

\displaystyle H(f) = \frac{Y(f)}{X(f)} = \frac{\omega^2}{(i2\pi f)^2 + 2k\omega(i2\pi f) + \omega^2} \hfill (32)

To find the impulse-response function we need to inverse transform the transfer function (32). We begin by reexpressing the rational function using the shorthand g = 2\pi f and some algebra to yield

\displaystyle H(f) = \frac{\omega^2}{-g^2 + i 2 k \omega g + \omega^2} \hfill (33)

\displaystyle = \frac{1}{(ig/\omega)^2 + \frac{i2kg}{\omega} + 1} \hfill (34)

If we can factor the denominator we can then use the known Fourier transform of e^{-at}u(t) to inverse-transform the result. Applying the trusty quadratic formula, we obtain

\displaystyle (ig/\omega)^2 + 2k(ig/\omega) + 1 = (ig/\omega - (-k+\sqrt{k^2-1}))(ig/\omega - (-k-\sqrt{k^2-1})) \hfill (35)

\displaystyle = \left(\frac{ig}{\omega} - r_1\right)\left(\frac{ig}{\omega}-r_2\right) \hfill (36)

Using more elementary algebra we obtain results for both k \neq 1 and k=1,

\displaystyle H(f) = \frac{1}{(ig/\omega + 1)^2}, \ \ \ k = 1 \hfill (37)

and

\displaystyle H(f) = \frac{\omega}{2\sqrt{k^2-1}} \left[ \frac{1}{ig - r_1\omega} - \frac{1}{ig - r_2\omega} \right], \ \ \ k > 0, k \neq 1. \hfill (38)

Now we can use our knowledge of Fourier transform pairs to find expressions for the impulse-response functions for k=1 and k \neq 1. Recall that the basic transform pairs for the kinds of simple rational functions appearing in (37) and (38) are

\displaystyle \frac{1}{i2\pi f + a} \Longleftrightarrow e^{-at}u(t) \hfill (39)

\displaystyle \left( \frac{1}{i2\pi f + a} \right)^2 \Longleftrightarrow t e^{-at}u(t). \hfill (40)

This immediately leads to the following result for k=1,

\displaystyle h(t) = \omega^2 t e^{-\omega t} u(t), \ \ \ k=1. \hfill (41)

For k>1, both roots r_1 and r_2 are real because k^2 - 1 > 0; moreover, both are negative, since \sqrt{k^2-1} < k, so the corresponding time functions decay. The factorization of H(f) can be written as

\displaystyle H(f) = \frac{B}{i2\pi f - \omega r_1} - \frac{B}{i2\pi f - \omega r_2} \hfill (42)

with \displaystyle B = \frac{\omega}{2\sqrt{k^2-1}}. The inverse transform follows easily

\displaystyle h(t) = \left[ B e^{r_1\omega t} - Be^{r_2\omega t} \right] u(t) \hfill (43)

\displaystyle = \frac{\omega}{2\sqrt{k^2-1}} e^{-k\omega t} \left[ e^{\omega\sqrt{k^2-1} t} - e^{-\omega\sqrt{k^2-1} t} \right] u(t), \ \ \ k>1. \hfill (44)

Finally, for k < 1, the two roots are r_1 = -k + i\sqrt{1-k^2} and r_2 = -k -i\sqrt{1-k^2}, so similar algebra and use of the frequency-shifting property of the Fourier transform

X(f-f_0) \Longleftrightarrow x(t)e^{i2\pi f_0 t} \hfill (45)

leads to the result

\displaystyle h(t) = \frac{\omega}{\sqrt{1-k^2}} e^{-k \omega t} \sin\left(\omega \sqrt{1-k^2}t\right) u(t), \ \ \ 0 < k < 1. \hfill (46)

For the step response, we’ll forgo the derivation of the formula and just convolve the impulse response with a step function in MATLAB.
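Here is a minimal Python sketch of that numerical route (the post's figures come from MATLAB; the function below and all names in it are mine). It implements the impulse responses (41), (44), and (46) for the three damping regimes, then forms the step response as the running integral of h(t), which is what convolving with u(t) amounts to:

```python
import numpy as np

def h2(t, k, w):
    """Impulse response of the second-order filter (28) for t >= 0.

    Implements (41) for k = 1, (44) for k > 1, and (46) for 0 < k < 1.
    """
    t = np.asarray(t, dtype=float)
    if np.isclose(k, 1.0):                # critically damped, (41)
        return w**2 * t * np.exp(-w * t)
    if k > 1.0:                           # overdamped, (44)
        s = np.sqrt(k**2 - 1.0)
        return (w / (2 * s)) * np.exp(-k * w * t) * (
            np.exp(w * s * t) - np.exp(-w * s * t))
    s = np.sqrt(1.0 - k**2)               # underdamped, (46)
    return (w / s) * np.exp(-k * w * t) * np.sin(w * s * t)

dt = 1e-3
t = np.arange(0.0, 40.0, dt)
for k in (0.25, 1.0, 1.5):
    # Convolving h with u(t) is the running integral of h
    y_sr = np.cumsum(h2(t, k, w=1.0)) * dt
    print(k, y_sr[-1])   # each approaches 1.0, the DC gain H(0)
```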

The Influence of k and \omega on h(t), H(f), and y_{sr}(t)

Like the first-order filters, the practical second-order filters that arise from differential equations relating system input to system output are lowpass filters. The combination of \omega and k determines the bandwidth of the lowpass filter (however you define it). For fixed k, increasing \omega increases the bandwidth, and for fixed \omega, increasing k decreases the bandwidth. These trends, and the general shapes of the obtainable transfer functions, are shown in Figure 9.

Figure 9. Second-order filter transfer functions for various combinations of \omega and k. Note the lack of a resemblance to the transfer functions of ideal filters.

The filters have approximately linear phase across their passbands, as illustrated in Figure 10. This means that the input signal is shaped by the transfer function, which largely selects the signal's frequency components near zero frequency, but the signal is also delayed: distinct frequency components are delayed by equal amounts, so that the overall signal is not significantly distorted beyond the frequency selection itself.

Figure 10. Phase characteristics of the second-order filters for various combinations of \omega and k.
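As a small supplement to Figure 10 (this calculation is mine, not in the original post), the phase of (32) for small f is

\displaystyle \angle H(f) = -\tan^{-1}\left(\frac{2k\omega(2\pi f)}{\omega^2 - (2\pi f)^2}\right) \approx -2\pi f \left(\frac{2k}{\omega}\right), \ \ \ |2\pi f| \ll \omega,

so within the passband the second-order filter acts approximately as a pure delay of D \approx 2k/\omega seconds.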

The impulse-response functions corresponding to the transfer functions in Figures 9 and 10 are shown in Figure 11. Compare these to the impulse-response functions for the first-order filters in Figure 6. These functions are more complicated, indicating that they can ‘do more’ than their first-order cousins.

Figure 11. Impulse-response functions for the second-order filter transfer functions shown in Figures 9 and 10. Note the relatively high complexity compared to the first-order impulse-response functions in Figure 6. The functions are more wiggly for smaller k and more stretched out for larger \omega.

Turning to the step response, we obtain the plots shown in Figure 12. Several common filtering and control-system terms are typically introduced with this kind of plot (see also Figure 13): overshoot, which quantifies the maximum amount by which the step response exceeds its eventual long-term value (here unity); settling time, which quantifies the time needed for the step response to achieve some small error relative to its long-term value; and ringing, which describes the oscillatory nature of the response as it approaches its final value.

Figure 12. System responses to a unit-step-function (u(t)) input for various second-order practical filters. Compare with the step responses for the first-order filters in Figure 7.
Figure 13. Annotated version of the step response in Figure 12. Here we point out the basic features of the second-order step response of overshoot, settling time, and ringing.

Finally, it is worth comparing these practical-filter functions with those for an ideal filter. In Figure 14 I’ve plotted the transfer function, impulse-response function, and step-response function for an ideal lowpass filter with bandwidth equal to the 10-dB bandwidth of the second-order filter with \omega = 0.5 and k = 1.5 in Figure 9. As we studied in the post on ideal filters, to make such filters causal requires truncation and delay of the impulse-response function, which is clearly non-zero for t<0. Truncating to [-20, 20] and shifting by 20 leads to the step response in the lower plot of Figure 14. We see the ringing, overshoot, and settling, but we have to live with the large delay as well. The practical filters do not have that latter feature.

Figure 14. Transfer function, impulse-response function, and step-response function for an ideal filter with bandwidth equal to the 10-dB bandwidth of the practical second-order filter in Figure 9 for \omega = 0.5 and k=1.5.

Discussion

The practical filters we’ve looked at in this SPTK post are a first step away from the ideal filters we used to introduce the basic concepts and functions involved in linear time-invariant system analysis (filtering). There are many, many more kinds of practical filters that allow all kinds of engineering tradeoffs between complexity (how hard it is to do the convolution) and performance (passing frequency components of interest and attenuating those not of interest). Examples are Butterworth, Chebyshev, Gaussian, and others.

Our trajectory in the SPTK thread is to move toward digital filters and the mathematics that pertains thereto. We are now in a position to introduce a major analysis tool for digital filters called the Z transform.

Significance of Practical Filters in CSP

Not much! But we are building up to digital filters, which include the ubiquitous finite-impulse-response (FIR) and infinite-impulse-response (IIR) filters.

One could use a filter of the sort we’ve studied here in an algorithm such as the frequency-smoothing method for spectral correlation estimation, instead of a simple rectangular moving-average filter. The effect on the resulting estimate would likely be salubrious, but the computational cost would be much increased, as the moving-average filter (smoother) is extremely cheap to implement.

Previous SPTK Post: The Laplace Transform | Next SPTK Post: The Z Transform

Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.

2 thoughts on “SPTK: Practical Filters”

  1. Hey Chad! Cool post. Hope to see more SPTK posts like this one.

    I think it is worth explaining why the step responses for these two classes of practical filters asymptotically approach unity as time increases. As you point out, by looking at the transfer function plots, all these filters are lowpass filters, and the filter gain H(f) for f=0 is always unity. Looking at the impulse-response functions, we can see that although they are not strictly duration-limited, they have reasonably short (in time) effective support regions, so that at any time t, the output of the filter combines, again effectively, only a small section of the past input signal to form the output signal. When time has progressed very far from the moment at which the step function was applied, the filter is combining past inputs that are all constant (equal to one). So it is as if the filter is being applied to a constant, that is, to a sine-wave input with frequency zero, for output times very far from the step. Since the filter is lowpass with unit gain for frequency zero, the output must be equal to the input. So, the step responses all asymptotically approach one.

  2. I appreciate that you aren’t roaming the city streets causing trouble and are instead writing posts like this one, but man, Chad, this material is as stale as a week-old baguette. Maybe switch to something a little more modern? Won’t you please think of the young people that need help with today’s engineering? You know, like ML?

    Also, your plots are usually OK, but the color scheme in that last figure is just terrible. Going colorblind?
