Before we translate the Laplace transform from continuous time to discrete time, deriving the Z transform, let’s take a step back and look at practical filters in continuous time. Practical here stands in opposition to ideal as in the ideal lowpass, highpass, and bandpass filters we studied earlier in the SPTK thread.
Review of Ideal Filters
Ideal filters are linear time-invariant systems with frequency-response (transfer) functions that are piecewise constant. That is, the transfer functions of ideal filters, $latex H(f)$, are composed of one or more rectangles. Taking some figures from the post on ideal filters, the ideal lowpass, bandpass, and highpass filters have transfer functions shown by the (a) subplots in Figures 1, 2, and 3 of the present post.
The ideal filters are ideal in the sense that they perfectly select and reject frequency components–a frequency component of the input signal is either passed (appears in the output) with a scale factor when $latex H(f) \neq 0$, or it is perfectly rejected (absent in the output) when $latex H(f) = 0$.
However, ideal filters are unrealizable, which means that they cannot be constructed using physical elements such as resistors, capacitors, and inductors–the ‘elements’ in lumped-element systems such as simple passive circuits (no transistors). The basic reason they cannot be built in the real world is that they are non-causal. That is, for a non-causal system, the impulse-response function $latex h(t)$ (inverse transform of the transfer function $latex H(f)$) is non-zero for $latex t < 0$. This means that the filter must combine inputs from the past and the future to produce the output at the present time, which is impossible.
In the physical world, we construct time-invariant systems (filters) using various elements, as mentioned above, and the resulting time-domain behavior (such as the output signal given some input signal) can be described in terms of differential equations, as we touched on in the SPTK post on the Laplace transform. The order of the differential equation, which is the order of the highest derivative in the equation, determines the complexity of the system. More complex (higher-order) systems can produce more complex transfer functions, and therefore may more closely approximate ideal filters.
In this post, we’ll take a look at first- and second-order linear time-invariant systems that have input-output relations described by linear differential equations. We call such systems practical filters. It might be helpful to point out that the practical filters discussed here are quite general in that the very same equations model electrical-engineering systems like lumped circuits and mechanical systems involving elements such as springs, masses, and dashpots. I’m sure they describe other physical systems too. So we’re not just doing signal processing in the electrical-engineering context here.
First-Order Practical Filters
These filters are governed by the simple first-order linear differential equation

$latex \displaystyle \tau \frac{dy(t)}{dt} + y(t) = x(t), \ \ \ (1)$

where $latex x(t)$ is interpreted as the filter input, $latex y(t)$ is the filter output, and $latex \tau > 0$ is a constant with significance that will become apparent as we develop solutions to the equation.
As usual, we’d like to find the impulse-response function and the transfer function for this system, but to do that we should make sure that (1) really does correspond to a linear time-invariant system, for only such systems have well-defined impulse responses and transfer functions of the sort we’ve been developing and using in the SPTK series of CSP Blog posts.
Suppose the pair of signals $latex (x_1(t), y_1(t))$ obeys (1) with $latex x = x_1$ and $latex y = y_1$, and the same for $latex (x_2(t), y_2(t))$. Then we have the two equations given by

$latex \displaystyle \tau \frac{dy_1(t)}{dt} + y_1(t) = x_1(t)$

$latex \displaystyle \tau \frac{dy_2(t)}{dt} + y_2(t) = x_2(t).$

Then if we simply add these two equations together, we obtain another equation, which is given by

$latex \displaystyle \tau \frac{dy_1(t)}{dt} + \tau \frac{dy_2(t)}{dt} + y_1(t) + y_2(t) = x_1(t) + x_2(t).$

Then because the derivative is linear, we have

$latex \displaystyle \tau \frac{d}{dt}\left[y_1(t) + y_2(t)\right] + \left[y_1(t) + y_2(t)\right] = x_1(t) + x_2(t),$

which implies that the output $latex y_1(t) + y_2(t)$ corresponds to the input $latex x_1(t) + x_2(t)$, and therefore that the output for the sum of two arbitrary inputs is the sum of the outputs for those inputs. A nearly identical argument shows that scaling the input scales the output by the same factor, establishing linearity of the underlying system that gives rise to the differential equation.
Since (1) is valid for any time $latex t$, replace $latex t$ with $latex t - t_0$ throughout, which shows that a delayed (or advanced) input $latex x(t - t_0)$ gives rise to the output $latex y(t - t_0)$, establishing time-invariance.
A typical way to proceed with solving such differential equations is through transform techniques. This simply means applying a well-defined and well-behaved transformation (operation) to both sides of the equation and then using algebra to solve for the desired quantity, say the impulse response or transfer function. The transform can be the Fourier or Laplace transform or, as we’re leading up to in this part of the SPTK sequence, the Z transform for discrete-time systems.
Let’s carefully apply the Fourier transform to each side of (1), keeping in mind our usual notation that links the time and frequency domains, such as $latex x(t) \Leftrightarrow X(f)$ and $latex y(t) \Leftrightarrow Y(f)$. The analysis looks like this

$latex \displaystyle \tau (i2\pi f) Y(f) + Y(f) = X(f),$

where we took advantage of a result we derived previously, which is $latex \frac{dy(t)}{dt} \Leftrightarrow (i2\pi f) Y(f)$.
If $latex X(f) \neq 0$, we can rearrange this equation to yield an expression for the transfer function $latex H(f) = Y(f)/X(f)$,

$latex \displaystyle H(f) = \frac{1}{1 + i2\pi f \tau}.$
Recall from (13) in the Laplace Transform post that the Fourier transform of the causal decaying exponential $latex e^{-at}u(t)$ is a simple rational function in $latex f$,

$latex \displaystyle e^{-at}u(t) \Leftrightarrow \frac{1}{a + i2\pi f}.$
We can write the transfer function in the appropriate form with a little algebra,

$latex \displaystyle H(f) = \frac{1/\tau}{1/\tau + i2\pi f},$
and therefore by inspection, we have an expression for the impulse-response function for the first-order practical filter,

$latex \displaystyle h(t) = \frac{1}{\tau} e^{-t/\tau} u(t). \ \ \ (11)$
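As a quick numerical sanity check on this derivation, we can sample an impulse response and compare its FFT against a closed-form transfer function. This is just a sketch, not code from the post: the formulas $latex h(t) = (1/\tau)e^{-t/\tau}u(t)$ and $latex H(f) = 1/(1 + i2\pi f\tau)$ used below, and the value of `tau`, are my assumptions.

```python
import numpy as np

tau = 0.5   # assumed time constant, seconds
dt = 1e-4   # sample spacing for the numerical approximation
t = np.arange(0, 20 * tau, dt)

# Assumed first-order impulse response h(t) = (1/tau) e^{-t/tau} u(t)
h = (1.0 / tau) * np.exp(-t / tau)

# Riemann-sum FFT approximates the continuous-time Fourier transform
H_num = np.fft.rfft(h) * dt
f = np.fft.rfftfreq(len(t), dt)

# Closed-form transfer function H(f) = 1 / (1 + i 2 pi f tau)
H_ana = 1.0 / (1.0 + 1j * 2 * np.pi * f * tau)

print(np.max(np.abs(H_num - H_ana)))  # small: the two transforms agree
```

The maximum discrepancy shrinks with the sample spacing, as expected for a Riemann-sum approximation of the Fourier integral.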
We have found expressions for the impulse-response function and the transfer function. We’ll want to plot these and investigate their behavior as a function of , but first let’s obtain one more important function: the system response to a unit-step-function input, more commonly known as the step response.
Step Response for the First-Order Practical Filter
We have an impulse response in (11) and an input of interest, the unit step $latex x(t) = u(t)$. We can employ the input-output relation for a linear time-invariant system, which is that the output is the convolution of the input with the impulse response $latex h(t)$,

$latex \displaystyle s(t) = \int_{-\infty}^{\infty} h(v) u(t-v) \, dv. \ \ \ (14)$

This convolution is only non-zero, potentially, for $latex t > 0$, because the integrand is zero for $latex v < 0$ (where $latex h(v) = 0$) and for $latex v > t$ (where $latex u(t-v) = 0$), so that for $latex t \leq 0$ the integrand vanishes over the entire region of integration (see Figure 4).
Keeping in mind that the integrand in (14) is non-zero only for $latex 0 \leq v \leq t$, we can evaluate the integral easily,

$latex \displaystyle s(t) = \int_0^t \frac{1}{\tau} e^{-v/\tau} \, dv = \left(1 - e^{-t/\tau}\right) u(t). \ \ \ (19)$
Notice that for any $latex \tau > 0$, as $latex t \rightarrow \infty$, the step response approaches one, and so eventually the response to the step function mirrors the step function. (What kind of filter is that?)
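The closed-form step response can be sanity-checked by convolving a sampled impulse response with a unit step. Everything below (the form of $latex h(t)$, the closed-form answer, and the time constant) is an assumption for illustration, not code from the post.

```python
import numpy as np

tau = 1.0   # assumed time constant
dt = 1e-3
t = np.arange(0, 10 * tau, dt)

h = (1.0 / tau) * np.exp(-t / tau)   # assumed impulse response
u = np.ones_like(t)                  # unit-step input

# Discrete approximation of the convolution integral
s_num = np.convolve(h, u)[: len(t)] * dt
s_ana = 1.0 - np.exp(-t / tau)       # assumed closed-form step response

# At t = tau the response is near 1 - 1/e, about 2/3 of its final value
print(s_num[int(tau / dt)], 1 - np.exp(-1))
```

The numerical convolution and the closed form agree to within the discretization error, and the value at one time constant is about 0.63.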
The Influence of $latex \tau$ on $latex H(f)$, $latex h(t)$, and $latex s(t)$
Notice that the transfer function $latex H(f)$ cannot be zero and is well-behaved for all frequencies $latex f$. This is because the magnitude of the denominator is

$latex \displaystyle |1 + i2\pi f \tau| = \left(1 + (2\pi f \tau)^2\right)^{1/2},$

which can’t be zero for any combination of real $latex f$ and $latex \tau$. The maximum of the transfer-function magnitude is at $latex f = 0$, because the maximum will correspond to the minimum of the denominator, and elementary calculus tells us that minimum is at $latex f = 0$. So the transfer function peaks at $latex H(0) = 1$. How fast does it decay as we increase $latex |f|$?
Let’s look at a related function, which is the squared magnitude of $latex H(f)$ expressed in decibels,

$latex \displaystyle G(f) = 10 \log_{10} |H(f)|^2.$

This function simplifies to

$latex \displaystyle G(f) = -10 \log_{10} \left(1 + (2\pi f \tau)^2\right),$

for which it is easy to fill out an approximate table of values using the shorthand $latex x = 2\pi f \tau$,
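Such a table is easy to generate programmatically. The decibel form $latex G = -10\log_{10}(1 + x^2)$ with $latex x = 2\pi f\tau$ is my reconstruction of the expression above; the familiar anchor points are about $latex -3$ dB at $latex x = 1$ and $latex -10$ dB at $latex x = 3$.

```python
import math

# G = -10 log10(1 + x^2), where x = 2 pi f tau (assumed form of |H|^2 in dB)
for x in [0, 1, 2, 3]:
    G = -10 * math.log10(1 + x * x)
    print(f"2*pi*f*tau = {x}: G = {G:6.2f} dB")
```

This prints 0 dB at $latex x = 0$, then about $latex -3$, $latex -7$, and $latex -10$ dB at $latex x = 1, 2, 3$.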
Plots of $latex G(f)$ for three values of $latex \tau$ are shown in Figure 5.
From the table or the plot, we can see that the transfer function decays to about half its peak (power) value at the frequency $latex f = 1/(2\pi\tau)$ and to about one-tenth of its peak at the frequency $latex f = 3/(2\pi\tau)$. Unlike the ideal filters, where the bandwidth of the filter can be unambiguously determined by the width of an appropriate rectangle (see Figures 1–3), practical filters have transfer functions that vary smoothly and are lump- or bump-like, in a manner highly reminiscent of the power spectra for communication signals. When we developed the sampling theorem, we encountered the problem of specifying the “maximum frequency” or the “bandwidth” of a signal with a smooth lump-like spectrum, and that led to the realization that there is no unambiguous or always-preferred measure for the bandwidth of a real-world signal.
In the case of practical filters, we have the same problem as with signals–how should we think about, specify, or constrain the bandwidth of the passband or stopband of a practical filter? Well, we generally do the same thing as for signals. So here, for the first-order filters, we can characterize the filter in terms of its 3-dB bandwidth (see Figure 5), its 10-dB bandwidth (see Table 1), a 20-dB bandwidth, a 99% bandwidth, etc.
Since the bandwidth of the filter, however we define bandwidth, is clearly a function of $latex \tau$, we see that the parameter $latex \tau$ in the original differential equation controls the bandwidth of the equivalent filter (the bandwidth is proportional to $latex 1/\tau$), and that that filter is a lowpass filter.
The impulse-response function $latex h(t)$ is given by (11) and is a simple decaying exponential function. For $latex t = 3\tau$, the value of the impulse response is $latex (1/\tau)e^{-3} \approx 0.05/\tau$. So by $latex t = 3\tau$, the response has decayed to about one-twentieth of its peak at $latex t = 0$. Figure 6 shows plots of $latex h(t)$ for the same three values of $latex \tau$ used in Figure 5.
The energy of the impulse-response function on $latex [0, \infty)$ is

$latex \displaystyle E = \int_0^\infty \frac{1}{\tau^2} e^{-2t/\tau} \, dt = \frac{1}{2\tau}.$

The energy is $latex (1 - e^{-6})E \approx 0.9975E$ for $latex [0, 3\tau]$ and $latex (1 - e^{-4})E \approx 0.9817E$ for $latex [0, 2\tau]$.

So $latex \tau$ determines the energy in the impulse response, and the function is well-approximated by restricting it to the interval $latex [0, 3\tau]$. Note that even the interval $latex [0, 2\tau]$ is a good approximation.
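We can confirm these energy fractions numerically. The squared impulse response below follows the assumed closed form $latex h(t) = (1/\tau)e^{-t/\tau}u(t)$, and the time constant is an arbitrary choice for illustration.

```python
import numpy as np

tau = 1.0
dt = 1e-4
t = np.arange(0, 50 * tau, dt)

# |h(t)|^2 for the assumed h(t) = (1/tau) e^{-t/tau} u(t)
h2 = (np.exp(-t / tau) / tau) ** 2

E_total = np.sum(h2) * dt   # analytically 1/(2 tau)

def frac(T):
    """Fraction of the impulse-response energy on [0, T]."""
    return np.sum(h2[t <= T]) * dt / E_total

print(frac(2 * tau), frac(3 * tau))  # about 0.98 and 0.998
```

Almost all of the impulse-response energy lies within the first few time constants, which is the basis for the truncation argument above.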
We can, therefore, interpret $latex \tau$ as a time constant for the system, which means that, to a good approximation, fluctuations in the input signal occurring over a time interval of about $latex \tau$ seconds are combined to yield an output, but values of the input that are separated by much more than $latex \tau$ are not. This behavior is consistent with our interpretation of a lowpass filter (which the first-order system definitely is, see Figure 5) as a kind of moving-average filter.
The step response (19) is plotted for the three values of $latex \tau$ of interest in Figure 7. We note that the response achieves about 2/3 of its final value of one by $latex t = \tau$, and after a couple of time constants ($latex 2.3\tau$ seconds, to be more precise), the response is within about 10% of its final value. These empirical facts simply cement the idea of $latex \tau$ as a time constant controlling the temporal (and therefore spectral) behavior of the system outputs.
Before changing topics, let’s take a quick look at the phase of $latex H(f)$. Recall that for a filter to provide a delayed version of its input, the phase of the transfer function must be a linear function of frequency over the passband(s) of the filter. (This arises from consideration of the transfer function of a pure-delay system, $latex H(f) = e^{-i2\pi f t_0}$.) The phase is shown in Figure 8, along with vertical dotted lines that remind us of the 3-dB bandwidths of the three filters.
Second-Order Practical Filters
Let’s take the next step forward and increase the order of the governing differential equation by one, yielding the second-order filters,

$latex \displaystyle \frac{d^2 y(t)}{dt^2} + a \frac{dy(t)}{dt} + b\, y(t) = x(t), \ \ \ (28)$

where $latex a > 0$ and $latex b > 0$.
I’ll leave it to you to check whether the effective system defined by input and output is linear and time-invariant. I’ll proceed as if it is.
We know that the derivatives of $latex y(t)$ are related to the transform $latex Y(f)$ of $latex y(t)$ by the following relations

$latex \displaystyle \frac{dy(t)}{dt} \Leftrightarrow (i2\pi f) Y(f) \ \ \ \mbox{and} \ \ \ \frac{d^2 y(t)}{dt^2} \Leftrightarrow (i2\pi f)^2 Y(f),$
so that Fourier transforming both sides of (28) is simple,

$latex \displaystyle (i2\pi f)^2 Y(f) + a(i2\pi f) Y(f) + b\, Y(f) = X(f),$

which immediately allows us to solve for the transfer function (yay transform techniques!),

$latex \displaystyle H(f) = \frac{1}{(i2\pi f)^2 + a(i2\pi f) + b}. \ \ \ (32)$
To find the impulse-response function we need to inverse transform the transfer function (32). We begin by reexpressing the rational function by using $latex g = 2\pi f$ and some algebra to yield

$latex \displaystyle H = \frac{1}{(ig)^2 + iag + b} = \frac{1}{b - g^2 + iag}.$

If we can factor the denominator we can then use the known Fourier transform of $latex e^{-at}u(t)$ to inverse-transform the result. Applying the trusty quadratic formula to the denominator polynomial $latex s^2 + as + b$ (where $latex s = ig = i2\pi f$), we obtain

$latex \displaystyle s_{1,2} = \frac{-a \pm \sqrt{a^2 - 4b}}{2}.$

Using more elementary algebra we obtain results for both the factored form of $latex H(f)$ and its partial-fraction expansion,

$latex \displaystyle H(f) = \frac{1}{(i2\pi f - s_1)(i2\pi f - s_2)} \ \ \ (37)$

and

$latex \displaystyle H(f) = \frac{1}{s_1 - s_2}\left[\frac{1}{i2\pi f - s_1} - \frac{1}{i2\pi f - s_2}\right]. \ \ \ (38)$
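A small script makes the root bookkeeping concrete. The characteristic polynomial $latex s^2 + as + b$ and the parameter values below are my assumptions for illustration, chosen to exhibit the distinct cases of the discriminant.

```python
import numpy as np

def poles(a, b):
    """Roots of the assumed characteristic polynomial s^2 + a s + b."""
    disc = np.sqrt(complex(a * a - 4 * b))
    return (-a + disc) / 2, (-a - disc) / 2

# Overdamped (a^2 > 4b), critically damped (a^2 = 4b), underdamped (a^2 < 4b)
for a, b in [(5.0, 4.0), (2.0, 1.0), (2.0, 10.0)]:
    s1, s2 = poles(a, b)
    print(f"a={a}, b={b}: s1={s1:.3f}, s2={s2:.3f}")
```

For positive $latex a$ and $latex b$, both roots always have negative real part, so the corresponding exponentials decay and the filter is stable.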
Now we can use our knowledge of Fourier transform pairs to find expressions for the impulse-response functions for $latex a^2 > 4b$ and $latex a^2 < 4b$. Recall the basic transform pair relating to the kinds of simple rational functions in (37) and (38) is

$latex \displaystyle e^{-ct}u(t) \Leftrightarrow \frac{1}{c + i2\pi f} = \frac{1}{i2\pi f - s}, \ \ \ s = -c.$

This immediately leads to the following result for $latex h(t)$,

$latex \displaystyle h(t) = \frac{1}{s_1 - s_2}\left(e^{s_1 t} - e^{s_2 t}\right) u(t).$
For $latex a^2 > 4b$, both roots $latex s_1$ and $latex s_2$ are real (and negative) because the discriminant $latex a^2 - 4b$ is positive. The factorization of $latex H(f)$ can be written as

$latex \displaystyle H(f) = \frac{1}{(r_1 + i2\pi f)(r_2 + i2\pi f)},$

with $latex r_{1,2} = \frac{a \mp \sqrt{a^2 - 4b}}{2} > 0$. The inverse transform follows easily,

$latex \displaystyle h(t) = \frac{1}{r_2 - r_1}\left(e^{-r_1 t} - e^{-r_2 t}\right) u(t).$
Finally, for $latex a^2 < 4b$, the two roots are the complex conjugates $latex s_{1,2} = -\frac{a}{2} \pm i\omega_d$ with $latex \omega_d = \sqrt{b - a^2/4}$, so similar algebra and use of the frequency-shifting property of the Fourier transform

$latex \displaystyle e^{i2\pi f_0 t} x(t) \Leftrightarrow X(f - f_0)$

leads to the result

$latex \displaystyle h(t) = \frac{1}{\omega_d} e^{-(a/2)t} \sin(\omega_d t)\, u(t).$
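To check the underdamped expression, we can verify numerically that it satisfies the homogeneous differential equation for $latex t > 0$. The closed form $latex h(t) = (1/\omega_d)e^{-(a/2)t}\sin(\omega_d t)$ and the parameter values below are assumptions consistent with the reconstruction sketched above.

```python
import numpy as np

a, b = 2.0, 10.0            # assumed underdamped case: a^2 < 4b
wd = np.sqrt(b - a**2 / 4)  # damped natural frequency

dt = 1e-4
t = np.arange(dt, 1.0, dt)  # strictly t > 0
h = np.exp(-a * t / 2) * np.sin(wd * t) / wd   # assumed closed-form impulse response

# Finite-difference check that h'' + a h' + b h = 0 away from t = 0
h1 = np.gradient(h, dt)
h2 = np.gradient(h1, dt)
residual = h2 + a * h1 + b * h
print(np.max(np.abs(residual[2:-2])))  # interior residual is tiny
```

The residual vanishes to within finite-difference error, confirming that the oscillating exponential is a solution of the homogeneous second-order equation.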
For the step response, we’ll forgo the derivation of the formula and just convolve the impulse response with a step function in MATLAB.
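In Python rather than MATLAB, that convolution looks like the sketch below; the impulse-response formula and the parameters $latex a$ and $latex b$ are my assumptions, not values from the post. For this assumed (unnormalized) underdamped case the step response overshoots its final value $latex H(0) = 1/b$ and then rings down.

```python
import numpy as np

a, b = 2.0, 10.0            # assumed underdamped second-order filter
wd = np.sqrt(b - a**2 / 4)

dt = 1e-3
t = np.arange(0, 10, dt)
h = np.exp(-a * t / 2) * np.sin(wd * t) / wd   # assumed impulse response
step = np.ones_like(t)

# Step response via discrete convolution, mirroring the MATLAB approach
s = np.convolve(h, step)[: len(t)] * dt

print(s[-1], 1 / b)   # settles near the DC gain H(0) = 1/b
```

Plotting `s` against `t` reproduces the overshoot, ringing, and settling behavior discussed next.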
The Influence of $latex a$ and $latex b$ on $latex H(f)$, $latex h(t)$, and $latex s(t)$
Like the first-order filters, the practical second-order filters that arise from differential equations relating system input to system output are lowpass filters. The combination of $latex a$ and $latex b$ determines the bandwidth of the lowpass filter (however you define it). For fixed $latex a$, increasing $latex b$ increases the bandwidth, and for fixed $latex b$, increasing $latex a$ decreases the bandwidth. These trends, and the general shapes of the obtainable transfer functions, are shown in Figure 9.
The filters have approximately linear phase across their passbands as illustrated in Figure 10. This means that the input signal is shaped by the transfer function, selecting largely the signal’s frequency components near zero frequency, but overall the signal is also delayed–distinct frequency components are delayed appropriately so that the overall signal is not significantly distorted beyond the selection aspect.
The impulse-response functions corresponding to the transfer functions in Figures 9 and 10 are shown in Figure 11. Compare these to the impulse-response functions for the first-order filters in Figure 6. These functions are more complicated, indicating that they can ‘do more’ than their first-order cousins.
Turning to the step response, we obtain the plots shown in Figure 12. Several common filtering and control-system terms are typically introduced with this kind of plot (see also Figure 13). These include overshoot, which quantifies the maximum amount that the step response exceeds its eventual long-term value (here unity); settling time, which quantifies the time needed for the step response to achieve some small error relative to its long-term value; and ringing, which describes the oscillatory nature of the response as it moves toward the settling time.
Finally, it is worth comparing these practical-filter functions with those for an ideal filter. In Figure 14 I’ve plotted the transfer function, impulse-response function, and step-response function for an ideal lowpass filter with bandwidth equal to the 10-dB bandwidth of one of the second-order filters in Figure 9. As we studied in the post on ideal filters, making such filters causal requires truncation and delay of the impulse-response function, which is clearly non-zero for $latex t < 0$. Truncating and delaying the impulse response leads to the step response in the lower plot of Figure 14. We see the ringing, overshoot, and settling, but we have to live with the large delay as well. The practical filters do not have that latter feature.
The practical filters we’ve looked at here in this SPTK post are a first step away from the ideal filters we used to introduce the basic concepts and functions involved in linear time-invariant system analysis (filtering). There are many, many more kinds of practical filters that allow all kinds of engineering tradeoffs between complexity (how hard it is to do the convolution) and performance (passing frequency components of interest and attenuating those not of interest). Examples are Butterworth, Chebyshev, Gaussian, and others.
Our trajectory in the SPTK thread is to move toward digital filters and the mathematics that pertains thereto. We are now in a position to introduce a major analysis tool for digital filters called the Z transform.
Significance of Practical Filters in CSP
Not much! But we are building up to digital filters, which include the ubiquitous finite-impulse-response (FIR) and infinite-impulse-response (IIR) filters.
One could use a filter of the sort we’ve studied here in an algorithm such as the frequency-smoothing method for spectral correlation estimation, instead of a simple rectangular moving-average filter. The effect on the resulting estimate would likely be salubrious, but the computational cost would be much increased, as the moving-average filter (smoother) is extremely cheap to implement.