Previous SPTK Post: Convolution Next SPTK Post: The Moving-Average Filter
We continue our non-CSP signal-processing tool-kit series with this post on ideal filtering. Ideal filters are those filters with transfer functions that are rectangular, step-function-like, or combinations of rectangles and step functions.
[Jump straight to ‘Significance of Ideal Filters in CSP’ below.]
A filter is a designed linear time-invariant system. The goals of filtering are:
- Remove unwanted signal and/or noise energy while preserving wanted signal energy, or
- Restore a signal to a prescribed condition.
Filters are categorized in terms of their frequency-response function. We refer to a frequency component with frequency $f$ as being passed by the filter if the corresponding value of the transfer function $H(f)$ is non-zero, and refer to it as being rejected otherwise. Those filters that pass low frequencies but do not pass high frequencies are lowpass filters, those that pass high frequencies but do not pass low frequencies are highpass filters, and those that pass frequencies only in some middle range are bandpass filters. Less common are those that pass all frequencies except those in some narrow range, which are notch filters (or bandstop filters), and those that pass all frequencies but adjust their phases, which are called all-pass filters.
Ideal Lowpass Filters
A lowpass filter is one that passes only low-frequency sine-wave components in the input signal, and rejects or significantly attenuates all others. This is most easily viewed in the frequency domain through the filter’s frequency response (transfer function), as shown in Figure 1.

For the ideal LPF, the frequency interval $[-f_c, f_c]$ is the passband, the frequency $f_c$ is called the cutoff frequency, the height of the passband is the gain $G$, the passband width, or just bandwidth, of the filter is $2f_c$, and the two frequency intervals $(-\infty, -f_c)$ and $(f_c, \infty)$ are the stopbands.
Let’s also look at the impulse-response function for the ideal LPF, which is simply the inverse transform of the frequency-response function, $h(t) = \int_{-\infty}^{\infty} H(f) e^{i 2 \pi f t}\, df = 2 G f_c \, \mathrm{sinc}(2 f_c t)$, where $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$.
The impulse-response function for the ideal LPF is shown in Figure 2. Recall that a causal linear time-invariant (LTI) system (filter) must have an impulse response that is zero for $t < 0$, which is clearly not true of the function in Figure 2. Therefore, the ideal LPF is not realizable, which means constructable or buildable. You can’t actually make it in the real world, because part of the response to the input impulse at $t = 0$ occurs before the impulse is applied to the system.
But suppose we are willing to tolerate a delay between the time we apply an input signal to the ideal LPF and the time we begin to see an output–perhaps a large delay.

Then we can simply delay the impulse response so that most of its energy lies to the right of $t = 0$, as shown in Figure 3. The impulse-response function for the delayed ideal LPF is then $h_D(t) = h(t - D) = 2 G f_c\,\mathrm{sinc}(2 f_c (t - D))$, which is illustrated in Figure 3 for a particular choice of the delay $D$.

For the delayed-output filter, the energy in the impulse response for $t < 0$ can be made as small as desired by choosing $D$ large enough. Therefore, a noncausal ideal LPF can be converted to a causal one, approximately, by the use of a large delay. What is the transfer function (frequency response) for the delayed-output impulse response? By the shift theorem for Fourier transforms, it is $H_D(f) = H(f) e^{-i 2 \pi f D}$.
So we see that the magnitudes of $H(f)$ and $H_D(f)$ are the same, but their phases differ. By construction, $H(f)$ is real, and so it has a phase of zero. Therefore, the phase of the delayed-output system is $\angle H_D(f) = -2\pi f D$, which is a linear function of frequency $f$. Therefore, linear transfer-function phase is equivalent to a constant delay in time between the input and the output of the system.
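To make the delay-equals-linear-phase connection concrete, here is a short numerical sketch in Python (my illustration, not code from the post; the cutoff $f_c$, gain $G$, and delay $D$ values are arbitrary). It truncates and delays the sinc-shaped ideal-LPF impulse response and checks that the resulting frequency response has an approximately flat magnitude of $G$ across most of the passband and an approximately linear phase with slope $-2\pi D$.

```python
import numpy as np

fs = 1000.0          # sampling rate (Hz); arbitrary for illustration
fc = 50.0            # LPF cutoff (Hz); arbitrary
G = 1.0              # passband gain
D = 0.1              # delay (seconds); arbitrary

# Truncated, delayed version of the ideal-LPF impulse response:
# h_D(t) = 2*G*fc*sinc(2*fc*(t - D)), kept only on 0 <= t < 2D (so it is causal)
t = np.arange(0.0, 2 * D, 1 / fs)
h = 2 * G * fc * np.sinc(2 * fc * (t - D)) / fs   # 1/fs factor approximates the continuous-time integral

# Frequency response via the FFT
H = np.fft.rfft(h, 8192)
f = np.fft.rfftfreq(8192, 1 / fs)
passband = f < 0.8 * fc

# Magnitude should be close to G over the passband (small Gibbs ripple from the truncation)
print("max passband magnitude deviation from G:", np.max(np.abs(np.abs(H[passband]) - G)))

# Phase should be close to the linear function -2*pi*f*D
phase = np.unwrap(np.angle(H[passband]))
print("max passband phase deviation from -2*pi*f*D (rad):",
      np.max(np.abs(phase - (-2 * np.pi * f[passband] * D))))
```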
Another way to bridge the gap between ideal and practical filters is to take a cue from the ideal LPF impulse response: it is largely concentrated near $t = 0$. So let’s create a causal impulse-response function with all of its energy near $t = 0$. Does the resulting filter correspond to a lowpass type of frequency response? The impulse response is shown in Figure 4. To find its frequency response, we need to Fourier transform it.




Looking at the magnitude of $H(f)$ in (11) (Figure 5), we see that the answer is yes, this is indeed a lowpass type of filter. This filter is called the moving-average (MA) filter because it can be interpreted as adding up the previous $T$ seconds’ worth of the input-signal values and multiplying that result by a constant, which could be set to $1/T$ to form an average, $y(t) = \frac{1}{T}\int_{t-T}^{t} x(u)\, du$.
What is the step response for the MA filter? The step-function input signal is shown in Figure 6. We need to convolve it with the MA impulse response, which we can do with the aid of the plots shown in Figures 7 and 8,

For the averaging version of the filter (impulse-response height $1/T$), the result is as follows. For $t < 0$, we have $y(t) = 0$. For $0 \leq t \leq T$ we have $y(t) = t/T$, and for $t > T$ we have $y(t) = 1$.
The full step response is shown in Figure 9. Try to interpret it in terms of a running, or ‘moving,’ average operation on the step-function input.
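Here is a small Python sketch of the moving-average filter (my own illustration with arbitrary values of $T$ and the sampling rate). It builds a discrete-time approximation of the height-$1/T$, length-$T$ impulse response, checks that the frequency-response magnitude is lowpass with its first null near $1/T$, and convolves the filter with a unit step to reproduce the ramp-then-flat step response described above.

```python
import numpy as np

fs = 1000.0                       # sampling rate (Hz); arbitrary for illustration
T = 0.05                          # averaging window length (seconds); arbitrary
L = int(T * fs)                   # number of samples in the window

# Discrete-time moving-average impulse response: height 1/T over [0, T)
h = np.ones(L) / (T * fs)         # the 1/fs factor makes the discrete sum mimic the integral

# Frequency-response magnitude: a lowpass, |sinc(f*T)|-like shape
H = np.fft.rfft(h, 4096)
f = np.fft.rfftfreq(4096, 1 / fs)
band = f < 1.5 / T
print("gain at f = 0:", np.abs(H[0]))         # 1.0 (an average preserves a constant)
print("first spectral null (Hz), expected near 1/T =", 1 / T, ":",
      f[band][np.argmin(np.abs(H[band]))])

# Step response: convolve with a unit step; expect 0, then a ramp of duration T, then 1
x = np.ones(3 * L)                # unit-step input (starting at n = 0)
y = np.convolve(h, x)[: 3 * L]
print("step response mid-ramp (t = T/2):", y[L // 2])        # ~0.5
print("step response after the ramp (t = 2T):", y[2 * L])    # ~1.0
```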
Ideal Highpass Filters
A highpass filter (HPF) is one that passes (allows passage through the system with non-zero amplitude) only sine waves with relatively large (high) input frequencies and rejects (assigns a zero or very small amplitude to) all other frequencies. In the frequency domain, we have the transfer function shown in Figure 10(a), and we see that it is the complement of the ideal LPF transfer function: where one is zero, the other is not, where one is non-zero, the other is zero.
The HPF cutoff frequency is $f_c$, the passband gain is $G$, the filter bandwidth (width of the passband) is infinite, and the stopband is $[-f_c, f_c]$.
The HPF transfer function can be written in terms of the LPF transfer function as $H_{HP}(f) = G - H_{LP}(f)$.
We can use (18) to easily derive the ideal HPF impulse-response function, $h_{HP}(t) = G\,\delta(t) - h_{LP}(t) = G\,\delta(t) - 2 G f_c\,\mathrm{sinc}(2 f_c t)$, which is shown in Figure 11.

Not surprisingly, the ideal HPF is also not causal. We could introduce a delay to create a good approximation to the ideal HPF, as we did for the ideal LPF.
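The complement relationship between the ideal LPF and HPF is easy to check numerically on a frequency grid. The sketch below (mine, with arbitrary gain and cutoff) forms the HPF transfer function as $G$ minus the LPF transfer function and confirms that the two filters never pass the same frequency, and that together they pass every frequency with gain $G$.

```python
import numpy as np

G, fc = 1.5, 50.0                        # arbitrary gain and cutoff for illustration
f = np.linspace(-500.0, 500.0, 2001)     # frequency grid (Hz)

H_lp = G * (np.abs(f) <= fc)             # ideal LPF: rectangle of height G and width 2*fc
H_hp = G - H_lp                          # ideal HPF as the complement of the LPF

print("some frequency passed by both filters:", bool(np.any(H_lp * H_hp != 0)))   # False
print("LPF + HPF equals G at every frequency:", bool(np.allclose(H_lp + H_hp, G)))  # True
```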
Ideal Bandpass Filters
A bandpass filter (BPF) is one that passes only those input frequency components that fall inside a specific band of frequencies, such as $f_0 - f_c \leq |f| \leq f_0 + f_c$. An ideal BPF has rectangular passbands, as shown in Figure 12(a).
The transfer function of an ideal BPF can be expressed in terms of just a single ideal LPF transfer function or by combining an ideal LPF transfer function with an ideal HPF transfer function. Let’s go through those two options next.

Ideal BPF in Terms of Ideal LPF
Recall that the Fourier transform of the ‘modulated’ signal $x(t) e^{i 2 \pi f_0 t}$ is the shifted transform of the signal $x(t)$ itself, $X(f - f_0)$. We can form the ideal BPF transfer function by adding together two frequency-shifted ideal LPF transfer functions, as shown in Figure 13, $H_{BP}(f) = H_{LP}(f - f_0) + H_{LP}(f + f_0)$.
The impulse response is therefore given by $h_{BP}(t) = h_{LP}(t) e^{i 2 \pi f_0 t} + h_{LP}(t) e^{-i 2 \pi f_0 t} = 2 h_{LP}(t) \cos(2 \pi f_0 t)$.
Ideal BPF in Terms of an Ideal LPF and an Ideal HPF
To form the ideal BPF, we multiply the transfer functions for a particular LPF and HPF, as shown in Figure 14. The cutoff frequencies for the two component filters are chosen such that when the transfer functions are multiplied, only two frequency intervals of the result correspond to nonzero values of the frequency response. The frequency response is simply $H_{BP}(f) = H_1(f) H_2(f)$, where $H_1(f)$ is an ideal LPF with cutoff $f_0 + f_c$ and $H_2(f)$ is an ideal HPF with cutoff $f_0 - f_c$.
Using the convolution theorem, we can express the corresponding impulse response as the convolution of the impulse responses for the two component filters, $h_{BP}(t) = h_1(t) \otimes h_2(t) = 2 h_{LP}(t)\cos(2\pi f_0 t)$, where we used the previous results instead of directly evaluating the formula (thankfully!).
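Both constructions are easy to verify numerically. The following Python sketch (my own, with arbitrary values of $f_c$ and $f_0$) builds the BPF transfer function on a frequency grid both ways, as a sum of frequency-shifted rectangles and as the product of an LPF and an HPF with the appropriate cutoffs, and confirms that they agree.

```python
import numpy as np

fc, f0 = 25.0, 200.0                      # LPF cutoff (half-bandwidth) and BPF center frequency
f = np.linspace(-500.0, 500.0, 10001)     # frequency grid (Hz)

def lpf(f, cutoff):
    """Unity-gain ideal LPF transfer function (band edges included)."""
    return (np.abs(f) <= cutoff).astype(float)

def hpf(f, cutoff):
    """Unity-gain ideal HPF transfer function (band edges included)."""
    return (np.abs(f) >= cutoff).astype(float)

# Construction 1: sum of two frequency-shifted LPFs with cutoff fc, shifted to +/- f0
H_bp1 = lpf(f - f0, fc) + lpf(f + f0, fc)

# Construction 2: product of an LPF with cutoff f0 + fc and an HPF with cutoff f0 - fc
H_bp2 = lpf(f, f0 + fc) * hpf(f, f0 - fc)

print("the two constructions agree:", np.array_equal(H_bp1, H_bp2))                  # True
print("positive-frequency passband edges (Hz):", f[(H_bp1 > 0) & (f > 0)][[0, -1]])  # [175. 225.]
```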

Ideal Bandstop Filters
An ideal bandstop filter (BSF) is one that rejects frequency components in some band of frequencies centered at $f_0$ with width $2f_c$, and passes all others: the complement of the BPF. When the stopband width $2f_c$ is very small compared to the center frequency $f_0$, this filter is also called an ideal notch filter.

Using analysis methods similar to those for the previous ideal filter types, we can write expressions for the ideal BSF transfer function and impulse response as $H_{BS}(f) = G - H_{BP}(f)$ and $h_{BS}(t) = G\,\delta(t) - h_{BP}(t) = G\,\delta(t) - 2 h_{LP}(t)\cos(2\pi f_0 t)$.
Ideal Allpass Filters
Finally, the ideal allpass filter (APF) passes all frequency components in the input with a common change to their amplitudes and no change to their phases, as shown in Figure 16.

The impulse response is just an impulse, $h_{AP}(t) = G\,\delta(t)$.
A slightly more general APF includes a delay, so that the impulse-response function is $h_{AP}(t) = G\,\delta(t - D)$, which has transfer function $H_{AP}(f) = G e^{-i 2 \pi f D}$: constant magnitude and linear phase.
Representation in Terms of a Prototype LPF
To drive home the relationships between the various kinds of ideal filters we’ve defined and analyzed in this post, let’s unify them by representing them in terms of a single frequency-response function: a prototype lowpass filter. The prototype LPF is shown in Figure 17. It is merely a rectangle with unit height and unit width. Let’s see how we can use this single function to represent all the different ideal filters (except the APF).

An ideal LPF with cutoff frequency $f_c$ and unity gain is a frequency-scaled version of our prototype filter $P(f)$, $H_{LP}(f) = P\!\left(\frac{f}{2f_c}\right)$. If the gain of the LPF is $G$, we simply multiply the frequency-scaled prototype by $G$, $H_{LP}(f) = G\,P\!\left(\frac{f}{2f_c}\right)$.
An ideal highpass filter with cutoff $f_c$ is just a constant minus the ideal lowpass filter, $H_{HP}(f) = G\left[1 - P\!\left(\frac{f}{2f_c}\right)\right]$.
Similarly, any ideal BPF is expressed as $H_{BP}(f) = G\left[P\!\left(\frac{f - f_0}{2f_c}\right) + P\!\left(\frac{f + f_0}{2f_c}\right)\right]$,
and any ideal bandstop filter is $H_{BS}(f) = G\left[1 - P\!\left(\frac{f - f_0}{2f_c}\right) - P\!\left(\frac{f + f_0}{2f_c}\right)\right]$.
Let $B = 2f_c$, which is the width of the passband of the LPF, the width of each passband of the BPF, and the width of the stopband of the HPF. Then we can construct the following table of transfer functions in terms of $B$:
| Filter Name | Transfer Function |
| --- | --- |
| Ideal LPF | $G\,P(f/B)$ |
| Ideal HPF | $G\left[1 - P(f/B)\right]$ |
| Ideal BPF | $G\left[P\!\left(\frac{f-f_0}{B}\right) + P\!\left(\frac{f+f_0}{B}\right)\right]$ |
| Ideal BSF | $G\left[1 - P\!\left(\frac{f-f_0}{B}\right) - P\!\left(\frac{f+f_0}{B}\right)\right]$ |
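The table translates directly into code. Here is a sketch (mine, with hypothetical parameter values) that defines the prototype rectangle $P(f)$ and builds each ideal transfer function from it by scaling, shifting, and complementing.

```python
import numpy as np

def P(f):
    """Prototype LPF: unit-height rectangle of unit width, centered at f = 0."""
    return (np.abs(f) <= 0.5).astype(float)

# Illustrative (hypothetical) parameters: gain, bandwidth, and BPF/BSF center frequency
G, B, f0 = 2.0, 100.0, 400.0

f = np.linspace(-1000.0, 1000.0, 4001)     # frequency grid (Hz)

H_lpf = G * P(f / B)
H_hpf = G * (1 - P(f / B))
H_bpf = G * (P((f - f0) / B) + P((f + f0) / B))
H_bsf = G * (1 - P((f - f0) / B) - P((f + f0) / B))

# Sanity checks: each filter and its complement sum to the constant (allpass) gain G
print(np.allclose(H_lpf + H_hpf, G), np.allclose(H_bpf + H_bsf, G))
```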
Examples
Clock Extraction
Suppose we have a periodic waveform $s(t)$, with period $T_0$, and we want to extract from it a sinusoidal clocking signal for use in periodic switching of some piece of electronic equipment. One way to extract the desired sine wave is to use an ideal filter. Let’s see how.
The generic periodic signal is shown in Figure 18.

First we want to examine the periodic signal in the frequency domain. This will help us visualize the transfer function of an ideal filter that will extract the desired spectral component of $s(t)$. That is, we’ll be able to see which spectral components we want to retain (pass through our filter) and which we want to discard (reject by our filter). Keep in mind that we can design a complete ideal filter by simply drawing rectangles and step functions.
Any periodic signal can be represented as a Fourier series, so we can immediately write an expression for $s(t)$, $s(t) = \sum_{k=-\infty}^{\infty} c_k e^{i 2 \pi k t / T_0}$, from which it is easy to compute the Fourier transform of $s(t)$, since we know the Fourier transform of each of the complex sine-wave components in (43) is an impulse, $S(f) = \sum_{k=-\infty}^{\infty} c_k \delta(f - k/T_0)$.
A graph of the Fourier transform (45) is shown in Figure 19. The Fourier-series coefficients are generally complex-valued, but in the figure we represent them as real numbers, so sometimes the impulses are drawn downwards.

The desired sinusoidal component has frequency equal to the fundamental frequency in the Fourier series, $1/T_0$. To get a real-valued sine wave at the output of our filter, we’ll need to pass the components with frequencies $\pm 1/T_0$. Our ideal filter must not pass any other frequency components, which places a constraint on its passband width. In particular, it cannot be wider than $2/T_0$, since the harmonics are spaced $1/T_0$ apart. Figure 20 illustrates the placement of the desired ideal bandpass filter transfer function.

The time-domain output of the filter is the convolution of the input with the impulse response, but it is preferable to work with the frequency-domain expressions at first. In that case, the Fourier transform of the filter output is the multiplication of the Fourier transform of the input with the transfer function, $Y(f) = H(f) S(f) = G c_1 \delta(f - 1/T_0) + G c_{-1} \delta(f + 1/T_0)$.
Assuming $s(t)$ is real-valued, we know that $c_{-1} = c_1^*$. Inverse Fourier transforming $Y(f)$ leads to $y(t) = G c_1 e^{i 2\pi t/T_0} + G c_1^* e^{-i 2\pi t/T_0} = 2 G |c_1| \cos(2\pi t/T_0 + \phi_1)$, where $\phi_1$ is the phase of $c_1$.
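Here is a minimal simulation of the clock-extraction idea (my own sketch; I choose a square wave for the periodic signal $s(t)$ purely for illustration). The ideal BPF is applied in the frequency domain by zeroing all FFT bins outside a band of width less than $2/T_0$ centered on $\pm 1/T_0$, and the output is a nearly pure sine wave at the fundamental frequency.

```python
import numpy as np

fs = 10000.0                   # sampling rate (Hz); arbitrary for illustration
T0 = 0.01                      # period of the waveform (10 ms, so the fundamental is 100 Hz)
t = np.arange(0.0, 1.0, 1 / fs)

# A generic periodic signal s(t): here a square wave, chosen just for illustration
s = np.sign(np.sin(2 * np.pi * t / T0))

# Ideal BPF applied in the frequency domain: keep only bins near +/- 1/T0
S = np.fft.fft(s)
f = np.fft.fftfreq(len(s), 1 / fs)
passband = np.abs(np.abs(f) - 1 / T0) < 0.5 / T0   # passband width 1/T0 < 2/T0, centered at 1/T0
y = np.fft.ifft(np.where(passband, S, 0)).real

# For a unit square wave, the extracted clock is approximately (4/pi)*sin(2*pi*t/T0)
print("output peak amplitude:", np.max(np.abs(y)))                     # ~ 4/pi ~ 1.27
print("zero crossings per second:", np.sum(np.diff(np.sign(y)) != 0))  # ~ 2/T0 = 200
```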
Sonar Filtering
Suppose we want to receive and process a reflected sonar waveform. One problem is that the processing equipment and the environment add noise and potentially other signals to the desired signal. Our range and velocity estimates, obtained from the received waveform, are degraded in accuracy if excess noise is processed along with the reflected signal. So we would like to filter the received data to reject as much of the noise and interference as possible while maintaining the integrity and energy of the desired sonar waveform.
The situation is illustrated in Figure 21.

An ideal bandpass filter with center frequency $f_0$ and bandwidth $B$ will pass all of the desired signal energy and only that portion of the noise and interference that lies within the signal’s band of frequencies.

Equalization and Distortion
When a signal passes through a linear time-invariant system that is undesired and that adversely affects the signal properties from the point of view of the intended use of the signal, we say it has experienced linear distortion, or just distortion. In general, distortion occurs when the system output is not simply a scaled and time-shifted version of the input. That is, distortionless filtering corresponds to $y(t) = A\,x(t - t_0)$ for some constant $A$ and delay $t_0$.
When (60) is violated, distortion is present, and we can consider designing a second linear time-invariant system that undoes the distortion. This operation is commonly called equalization. The situation is shown in Figure 23, which shows the serial connection of two LTI systems: one that represents the distortion, with impulse response $h_d(t)$, and one that represents the equalizer, with impulse response $h_e(t)$.

How should we design the equalizer, which is represented in the time domain by its impulse response $h_e(t)$ and in the frequency domain by the transfer function $H_e(f)$?
We know that the serial connection of the two LTI systems is equivalent to an LTI system that has transfer function equal to the product of the two individual transfer functions, $H(f) = H_d(f) H_e(f)$.
If the equivalent system is an allpass filter with linear phase, then the distortion is removed, because the overall effect is simply a scaling and a delay. For simplicity, let’s assume that the equivalent system has unit gain and zero delay, so that it has impulse response $h(t) = \delta(t)$ and frequency response $H(f) = 1$. Then our equalizer design equation is $H_d(f) H_e(f) = 1$,
which appears to be much easier to solve than the corresponding time-domain expression $h_d(t) \otimes h_e(t) = \delta(t)$.
We have $H_e(f) = \frac{1}{H_d(f)}$, provided that $H_d(f) \neq 0$.
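Here is a tiny numerical sketch of that frequency-domain (zero-forcing) equalizer (my own illustration, assuming a hypothetical distorting transfer function with no zeros). We distort a random signal with a known $H_d(f)$, form $H_e(f) = 1/H_d(f)$ on the same FFT grid, and verify that applying both filters in cascade returns the original signal to within numerical precision.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
x = rng.standard_normal(N)                     # a generic input signal

# A hypothetical distorting transfer function on the FFT grid (no zeros, mild frequency selectivity)
f = np.fft.fftfreq(N)                          # normalized frequencies
H_d = 1.0 + 0.5 * np.exp(-1j * 2 * np.pi * f * 5)

# Distort the signal, then equalize with H_e = 1/H_d (valid because H_d is never zero here)
X = np.fft.fft(x)
y = np.fft.ifft(H_d * X)                       # distorted signal
H_e = 1.0 / H_d
x_hat = np.fft.ifft(H_e * np.fft.fft(y)).real  # equalized signal

print("max reconstruction error:", np.max(np.abs(x_hat - x)))  # ~ 1e-15 (numerical precision)
```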
A practical approach to implementing (64) is to approximate $H_e(f)$ by a set of ideal filters spaced evenly throughout the band of frequencies occupied by a typical input signal. Let’s illustrate this with pictures.
In Figure 24 we show the magnitude of $H_e(f)$ for some generic equalization-problem setting.

We want to approximate $H_e(f)$ by a piecewise-constant function. Each piece of the function will represent a different ideal filter. This approximation is illustrated in Figure 25.

Let each frequency interval have width $\Delta$ and center frequency $f_k$, $k = 1, 2, \ldots, N$, where $N$ is the number of elements in the piecewise-constant approximation: the number of subbands that we will adjust.
An individual filter can be expressed mathematically as $H_k(f) = |H_e(f_k)|\, P\!\left(\frac{f - f_k}{\Delta}\right)$.
The phase is typically important as well, and we can simply specify the phase of the ideal filter as the negative of the phase of the distorting filter at the center point of the filter’s passband. This phase-sensitive version of the equalizing filter is given by $H_k(f) = |H_e(f_k)|\, e^{-i \angle H_d(f_k)}\, P\!\left(\frac{f - f_k}{\Delta}\right)$.
Can this approximation lead to adequate equalization? How small does $\Delta$ need to be? Equivalently, how large does $N$ need to be?
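Here is a sketch of the piecewise-constant (graphic-equalizer-style) approximation, again with hypothetical names and parameter values: sample the distorting transfer function at the $N$ subband center frequencies and build the equalizer as a sum of ideal rectangles with gains $1/|H_d(f_k)|$ and phases $-\angle H_d(f_k)$. As $N$ grows (and $\Delta$ shrinks), the product $H_d(f)H_e(f)$ approaches the desired constant.

```python
import numpy as np

def piecewise_equalizer(H_d_func, f_grid, num_subbands, band_edges):
    """Approximate 1/H_d(f) by num_subbands ideal (rectangular) filters of equal width."""
    lo, hi = band_edges
    delta = (hi - lo) / num_subbands                        # subband width
    centers = lo + delta * (np.arange(num_subbands) + 0.5)  # subband center frequencies f_k
    H_e = np.zeros(len(f_grid), dtype=complex)
    for fk in centers:
        rect = (f_grid >= fk - delta / 2) & (f_grid < fk + delta / 2)  # ideal rectangle
        Hd_k = H_d_func(fk)                                 # sample the distortion at f_k
        H_e += rect * np.exp(-1j * np.angle(Hd_k)) / np.abs(Hd_k)
    return H_e

# Hypothetical distorting transfer function for illustration (nonzero everywhere)
H_d = lambda f: 1.0 + 0.6 * np.exp(-1j * 2 * np.pi * f * 0.001)

f = np.arange(0.0, 1000.0, 0.2)          # frequency grid (Hz)
for N in (4, 16, 64):
    H_e = piecewise_equalizer(H_d, f, N, (0.0, 1000.0))
    err = np.max(np.abs(H_d(f) * H_e - 1.0))
    print(f"{N:3d} subbands: max |H_d(f)H_e(f) - 1| = {err:.3f}")
```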
When the equalization is done by a human operator for audio signals, the equalizer is the familiar graphic equalizer found in many home and professional audio systems. In RF communication systems, the equalization must be done rapidly and automatically, and this represents a substantial area of active research and system design in electrical engineering.
Multipath Channels
When transmitting a radio wave over a wireless channel, the propagation of the wave cannot be completely controlled. It may be reflected, diffracted, refracted, etc. on the way to being received at the intended system. A common situation involves multiple reflected waves, which results in the sum of several scaled and delayed versions of the wave arriving at the intended system receiver. This is called multipath propagation, and is illustrated in Figure 26.

Let the transmitted signal be denoted by $x(t)$ and the received signal by $y(t)$. From the geometry of the propagation situation, all reflected waves will travel over longer spatial paths than a direct wave, and so they each experience larger propagation delays relative to the direct wave. If the reflections are ideal (no amplitude or phase shift is experienced), then each propagation path results in a delayed version of the input signal, $x(t - t_k)$. A better model includes the possibility of the reflections changing the amplitude and/or phase of the signal, so that propagation along the $k$th path results in $a_k x(t - t_k)$, which we recognize as the output of an allpass filter with linear phase.
The received signal is then the sum of all of the reflections and the direct path, if any, which can be written as $y(t) = \sum_{k=1}^{K} a_k x(t - t_k)$.
In (67), each element of the sum is called a multipath ray, or just ray, and the number of rays is $K$. Many multipath propagation channels can be modeled with $K$ as small as two or three.
Notice that each ray can be expressed as the convolution of the input signal with the impulse response of an APF, $a_k x(t - t_k) = x(t) \otimes a_k\,\delta(t - t_k)$. This implies that the total received signal is the convolution of the input signal $x(t)$ with the sum of the individual-ray APF impulse-response functions, $y(t) = x(t) \otimes h(t)$,
where the multipath channel impulse response is given by $h(t) = \sum_{k=1}^{K} a_k\,\delta(t - t_k)$.
A multipath channel is often characterized by the gross properties of the delays $\{t_k\}$. In particular, the typical (expected) range of delays is referred to as the delay spread, and it plays a key role in determining whether the channel will adversely affect a particular communication signal. The delay spread is illustrated in Figure 27.

Let’s look at a two-ray multipath channel to get a feeling for multipath’s effects on a communication signal. Our simple model is $y(t) = x(t) + a\,x(t - D)$, where we assume $D > 0$. In the frequency domain this model simplifies to $Y(f) = H(f) X(f)$, where $H(f) = 1 + a e^{-i 2 \pi f D}$.
The squared magnitude of this transfer function is $|H(f)|^2 = 1 + a^2 + 2a\cos(2\pi f D)$ (taking $a$ to be real).
For further simplicity, let $a = 1$. Then $|H(f)|^2 = 2 + 2\cos(2\pi f D) = 4\cos^2(\pi f D)$.
Let’s plot (76) for $D$ equal to 1 millisecond in Figure 28.
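The fade locations are easy to compute numerically. This short sketch (using the example values $a = 1$ and $D = 1$ millisecond) evaluates $|H(f)|^2 = 2 + 2\cos(2\pi f D)$ on a frequency grid and reports the null frequencies, which fall at odd multiples of $1/(2D) = 500$ Hz.

```python
import numpy as np

D = 1e-3                                   # delay difference between the two rays (1 ms)
f = np.linspace(0.0, 2000.0, 20001)        # frequency grid (Hz)

# Squared magnitude of the two-ray transfer function H(f) = 1 + exp(-j*2*pi*f*D), with a = 1
H2 = 2 + 2 * np.cos(2 * np.pi * f * D)

# Fades (nulls) occur where |H(f)|^2 is essentially zero: f = (2m+1)/(2D) = 500, 1500, ... Hz
print("fade frequencies (Hz):", f[H2 < 1e-9])

# A frequency component away from a fade is passed (here with a gain greater than one);
# a component right at a fade is, ideally, completely removed
print("|H|^2 at 100 Hz:", H2[np.argmin(np.abs(f - 100.0))])   # ~3.6 (about +5.6 dB)
print("|H|^2 at 500 Hz:", H2[np.argmin(np.abs(f - 500.0))])   # ~0 (a null)
```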

The logarithmic plot of the transfer-function magnitude in Figure 28 reveals that the transfer function contains a zero: a frequency for which $H(f)$ is zero, and it is also small for nearby frequencies. Since the Fourier transform of the received signal is the product of the multipath-channel transfer function and the Fourier transform of the input signal, any frequency components of the input signal near this zero are severely attenuated, and the signal is distorted. These zeros in propagation-channel transfer functions are often referred to as nulls or fades.
The multipath-channel analysis allows us to identify a fundamental tradeoff between signal bandwidth and channel delay spread. Consider the signal whose Fourier transform is shown as the green line in Figure 28. This signal has no significant energy for frequencies near the fade, and so the multipath channel does not adversely affect it. However, the red signal has much wider bandwidth (its Fourier transform extends further from zero in frequency), and has significant energy in the vicinity of the fade. The red signal will therefore experience distortion from the multipath fade.
In general, to avoid multipath distortion, the input-signal bandwidth needs to be smaller than the reciprocal of the delay spread. Since narrow bandwidth corresponds to long time duration, this can be interpreted as a requirement to send long symbols, where long is with reference to the delay spread. If the temporal length of the symbols is much longer than the delay spread, then each symbol is affected in only a minor way. If the symbols are short in time (large signal bandwidth), then the delays in the channel will cause adjacent symbols to interfere with each other, in turn causing the receiver to make symbol errors.
We take a first look at non-ideal filters in this SPTK post on linear time-invariant systems that are described by simple first- and second-order differential equations.
Significance of Ideal Filters in CSP
Ideal filters are sometimes used in digital signal processing because it is easy to create the discrete-time/discrete-frequency transfer functions (rectangles!) and easy and cheap to apply them in the frequency domain using the FFT and multiplication (convolution theorem).
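Here is a minimal example of that frequency-domain approach (my own sketch, not code from the blog): define a rectangular passband directly on the FFT bin frequencies, multiply, and inverse transform. A real implementation would also deal with circular-convolution edge effects, for example with overlap-add or overlap-save processing, which this sketch ignores.

```python
import numpy as np

def brickwall_lowpass(x, fs, fc):
    """Apply an ideal (rectangular) LPF by multiplication in the FFT domain."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return np.fft.irfft(np.where(f <= fc, X, 0), n=len(x))

# Quick check: a 50 Hz tone survives a 100 Hz cutoff, a 300 Hz tone does not
fs = 2000.0
t = np.arange(0.0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * 50 * t) + np.cos(2 * np.pi * 300 * t)
y = brickwall_lowpass(x, fs, fc=100.0)
print("input RMS :", np.sqrt(np.mean(x ** 2)))   # ~1.0   (two unit-amplitude tones)
print("output RMS:", np.sqrt(np.mean(y ** 2)))   # ~0.707 (one unit-amplitude tone remains)
```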
The moving-average filter is typically used in the frequency-smoothing method of spectral correlation function estimation due to its simplicity, effectiveness, and low cost. We’ll look at the moving-average filter next time in the SPTK series.