SPTK: The Z Transform

I think of the Z transform as the Laplace transform for discrete-time signals and systems.

Previous SPTK Post: Practical Filters Next SPTK Post: Digital Filters

In this Signal Processing ToolKit post, we look at the discrete-time version of the Laplace Transform: The Z Transform.

Jump straight to the Significance of the Z Transform in CSP.

From the sampling theorem, we know that we can focus on regularly spaced samples of any bandlimited continuous-time signal x(t) and we will not lose any information about x(t) in doing so, provided we sample often enough. The impulse-sampled signal y(t), given by

\displaystyle y(t) = x(t) \sum_{k=-\infty}^\infty \delta(t-kT_s) \hfill (1)

\displaystyle = \sum_{k=-\infty}^\infty x(kT_s) \delta(t-kT_s), \hfill (2)

is therefore equivalent, in an information sense, to x(t) itself, and since this signal can be constructed from the set of samples, \{x(kT_s)\}_{k=-\infty}^\infty, the samples themselves are sufficient to describe the analog signal.

We can ask about the properties of y(t). What does this signal look like through the lens of the Fourier and Laplace transforms? Let’s focus on the more general Laplace transform and apply it to y(t). Straightforward application of the Laplace transform to (2) yields, due to the linearity of the transform,

\displaystyle Y(s) = {\cal{L}}\left[y(t)\right] = {\cal{L}}\left[ \sum_{k=-\infty}^\infty x(kT_s) \delta(t - k T_s) \right] \hfill (3)

\displaystyle = \sum_{k=-\infty}^\infty x(kT_s) {\cal{L}} \left[ \delta(t - kT_s) \right] \hfill (4)

We learned in the SPTK Laplace Transform post that \displaystyle {\cal{L}} \left[ \delta(t) \right] = 1. What about the closely related transform \displaystyle {\cal{L}}\left[\delta(t-t_0)\right]? Let’s tackle it directly,

\displaystyle \int_{-\infty}^\infty \delta(t-t_0) e^{-st} \, dt = \int_{-\infty}^\infty \delta(t-t_0) e^{-st_0} \, dt = e^{-st_0}. \hfill (5)

So our Laplace transform Y(s) becomes

\displaystyle Y(s) = \sum_{k=-\infty}^\infty x(kT_s) e^{-skT_s}, \hfill (6)

which doesn’t look too helpful, actually. But let’s define our way out of confusion or trouble by introducing a new variable z,

\displaystyle z = e^{sT_s} = e^{(\sigma + i2\pi f) T_s} \hfill (7)

which, when used in our expression for Y(s) leads to Y(z),

\displaystyle Y(s) = \sum_{k=-\infty}^\infty x(kT_s) \left( e^{sT_s}\right)^{-k} \hfill (8)

\displaystyle = \sum_{k=-\infty}^\infty x(kT_s) z^{-k} = Y(z) \hfill (9)

That’s the Z transform.

Recall that a major preoccupation in the Laplace transform world is the convergence of the transform, which depends on the value of the real part of s = \sigma + i2\pi f, which is \sigma. Laplace transforms converge (exist as regular functions) or diverge (do not exist) based on the value of \sigma–they exist in half-planes of the s-plane of the form \sigma > \sigma_* for some value \sigma_*. Sometimes \sigma_* \rightarrow -\infty, which means the transform exists for all values of s.

For the Z transform, we can predict that the regions of convergence will consist of the interiors or exteriors of circles in the z-plane centered at the origin. This is because the mapping z = e^{sT_s} = e^{(\sigma + i 2 \pi f)T_s} maps vertical lines in the s-plane to circles centered at the origin in the z-plane, and therefore maps half-planes to the interiors or exteriors of those circles. Let’s look at that in some detail. The mapping is

\displaystyle z = e^{sT_s} = e^{(\sigma + i 2 \pi f)T_s} = e^{\sigma T_s} e^{i2\pi f T_s} \hfill (10)

which means that

\displaystyle |z| = e^{\sigma T_s} \hfill (11)

\displaystyle \angle z = 2\pi f T_s \hfill (12)

Keep in mind that in the s-plane, which corresponds to the Laplace transform, the real part of s is \sigma and the imaginary part is 2\pi f. So the plane is determined by \sigma along the x-axis and f along the y-axis.

Now the function e^x is a monotonically increasing function on x\in (-\infty, \infty), and \displaystyle \lim_{x\rightarrow -\infty} e^x = 0 and \displaystyle \lim_{x\rightarrow \infty} e^x \rightarrow \infty. Also, \displaystyle e^0 = 1. Therefore, e^x < 1 for x < 0 and e^x > 1 for x > 0. Just reviewing here.

Now consider all the points in the s-plane for \sigma = 0, which is s = i2\pi f, or the y-axis. (This is an important part of the s-plane because it connects the Fourier transform to the Laplace transform.) For this set of points in the s-plane, the z variable is

z = e^0 e^{i2 \pi f T_s} = e^{i 2 \pi f T_s} \hfill (13)

and since f can be any real number, in this case z is a unit-magnitude number with any phase–a number on the unit circle in the z-plane. So we see that the entire frequency axis in the s-plane maps to the unit circle in the z-plane.

When \sigma = \sigma_- < 0, then

\displaystyle z = e^{\sigma_- T_s} e^{i2\pi f T_s} \hfill (14)

which is a set of points on a circle with radius e^{-|\sigma_-|T_s} < 1 and centered at the origin. Similarly, for \sigma = \sigma_+ > 0, the s-plane points on the vertical line \{(\sigma_+, f)\} map to a z-plane circle with radius e^{\sigma_+ T_s} > 1 and centered at the origin.

So consider the s-plane half-plane s\in \{(\sigma < \sigma_-, f)\}. These points map to z = e^{\sigma T_s} e^{i 2\pi f T_s} with \sigma < \sigma_-, which are points in the interior of the circle with radius e^{-|\sigma_-|T_s} and centered at the origin (consider what happens as \sigma \rightarrow -\infty).

Similarly, for the half-plane s \in \{(\sigma > \sigma_+, f)\}, the s values map to the points outside of the z-plane circle with radius e^{\sigma_+ T_s} (consider what happens as \sigma \rightarrow +\infty).
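If you want to see this mapping numerically, here is a minimal Python sketch (the particular values of T_s, \sigma, and f are arbitrary choices of mine, not anything special):

import numpy as np

Ts = 1.0e-3                             # sampling increment (arbitrary choice)
f = np.linspace(-200.0, 200.0, 5)       # a few frequencies (Hz)

for sigma in (-500.0, 0.0, 500.0):      # left half-plane, frequency axis, right half-plane
    s = sigma + 1j * 2 * np.pi * f
    z = np.exp(s * Ts)                  # the mapping z = exp(s*Ts)
    # |z| depends only on sigma: less than 1, equal to 1, or greater than 1
    print(sigma, np.abs(z))

Every point with \sigma < 0 lands strictly inside the unit circle, \sigma = 0 lands exactly on it, and \sigma > 0 lands outside, regardless of f.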

Our intuition might be, then, that regions of convergence for the Z-transform sum (9) consist of regions inside or outside of circles centered at the origin in the z plane. Let’s find out.

First though, we note that just as in the case of the Laplace transform, there are two kinds of Z transform: one-sided and two-sided. The two-sided transform is (9). The one-sided transform, which is appropriate for causal signals and systems, is simply

\displaystyle {\cal{Z}}\left[x(kT_s)\right] = X(z) = \sum_{k=0}^\infty x(kT_s) z^{-k} \hfill (15)

and it is this Z transform that will be our focus.
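To make (15) concrete, here is a minimal numerical sketch (the helper name z_transform_truncated is my own; it simply truncates the infinite sum at the available samples):

import numpy as np

def z_transform_truncated(x, z):
    # Approximate the one-sided Z transform, sum_{k>=0} x[k] z^{-k},
    # by summing over the available samples of x.
    x = np.asarray(x, dtype=complex)
    k = np.arange(len(x), dtype=float)
    return np.sum(x * complex(z) ** (-k))

# Hand-checkable example: X(2) = 3 + 1/2 + 4/4 + 1/8 + 5/16 = 4.9375
print(z_transform_truncated([3, 1, 4, 1, 5], z=2.0))   # -> (4.9375+0j)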

Examples of the Z Transform

Z Transform of an Impulse

Here the signal of interest is a discrete-time impulse, or Kronecker delta function, defined by

\displaystyle x(kT_s) = \left\{ \begin{array}{ll} 1, & k = 0, \\ 0, & \mbox{\rm otherwise} \end{array} \right. \hfill(16)

Figure 1. A discrete-time impulse function.

The (one-sided) Z transform is simple to evaluate in this case (and easy to guess)

\displaystyle {\cal{Z}}\left[x(kT_s)\right] = \sum_{k=0}^\infty x(kT_s) z^{-k} = x(0) z^{-0} = 1 \hfill (17)

which is clearly valid for all values of z, so the transform exists over the entire z-plane.

Z Transform of a Unit-Step Function

Here our discrete-time signal is defined as

\displaystyle x(kT_s) = \left\{ \begin{array}{ll} 1, & k \ge 0 \\ 0, & k < 0 \end{array} \right. \hfill (18)

as illustrated in Figure 2.

Figure 2. The discrete-time unit-step function.

Applying the Z-transform definition leads to an infinite (geometric) sum,

\displaystyle {\cal{Z}}\left[x(kT_s)\right] = \sum_{k=0}^\infty (1) z^{-k} = \sum_{k=0}^\infty z^{-k}. \hfill (19)

Recall the finite geometric series

\displaystyle S = \sum_{k=0}^{N-1} a^k = S_a(N) = \frac{1-a^N}{1-a} \ \ \ (a \neq 1) \hfill (20)

(which you can prove by forming the difference S - aS and canceling almost all terms). So the Z transform of the unit-step function is

\displaystyle {\cal{Z}} \left[u(kT_s)\right] = U(z) = \lim_{N\rightarrow\infty} \left[ \frac{1 - z^{-N}}{1 - z^{-1}} \right]. \hfill (21)

Under what conditions on z = e^{sT_s} = e^{(\sigma + i2\pi f)T_s} will this infinite sum converge? We already know that S_a(N) converges to 1/(1-a) under the condition that |a| < 1. Therefore we require

\displaystyle |z^{-1}| = e^{-\sigma T_s} < 1 \hfill (22)

or \sigma > 0. Perhaps more straightforwardly, |z^{-1}| < 1 \Rightarrow |z| > 1, which is illustrated in Figure 3. The final result is then

\displaystyle U(z) = \frac{1}{1 - 1/z} \ \ (|z| > 1). \hfill (23)

Figure 3. Region of convergence for the Z transform of a discrete-time unit-step function (shaded area).
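Here is a quick numerical check of (23): for a point with |z| > 1, a long truncated version of the defining sum should approach the closed form (my choice of z is arbitrary):

import numpy as np

z = 1.2 * np.exp(1j * 0.7)              # a point with |z| > 1, inside the region of convergence
k = np.arange(200, dtype=float)         # keep 200 terms of the sum
partial_sum = np.sum(z ** (-k))
closed_form = 1.0 / (1.0 - 1.0 / z)
print(np.abs(partial_sum - closed_form))   # ~1e-16: the truncated sum matches (23)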

It is worth recalling the prior two transforms we obtained for a unit-step function. These are (14) and (15) in the Laplace-transform post, reproduced here for fun:

\displaystyle {\cal{F}}\left[u(t)\right] = U(f) = \frac{1}{2}\delta(f) + \frac{1}{i2\pi f}

\displaystyle {\cal{L}}\left[u(t)\right] = U(s) = \frac{1}{s}, \ \ \ \Re(s) = \sigma > 0.

Satisfyingly, the Z transform of a unit-step function is more similar to the Laplace transform of a unit-step than to the Fourier transform, and is a simple rational function of z.

Z Transform of a Decaying Exponential

In the previous SPTK post on practical filters, we noted that several of the derived impulse-response functions took the form of decaying real-valued exponentials, or the impulse-response function involved a modulated exponential or an exponential combined with some other simple functions. Here then, when we look at the Z transform of a decaying real-valued exponential, we are building up to looking at transforms of impulse-response functions for discrete-time systems.

The function of interest is defined by a parameter a as

\displaystyle x(kT_s) = e^{-akT_s}u(kT_s) \ \ (a > 0) \hfill (24)

which is nothing more than a sampled causal continuous-time decaying exponential.

Figure 4. A real-valued decaying exponential as in (24).

Applying the definition of the Z transform yields the following sequence of equations,

\displaystyle X(z) = {\cal{Z}}\left[x(kT_s)\right] = \sum_{k=0}^\infty e^{-a k T_s} z^{-k} \hfill (25)

\displaystyle = \sum_{k=0}^\infty \left[ e^{-a T_s} z^{-1} \right]^k \hfill (26)

\displaystyle = \sum_{k=0}^\infty b^k \ \ \ (b = e^{-aT_s}z^{-1}) \hfill (27)

\displaystyle = \frac{1}{1-b}, \ \ \ |b| < 1 \hfill (28)

\displaystyle = \frac{1}{1-e^{-a T_s}z^{-1}}, \ \ \ \left| e^{-aT_s}z^{-1} \right| < 1 \hfill (29)

or, reexpressing the convergence condition,

\displaystyle X(z) = \frac{1}{1 - e^{-aT_s}z^{-1}}, \ \ \ |z| > e^{-aT_s}. \hfill (30)

Since a > 0 here, e^{-aT_s} < 1 and the region of convergence includes the unit circle, as illustrated in Figure 5.

Figure 5. The region of convergence for the Z transform of a decaying real exponential (shaded region).
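And a similar numerical check of (30), again with arbitrary parameter choices of my own:

import numpy as np

a, Ts = 2.0, 0.1                        # decay rate and sampling increment (arbitrary choices)
z = 0.9 * np.exp(1j * 1.0)              # |z| = 0.9 > exp(-a*Ts) ~ 0.82, so z is in the ROC
k = np.arange(500, dtype=float)
partial_sum = np.sum(np.exp(-a * k * Ts) * z ** (-k))
closed_form = 1.0 / (1.0 - np.exp(-a * Ts) / z)
print(np.abs(partial_sum - closed_form))   # tiny: the truncated sum matches (30)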

Z Transform of a Complex Exponential

Unlike the real-valued exponential, which either tends to zero or blows up as time tends to infinity, the complex-valued exponential is a periodic function. In particular, as we’ve seen many times, the complex exponential is the sum of two real-valued sine waves in phase quadrature,

\displaystyle e^{i 2 \pi f t} = \cos(2\pi f t) + i\sin(2\pi f t). \hfill (31)

In our current discrete-time setting, we have the following sampled complex exponential,

\displaystyle x(kT_s) = e^{i 2 \pi f_0 (kT_s)}u(kT_s) \hfill (32)

which is our sine wave, and we can easily apply the one-sided Z transform here, making use of the infinite geometric series as before. Skipping the step-by-step application of the definition yields a result similar to that for the real-valued exponential,

\displaystyle X(z) = {\cal{Z}}\left[x(kT_s)\right] = \sum_{k=0}^\infty e^{i 2 \pi f_0 kT_s} z^{-k} \hfill (33)

\displaystyle = \frac{1}{1-e^{i2\pi f_0 T_s}z^{-1}}, \ \ \ |z| > 1. \hfill (34)

The region of convergence is the same as for the unit-step function, which is shown graphically in Figure 3. Why would it make sense that they are the same? Hint: One is a special case of the other.

The defining sum for the Z transform is a linear operation, which implies that the Z transform for the sum of signals is the sum of the Z transforms,

\displaystyle {\cal{Z}}\left[ a_1 x_1(kT_s) + a_2x_2(kT_s)\right] = {\cal{Z}}\left[a_1x_1(kT_s)\right] + {\cal{Z}} \left[a_2 x_2(kT_s)\right]  \hfill (35)

\displaystyle = a_1X_1(z) + a_2X_2(z). \hfill (36)

We can use this linearity property to easily compute the Z transforms for real-valued sine waves.

Z Transform of a Sine Wave

We can use Euler’s formula (31) to express \sin(\cdot) in terms of complex exponentials,

\displaystyle x(kT_s) = \sin(2\pi f_0 kT_s) = \frac{1}{2i} \left[ e^{i2\pi f_0 kT_s} - e^{-i2\pi f_0 kT_s} \right] \hfill (37)

\displaystyle = \frac{-i}{2} \left[e^{i2\pi f_0 kT_s} - e^{i2\pi (-f_0)kT_s} \right] \hfill (38)

Now we can use the Z transform result for a complex exponential together with linearity and some algebra to yield

\displaystyle X(z) = \frac{z^{-1}\sin(2\pi f_0 T_s)}{1 - 2z^{-1}\cos(2\pi f_0 T_s) + z^{-2}} \ \ \ |z| > 1. \hfill (39)
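Here is a numerical check of (39), truncating the defining sum at a large number of terms (the parameter values are arbitrary choices of mine):

import numpy as np

f0, Ts = 7.0, 0.01                      # sine-wave frequency (Hz) and sampling increment (s)
z = 1.05 * np.exp(1j * 2.4)             # |z| > 1, inside the region of convergence
k = np.arange(2000, dtype=float)        # keep many terms of the defining sum
partial_sum = np.sum(np.sin(2 * np.pi * f0 * k * Ts) * z ** (-k))
numerator = z ** (-1.0) * np.sin(2 * np.pi * f0 * Ts)
denominator = 1.0 - 2.0 * z ** (-1.0) * np.cos(2 * np.pi * f0 * Ts) + z ** (-2.0)
print(np.abs(partial_sum - numerator / denominator))   # tiny: the truncated sum matches (39)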

The Z Transform and Linear Shift-Invariant Discrete-Time Systems [Preview]

Suppose we reconsider our first-order continuous-time system from the practical filters post, which has input and output signals related by the differential equation

\displaystyle \tau y^\prime(t) + y(t) = x(t). \hfill (40)

We can examine this system in discrete time by simply sampling it every T_s seconds, which yields

\displaystyle \tau y^\prime (kT_s) + y(kT_s) = x(kT_s). \hfill (41)

Now, if the sampling increment T_s (reciprocal of the more familiar sampling rate, f_s = 1/T_s) is small enough, the derivative y^\prime(kT_s) is well-approximated by a difference

\displaystyle y^\prime (kT_s) \approx \frac{y(kT_s) - y((k-1)T_s)}{T_s} \hfill (42)

and the differential equation (40) becomes a difference equation,

\displaystyle \frac{\tau}{T_s} y(kT_s) - \frac{\tau}{T_s}y(kT_s-T_s) + y(kT_s) = x(kT_s) \hfill (43)

\displaystyle -\frac{\tau}{T_s} y(kT_s-T_s) + \left[ 1 + \frac{\tau}{T_s} \right] y(kT_s) = x(kT_s) \hfill (44)

\displaystyle -\frac{\tau/T_s}{1 + \tau/T_s} y(kT_s-T_s) + y(kT_s) = \frac{1}{1 + \tau/T_s} x(kT_s). \hfill (45)

Since the factor multiplying the input x(kT_s) will just scale the system response (that is, we can just consider the new input b(kT_s) = \frac{1}{1 + \tau/T_s} x(kT_s)), we can consider the generic first-order difference equation defined by a single constant K (for this particular system, K = -\frac{\tau/T_s}{1+\tau/T_s}),

\displaystyle Ky(kT_s-T_s) + y(kT_s) = x(kT_s) \hfill (46)
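As a quick aside, we can check how well the discretization tracks the original continuous-time system. The following sketch (assuming scipy is available; the values of \tau and T_s are arbitrary choices of mine) runs the difference equation (44) against the known step response of (40):

import numpy as np
from scipy.signal import lfilter

tau, Ts = 1.0e-3, 1.0e-5                # time constant and sampling increment (Ts << tau)
N = 1000
t = np.arange(N) * Ts
x = np.ones(N)                          # unit-step input

# Difference equation (44): (1 + tau/Ts) y[k] - (tau/Ts) y[k-1] = x[k]
b = [1.0]
a = [1.0 + tau / Ts, -tau / Ts]
y = lfilter(b, a, x)                    # lfilter normalizes by a[0]

y_exact = 1.0 - np.exp(-t / tau)        # step response of tau*y'(t) + y(t) = x(t)
print(np.max(np.abs(y - y_exact)))      # about Ts/tau (1e-2 here); shrinks as Ts -> 0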

Let’s analyze this discrete-time system with our Z-transform knowledge. Since the difference equation holds for all time, we’ll use the two-sided transform for convenience,

\displaystyle {\cal{Z}} \left[Ky(kT_s-T_s) + y(kT_s) \right] = {\cal{Z}}\left[x(kT_s)\right] \hfill (47)

\displaystyle K {\cal{Z}} \left[ y(kT_s-T_s) \right] + {\cal{Z}} \left[ y(kT_s) \right] = {\cal{Z}} \left[ x(kT_s) \right] \hfill (48)

\displaystyle K {\cal{Z}} \left[ y(kT_s - T_s)\right] + Y(z) = X(z) \hfill (49)

\displaystyle K \sum_{k=-\infty}^\infty y((k-1)T_s) z^{-k} + Y(z) = X(z). \hfill (50)

Let k_0 = k-1 and switch variables in the sum to yield

\displaystyle Kz^{-1}Y(z) + Y(z) = X(z) \hfill (51)

or

\displaystyle \frac{Y(z)}{X(z)} = \frac{1}{1 + Kz^{-1}}. \hfill (52)

Is this the discrete-time Z-transform-based analog to the transfer function for a continuous-time linear time-invariant system? If so, we expect that the output for a sine wave input x(kT_s) = e^{i 2 \pi f_0 kT_s} is y(kT_s) = H(f_0) x(kT_s), where H(\cdot) is related to Y(z)/X(z) in (52). We can check this directly,

\displaystyle y(kT_s) + Ky(kT_s-T_s) = x(kT_s) \hfill (53)

\displaystyle \Rightarrow H(f_0)e^{i2\pi f_0 kT_s} + KH(f_0) e^{i 2\pi f_0(k-1)T_s} = e^{i 2\pi f_0 k T_s} \hfill (54)

\displaystyle H(f_0) \left(1 + Ke^{-i2\pi f_0 T_s}\right) = 1 \hfill (55)

\displaystyle \left( \left. H(z)\right|_{z=e^{i2\pi f_0 T_s}} \right) \left(1 + Ke^{-i2\pi f_0 T_s}\right) = 1 \hfill (56)

This implies that indeed H(z) = Y(z)/X(z) is the frequency response of the system when z is constrained to lie on the unit circle, z = e^{i2\pi f_0 T_s}. We will investigate this more closely when we look at the relationship between the Z transform and convolution, because we already know that discrete-time convolution relates the input and output of linear shift-invariant systems.
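Here is a minimal numerical check of (52) and (56): drive the difference equation with a complex sine wave and compare the steady-state output to H(e^{i2\pi f_0 T_s}) times the input (my choices of K, f_0, and T_s are arbitrary; scipy.signal.lfilter runs the recursion):

import numpy as np
from scipy.signal import lfilter

K_sys, f0, Ts = 0.5, 12.0, 1.0e-3       # difference-equation constant, tone frequency (Hz), sampling increment (s)
N = 5000
k = np.arange(N)
x = np.exp(1j * 2 * np.pi * f0 * k * Ts)           # complex sine-wave input

# Difference equation (46): y[k] + K*y[k-1] = x[k]
y = lfilter([1.0], [1.0, K_sys], x)

H = 1.0 / (1.0 + K_sys * np.exp(-1j * 2 * np.pi * f0 * Ts))   # H(z) evaluated at z = exp(i*2*pi*f0*Ts)
print(np.abs(y[-1] - H * x[-1]))        # ~0: for large k the output is H(f0) times the input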

The Z Transform and Signal-Processing Operations

In this section we look at how different mathematical operations on signals are reflected in their Z transforms. Of particular interest to us as signal processors aiming at using and understanding the statistics of communication signals are elementary signal-processing operations such as delaying, scaling, multiplying, and convolving.

Delay

Let the delayed signal be y(kT_s) = x(kT_s - DT_s), where D \ge 0 and x(kT_s) = x(kT_s)u(kT_s). The delay operation is illustrated in Figure 6 for D = 2.

Since x(kT_s) is causal and D \ge 0, then y(kT_s) is causal. The Z transform for y(kT_s) follows easily

\displaystyle Y(z) = {\cal{Z}}\left[y(kT_s)\right] = {\cal{Z}}\left[x(kT_s - DT_s)\right] \hfill (57)

\displaystyle = \sum_{k=D}^\infty x((k-D)T_s) z^{-k} \hfill (58)

\displaystyle = \sum_{k=0}^\infty x(kT_s) z^{-(k+D)} \hfill (59)

\displaystyle = z^{-D} \sum_{k=0}^\infty x(kT_s) z^{-k} \hfill (60)

\Rightarrow Y(z) = z^{-D} X(z) \ \ \ (D \ge 0). \hfill (61)

Figure 6. A discrete-time signal x(kT_s) and its delayed version y(kT_s).

If D=1, then Y(z) = z^{-1}X(z). That is, if a signal is delayed by one sample, then the Z transform of the delayed signal is just z^{-1} times the Z transform of the original signal. This is why signal-processing block diagrams show delay elements as boxes labeled z^{-D}–this is just a short-hand way of saying ‘this block delays its input by D samples.’ We’ll see some relevant-to-CSP examples at the bottom of the post.
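Here is a quick numerical confirmation of the delay property (61), using an arbitrary finite-length causal sequence (a sketch; nothing here is specific to any particular signal):

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)             # a finite-length causal sequence (zero for k >= 32)
D = 2
y = np.concatenate([np.zeros(D), x])    # y[k] = x[k - D]

z = 1.3 * np.exp(1j * 0.4)              # any nonzero z works for finite-length sequences
Xz = np.sum(x * z ** (-np.arange(len(x), dtype=float)))
Yz = np.sum(y * z ** (-np.arange(len(y), dtype=float)))
print(np.abs(Yz - z ** (-float(D)) * Xz))   # ~0: delay by D multiplies the transform by z^{-D}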

Advance

Here the signal x(kT_s) is moved to the left on the time axis, rather than to the right, so D \ge 0 in

\displaystyle y(kT_s) = x(kT_s + DT_s) \ \ \ D \ge 0 \hfill (62)

Proceeding carefully, we have the following sequence of equations

\displaystyle {\cal{Z}}\left[y(kT_s)\right] = {\cal{Z}}\left[x((k+D)T_s)\right] \hfill (63)

\displaystyle = \sum_{k=0}^\infty x((k+D)T_s) z^{-k} \hfill (64)

\displaystyle = \sum_{k_0=D}^{\infty} x(k_0 T_s) z^{-(k_0 - D)} \hfill (65)

\displaystyle = z^{D} \left[ \sum_{k=D}^\infty x(kT_s)z^{-k} \right] \hfill (66)

\displaystyle = z^D \left[ \sum_{k=0}^{D-1} x(kT_s) z^{-k} - \sum_{k=0}^{D-1} x(kT_s)z^{-k} + \sum_{k=D}^\infty x(kT_s)z^{-k} \right] \hfill (67)

\displaystyle \Rightarrow Y(z) = z^D X(z) - z^D \left[\sum_{k=0}^{D-1} x(kT_s) z^{-k} \right]. \hfill (68)

Scaling

Since the Z transform is a sum involving x(kT_s) multiplied by powers of z, scaling x(kT_s) simply scales X(z). This is a consequence of the linearity of the sum,

\displaystyle x(kT_s) \Longleftrightarrow X(z) \Rightarrow Ax(kT_s) \Longleftrightarrow AX(z). \hfill (69)

Convolution

We’ve already established that discrete-time linear shift-invariant systems (the discrete-time analog to the usual continuous-time linear time-invariant systems) are characterized by their impulse-response and transfer functions, such that the output y(k) is the convolution of the input x(k) and the impulse response h(k). The transfer function H(f), or frequency response, is the discrete-time Fourier transform of h(k).

So convolution is central to linear systems, and the Z transform is a powerful analysis tool for discrete-time systems–more powerful than the discrete Fourier transform–so it will pay us to look at the connection between the Z transform and convolution.

Let’s consider the convolution of two arbitrary discrete-time sequences x(kT_s) and w(kT_s),

\displaystyle y(kT_s) = \sum_{j=-\infty}^\infty x(jT_s) w((k-j)T_s) \hfill (70)

What is the Z transform of y(kT_s)? Let’s use the two-sided Z transform to find out.

\displaystyle Y(z) = {\cal{Z}}_2 \left[y(kT_s)\right] = {\cal{Z}}_2\left[ \sum_{j=-\infty}^\infty x(jT_s) w((k-j)T_s) \right] \hfill (71)

\displaystyle = \sum_{k=-\infty}^\infty \left[ \sum_{j=-\infty}^\infty x(jT_s) w((k-j)T_s) \right] z^{-k} \hfill (72)

\displaystyle = \sum_{j=-\infty}^\infty x(jT_s) \left[ \sum_{k=-\infty}^\infty w((k-j)T_s)z^{-k}\right] \hfill (73)

\displaystyle = \sum_{j=-\infty}^\infty x(jT_s) \left[ \sum_{k_0=-\infty}^\infty w(k_0T_s) z^{-(k_0 + j)} \right] \hfill (74)

\displaystyle = \underbrace{\left[ \sum_{j=-\infty}^\infty x(jT_s)z^{-j}\right]}_{X(z)} \underbrace{\left[ \sum_{k=-\infty}^\infty w(kT_s) z^{-k} \right]}_{W(z)} \hfill (75)

\displaystyle \Rightarrow Y(z) = X(z)W(z) \hfill (76)

So, once again, the transform of a convolution is the product of the individual transforms. Converting convolutions into simpler products is a key reason to study and use transforms of various types.

Can you prove that the result holds for causal signals and the one-sided transform?
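Here is a hedged numerical check of (76) for finite-length sequences, for which the one-sided and two-sided transforms coincide (the helper z_at is my own shorthand):

import numpy as np

def z_at(seq, z):
    # Z transform of a finite-length causal sequence, evaluated at the point z
    k = np.arange(len(seq), dtype=float)
    return np.sum(np.asarray(seq, dtype=complex) * z ** (-k))

rng = np.random.default_rng(1)
x = rng.standard_normal(16)
w = rng.standard_normal(8)
y = np.convolve(x, w)                   # discrete-time convolution, length 16 + 8 - 1 = 23

z = 0.8 * np.exp(1j * 2.0)              # any nonzero z works for finite-length sequences
print(np.abs(z_at(y, z) - z_at(x, z) * z_at(w, z)))   # ~0: Y(z) = X(z) W(z)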

The Z Transform, Frequency Response, and the FFT

For a shift-invariant linear system, the system output corresponding to an input that is an impulse, or delta, function occurring at time k=0 is called the impulse-response function,

\displaystyle \delta(kT_s) \mapsto h(kT_s) \hfill (77)

Since the system is shift-invariant,

\displaystyle \delta((k-k_0)T_s) \mapsto h((k-k_0)T_s)\hfill (78)

and we have already seen that we can represent any discrete-time signal as the sum of weighted shifted impulses,

\displaystyle x(kT_s) = \sum_{j=-\infty}^\infty x(jT_s)\delta(kT_s - jT_s) \hfill (79)

Combining all these facts, we can see that the output for any input x(kT_s) is just the convolution

\displaystyle x(kT_s) \mapsto \sum_{j=-\infty}^\infty x(jT_s) h((k-j)T_s) = y(kT_s) \hfill (80)

and by the convolution result,

Y(z) = H(z) X(z) \hfill (81)

Suppose the input is a causal complex sine wave x(kT_s) = e^{i 2 \pi f_0 kT_s}u(kT_s). Then

\displaystyle e^{i 2 \pi f_0 kT_s}u(kT_s) \Longleftrightarrow \frac{1}{1 - e^{i2 \pi f_0 T_s}z^{-1}} \hfill (82)

Therefore the output transform for this sine-wave input is

\displaystyle Y(z) = \left[ \frac{1}{1-e^{i2 \pi f_0 T_s}z^{-1}} \right] H(z) \hfill (83)

For our first-order system,

\displaystyle H(z) = \frac{1}{1 + Kz^{-1}} \hfill (84)

and it can be shown that (but not here)

\displaystyle y(kT_s) = \frac{e^{i2\pi f_0 k T_s}}{1+Ke^{-i2\pi f_0 T_s}} = e^{i2\pi f_0 k T_s} \left. H(z)\right|_{z=e^{i 2 \pi f_0 T_s}} \ \ \ (\mbox{\rm large}\ k) \hfill (85)

Therefore, the output for a sine-wave input is just a scaled version of that sine-wave input (once the transient has died out).

In general, the frequency response of a linear discrete-time shift-invariant system is given by the Z transform of the impulse response evaluated at the frequency of interest,

\displaystyle {\mbox{\rm Frequency\ Response}} = \left. H(z) \right|_{z=e^{i2\pi f T_s}} \hfill (86)

which requires that the Z transform H(z) has a convergence region that includes the unit circle in the z-plane: The frequency response is H(z) evaluated on the unit circle. This is analogous to the continuous-time case, in which the frequency response is equal to the Laplace transform H(s) evaluated on the imaginary axis s = i2\pi f, provided it exists there.

Let’s take a look at what evaluating the Z transform on the unit circle means in terms of operations we’ve already encountered.

\displaystyle \left. X(z) \right|_{z=e^{i2\pi f T_s}} = \left[\sum_{k=-\infty}^\infty x(kT_s)z^{-k} \right]_{z=e^{i2\pi fT_s}} \hfill (87)

\displaystyle = \sum_{k=-\infty}^\infty x(kT_s)e^{-i2\pi f k T_s}, \hfill (88)

which is the discrete-time continuous-frequency Fourier transform of x(kT_s). Now suppose that x(kT_s) is non-zero only for 0 \leq k \leq K-1. We then have

\displaystyle \left. X(z) \right|_{z=e^{i2\pi f T_s}} = \sum_{k=0}^{K-1} x(kT_s) e^{-i2\pi f k T_s}. \hfill (89)

Now let’s sample the function at K equispaced frequencies corresponding to one trip around the unit circle. For example, f=j/(KT_s), j=0, 1, \ldots, K-1. This leads to the following function of frequency ‘bin’ j,

\displaystyle X(j) = \sum_{k=0}^{K-1} x(kT_s) e^{-i2\pi (j/(KT_s))(kT_s)} \ \ \ j=0, 1, \ldots, K-1 \hfill (90)

\displaystyle = \sum_{k=0}^{K-1} x(kT_s) e^{-i2\pi jk/K}, \ \ \ j=0, 1, \ldots, K-1 \hfill (91)

which is simply the discrete Fourier transform. So the frequency response for a discrete-time linear shift-invariant system is intimately related to the discrete Fourier transform, which is efficiently computed by the fast Fourier transform algorithm ubiquitous in signal processing.
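To make the connection concrete, the following sketch samples a Z transform at K equispaced points on the unit circle and compares the result to the output of np.fft.fft (my choices of K, T_s, and the test sequence are arbitrary):

import numpy as np

rng = np.random.default_rng(2)
K = 64
Ts = 1.0e-4                             # sampling increment; it cancels out, kept to mirror the text
x = rng.standard_normal(K)              # a length-K causal sequence

j = np.arange(K)
f = j / (K * Ts)                        # the K equispaced frequencies f = j/(K*Ts)
z_points = np.exp(1j * 2 * np.pi * f * Ts)          # K points around the unit circle
k = np.arange(K, dtype=float)
X_circle = np.array([np.sum(x * zp ** (-k)) for zp in z_points])

print(np.max(np.abs(X_circle - np.fft.fft(x))))     # tiny: matches the DFT computed by the FFT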

Significance of the Z Transform in CSP

We’ve come a long way on the CSP Blog without needing the Z transform. So, it can’t be crucial. What it does help with, though, is understanding valuable signal-processing structures that can facilitate advanced CSP.

For example, consider what I call tunneling. This is a way to use CSP together with highly efficient and effective polyphase channelizers, such as the modified DFT filterbank (The Literature [R192]-[R194], My Papers [31,43]) or the polyphase FFT filterbank, that quickly break up a wideband sampled-data signal into a set of sampled signals with low bandwidth. These narrowband sampled signals at the output of a polyphase channelizer span the bandwidth of the input signal, and they can be selected for further processing or arithmetically modified and recombined, effectively performing linear time-invariant system processing. Filtering, in other words.

In tunneling, we apply spectral-correlation estimators to pairs of the narrowband outputs of the filterbank, looking for known cycle frequencies if we have that information or looking exhaustively for any cycle frequencies if we don’t.

So when we want to use, modify, extend, or just understand such structures, we often encounter their block diagrams which contain Z-transform-related notations and structures, as in Figures 7 and 8, which show the analysis and synthesis portions of the modified-DFT filterbank. Delays are indicated by z^{-N} notation, where N is the delay. Now you know exactly why.

Figure 7. The analysis (forward) and synthesis (reverse) structures in the basic modified DFT filterbank, which is one of many polyphase filterbanks used in signal processing (from My Papers [31]).
Figure 8. The analysis (forward) and synthesis (reverse) structures in the polyphase version of the modified DFT filterbank (My Papers [31]).



