# Signal Processing Operations and CSP

How does the cyclostationarity of a signal change when it is subjected to common signal-processing operations like addition, multiplication, and convolution?

It is often useful to know how a signal processing operation affects the probabilistic parameters of a random signal. For example, if I know the power spectral density (PSD) of some signal $x(t)$, and I filter it using a linear time-invariant transformation with impulse response function $h(t)$, producing the output $y(t)$, then what is the PSD of $y(t)$? This input-output relationship is well known and quite useful. The relationship is

$\displaystyle S_y^0(f) = \left| H(f) \right|^2 S_x^0(f). \hfill (1)$

In (1), the function $H(f)$ is the transfer function of the filter, which is the Fourier transform of the impulse-response function $h(t)$.
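Relation (1) is easy to check numerically. The following sketch (not from any paper; the filter, block length, and averaging count are illustrative choices) estimates the input and output PSDs by averaging periodograms and compares their ratio to $|H(f)|^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, blocks = 1024, 400
h = np.array([0.5, 1.0, 0.5])              # simple FIR impulse response

Sx = np.zeros(N)
Sy = np.zeros(N)
for _ in range(blocks):
    x = rng.standard_normal(N)             # white input, so S_x(f) is flat
    y = np.convolve(x, h, mode='same')     # filtered output
    Sx += np.abs(np.fft.fft(x))**2 / N     # periodogram of the input
    Sy += np.abs(np.fft.fft(y))**2 / N     # periodogram of the output
Sx /= blocks
Sy /= blocks

H = np.fft.fft(h, N)                       # transfer function on the FFT grid
err = np.max(np.abs(Sy/Sx - np.abs(H)**2))
print(err)                                 # small compared to max |H(f)|^2 = 4
```

The ratio of the two PSD estimates tracks $|H(f)|^2$ across the whole band, including the vicinity of the filter's spectral null, up to residual estimation noise and edge effects.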

Because the mathematical models of real-world communication signals can be constructed by subjecting idealized textbook signals to various signal-processing operations, such as filtering, it is of interest to us here at the CSP Blog to know how the spectral correlation function of the output of a signal processor is related to the spectral correlation function for the input. Similarly, we’d like to know such input-output relationships for the cyclic cumulants and the cyclic polyspectra.

Another benefit of knowing these CSP input-output relationships is that they tend to build insight into the meaning of the probabilistic parameters. For example, in the PSD input-output relationship (1), we already know that the transfer function at $f = f_0$ scales the input frequency component at $f_0$ by the complex number $H(f_0)$. So it makes sense that the PSD at $f_0$ is scaled by the squared magnitude of $H(f_0)$. If the filter transfer function is zero at $f_0$, then the density of averaged power at $f_0$ should vanish too.

So, let’s look at this kind of relationship for CSP parameters. All of these results can be found, usually with more mathematical detail, in My Papers [6, 13].

First let’s consider an old friend, the sum of signals. We’ve encountered this operation before, when we discussed and illustrated signal selectivity, the key property of cyclostationary probabilistic parameters like the spectral correlation function. Suppose we have the sum of $N$ statistically independent signals with zero mean values

$\displaystyle x(t) = \sum_{j=1}^N s_j(t). \hfill (2)$

When we look at the second-order moment of $x(t)$,

$\displaystyle R_x(t, [\tau_1, \tau_2]) = E \left[ x(t + \tau_1)x^*(t + \tau_2)\right] \hfill (3)$

we’ll see many cross terms such as $E[s_1(t+\tau_1)s_2^*(t+\tau_2)]$, which are zero because of the assumptions of independence and zero mean values. The expectation retains only the ‘auto’ terms such as $E[s_1(t+\tau_1)s_1^*(t+\tau_2)]$. We end up with the sum of autocorrelations

$\displaystyle R_x(t, [\tau_1, \tau_2]) = \sum_{j=1}^N R_{s_j}(t, [\tau_1, \tau_2]). \hfill (4)$

It follows that the cyclic autocorrelation functions are additive too, for the sum of independent signals. Since the spectral correlation function is just the Fourier transform of the cyclic autocorrelation, and the Fourier transform is linear, then the spectral correlation function for the sum of independent signals is the sum of their spectral correlation functions:

$\displaystyle S_x^\alpha(f) = \sum_{j=1}^N S_{s_j}^\alpha (f). \hfill (5)$
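Here is a numerical sketch of (5) for the special case $\alpha = 0$ (the PSD): for independent zero-mean signals, the estimated PSD of the sum matches the sum of the individual estimated PSDs, because the cross terms average away. The two component signals and all parameters below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, blocks = 1024, 500
H = np.fft.fft(np.array([1.0, -1.0]), N)   # shaping filter for the second signal

S1 = np.zeros(N); S2 = np.zeros(N); Ssum = np.zeros(N)
for _ in range(blocks):
    a = rng.standard_normal(N)                                    # white signal
    b = np.fft.ifft(H * np.fft.fft(rng.standard_normal(N))).real  # colored signal
    S1 += np.abs(np.fft.fft(a))**2 / N
    S2 += np.abs(np.fft.fft(b))**2 / N
    Ssum += np.abs(np.fft.fft(a + b))**2 / N
S1 /= blocks; S2 /= blocks; Ssum /= blocks

err = np.max(np.abs(Ssum - (S1 + S2)))     # residual cross-term noise only
print(err)
```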

These second-order relations are actually special cases of the more general relations involving $n$th-order cumulants. It is known that the cumulant function is cumulative for the sum of independent variables, so that we immediately have

$\displaystyle C_x(t,\boldsymbol{\tau}; n,m) = \sum_{j=1}^N C_{s_j}(t, \boldsymbol{\tau}; n,m), \hfill (6)$

from which it follows that the cyclic cumulants are cumulative too,

$\displaystyle C_x^\alpha(\boldsymbol{\tau}; n,m) = \sum_{j=1}^N C_{s_j}^\alpha (\boldsymbol{\tau}; n,m). \hfill (7)$

And from this result we know that the reduced-dimension cyclic cumulant and the cyclic polyspectra are also additive. These relations form the basis of the signal-selectivity property that is so useful in cyclostationary signal processing.

### Linear Time-Invariant Transformations (Filters)

Consider a linear time-invariant system with impulse-response function $h(t)$ and transfer function $H(f)$, where $h(t)$ and $H(f)$ are a Fourier-transform pair. Such systems are usually referred to as simply filters. It is straightforward (but tedious) to use the convolution integral representation of the filter’s input-output characteristic to determine the input-output relations for the cyclic cumulants and cyclic polyspectra. First, let’s define the input and output. The input is $x(t)$ and the output is $y(t)$, which are related in the time domain by

$\displaystyle y(t) = x(t) \otimes h(t), \hfill (8)$

or

$\displaystyle y(t) = \int x(u) h(t-u) \, du \hfill (9)$

$\displaystyle y(t) = \int x(t-u) h(u) \, du. \hfill (10)$

In the frequency domain, using the convolution theorem, we have the input-output relation

$\displaystyle Y(f) = H(f) X(f), \hfill (11)$

assuming, with good reason, that $H(f)$ exists.
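The convolution theorem behind (11) can be verified directly on the DFT grid, where circular convolution in time equals pointwise multiplication in frequency. A toy check with an arbitrary input and a short impulse response (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 256
x = rng.standard_normal(N)
h = rng.standard_normal(8)

# Frequency-domain filtering: Y(f) = H(f) X(f).
H = np.fft.fft(h, N)
y_freq = np.fft.ifft(H * np.fft.fft(x)).real

# Time-domain circular convolution: y[n] = sum_k h[k] x[(n - k) mod N].
y_time = np.zeros(N)
for k, hk in enumerate(h):
    y_time += hk * np.roll(x, k)

print(np.max(np.abs(y_time - y_freq)))     # agreement to machine precision
```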

The cyclic cumulant input-output relation for $x(t)$ and $y(t)$ is given by

$\displaystyle C_y^\alpha (\boldsymbol{\tau}; n,m) = \int \cdots \int \left[ \prod_{j=1}^n h^{(*)_j} (\lambda_j) \right] C_x^\alpha(\boldsymbol{\tau} - \boldsymbol{\lambda}; n,m) \, d\boldsymbol{\lambda}, \hfill (12)$

and the corresponding relation for the cyclic polyspectra is

$\displaystyle P_y^\alpha (\boldsymbol{g}; n,m) = \left[ H^{(*)_n}((-)_n[\alpha - \boldsymbol{1}^\dagger \boldsymbol{g}]) \prod_{j=1}^{n-1} H^{(*)_j} ((-)_j g_j) \right] P_x^\alpha(\boldsymbol{g}; n,m). \hfill (13)$

### Delay (Special Case of Filtering)

A delay (or advance) can be represented by a linear time-invariant system with an impulsive impulse response function,

$h(t) = \delta (t - D). \hfill (14)$

Using this in (12) yields the effect on the cyclic cumulant of a delay (or advance, depending on the sign of $D$):

$\displaystyle C_y^\alpha(\boldsymbol{\tau}; n,m) = C_x^\alpha(\boldsymbol{\tau} - \boldsymbol{1}D; n,m). \hfill (15)$

The lag-shifted cyclic cumulant can be shown to be the unshifted cyclic cumulant multiplied by a phase factor that depends on the delay $D$,

$\displaystyle C_x^\alpha(\boldsymbol{\tau} - \boldsymbol{1}D; n,m) = C_x^\alpha(\boldsymbol{\tau}; n,m) e^{-i2\pi \alpha D}. \hfill (16)$

So a delay induces a phase shift in the cyclic cumulant but does not affect its magnitude. And the cyclic cumulant retains whatever center point it had in $n$-dimensional lag ($\boldsymbol{\tau}$) space before the delay was applied.
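We can check (15)-(16) numerically at order $(n,m) = (2,1)$ and $\boldsymbol{\tau} = \boldsymbol{0}$: delaying a signal by $D$ multiplies its cyclic autocorrelation at cycle frequency $\alpha$ by $e^{-i2\pi\alpha D}$ and leaves the magnitude alone. Using a circular shift and an FFT-grid cycle frequency makes the identity exact; the toy signal below (noise with periodically varying power) is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 4096, 37
alpha = 16 / N                                  # cycle frequency on the FFT grid
t = np.arange(N)

# Toy cyclostationary signal: noise whose power varies periodically.
x = (1 + 0.8*np.cos(2*np.pi*alpha*t)) * rng.standard_normal(N)
y = np.roll(x, D)                               # y(t) = x(t - D), circularly

# Empirical cyclic autocorrelations at lag tau = 0.
Rx = np.mean(np.abs(x)**2 * np.exp(-2j*np.pi*alpha*t))
Ry = np.mean(np.abs(y)**2 * np.exp(-2j*np.pi*alpha*t))

print(abs(Ry - Rx*np.exp(-2j*np.pi*alpha*D)))   # phase relation: ~ 0
print(abs(Ry) - abs(Rx))                        # magnitudes agree: ~ 0
```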

### Product Modulation (Multiplication by Another Signal)

Next, consider the multiplication of two statistically independent signals,

$\displaystyle y(t) = z(t) x(t). \hfill (17)$

We’ll look at some examples below. It is easy to show what product modulation does to the $n$th-order moment functions, owing to the assumed statistical independence of $x(t)$ and $z(t)$. We have

$\displaystyle R_y(t, \boldsymbol{\tau}; n,m) = R_z(t, \boldsymbol{\tau}; n,m) R_x(t, \boldsymbol{\tau}; n,m), \hfill (18)$

which follows from the properties of the expectation operator and statistical independence. We can substitute the expression relating the temporal moment function to the cyclic temporal moment functions for each of $x$ and $z$ to obtain

$\displaystyle R_y(t, \boldsymbol{\tau}; n,m) = \left[ \sum_{\beta} R_z^\beta(\boldsymbol{\tau}; n,m) e^{i2 \pi \beta t} \right] \left[ \sum_{\gamma} R_x^\gamma (\boldsymbol{\tau};n,m) e^{i 2 \pi \gamma t} \right]. \hfill (19)$

Now, we know we can extract the cyclic moment from the temporal moment $R_y(t, \boldsymbol{\tau};n,m)$ by Fourier-series analysis,

$\displaystyle R_y^\alpha (\boldsymbol{\tau}; n,m) = \langle R_y(t, \boldsymbol{\tau};n,m) e^{-i 2 \pi \alpha t} \rangle, \hfill (20)$

where the angle brackets denote infinite time averaging. This leads to a formula that shows the mixing of the cyclic features for the two signals,

$\displaystyle R_y^\alpha (\boldsymbol{\tau}; n,m) = \sum_\beta R_z^\beta (\boldsymbol{\tau}; n,m) R_x^{\alpha - \beta} (\boldsymbol{\tau};n,m), \hfill (21)$

where $\alpha - \beta$ must equal a cycle frequency $\gamma$ for $x(t)$. So the sum is over all pairs of cycle frequencies for $z(t)$ and $x(t)$ that sum to $\alpha$.
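The mixing formula (21) can be checked numerically at $(n,m) = (2,1)$ and $\boldsymbol{\tau} = \boldsymbol{0}$ for the case $x(t) = \cos(2\pi f_1 t)$, whose second-order lag product has cycle frequencies $\{0, 2f_1, -2f_1\}$ with coefficients $\{1/2, 1/4, 1/4\}$, so (21) reduces to a three-term sum. With FFT-grid frequencies the identity is exact; the toy signal $z(t)$ and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4096
t = np.arange(N)
alpha0, f1 = 64/N, 256/N                  # FFT-grid cycle and carrier frequencies

z = (1 + 0.8*np.cos(2*np.pi*alpha0*t)) * rng.standard_normal(N)
x = np.cos(2*np.pi*f1*t)
y = z * x

# Empirical cyclic moment at lag tau = 0 and cycle frequency a.
cyc = lambda w, a: np.mean(w * np.exp(-2j*np.pi*a*t))

lhs = cyc(np.abs(y)**2, alpha0 + 2*f1)    # feature of y at a shifted frequency
rhs = (0.5*cyc(np.abs(z)**2, alpha0 + 2*f1)
       + 0.25*cyc(np.abs(z)**2, alpha0)
       + 0.25*cyc(np.abs(z)**2, alpha0 + 4*f1))
print(abs(lhs - rhs))                     # mixing identity: ~ 0
```

Note that $y(t)$ exhibits a strong feature at $\alpha_0 + 2f_1$, a cycle frequency that neither factor has on its own: it arises from the mixing of $z(t)$'s feature at $\alpha_0$ with the gating tone at $2f_1$.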

That’s about as far as I know how to take product modulation without adding some further property to either $x(t)$ or $z(t)$. The cyclic cumulant doesn’t have a simple formula in the general case. However, if one of the signals, say $x(t)$, is non-random, then we can indeed find a formula for the cyclic cumulants of $y(t)$.

First, notice that for a non-random signal (in our case this means a constant, periodic signal, or polyperiodic signal), the moment function is equal to the lag product itself. That is, consider the $n$th-order lag product

$\displaystyle L_x(t, \boldsymbol{\tau};n,m) = \prod_{j=1}^n x^{(*)_j}(t + \tau_j). \hfill (22)$

We have

$E\left[ L_x(t, \boldsymbol{\tau};n,m) \right] = L_x(t, \boldsymbol{\tau};n,m). \hfill (23)$

So our general temporal moment formula becomes, for any order $n,$

$\displaystyle R_y(t, \boldsymbol{\tau}; n,m) = R_z(t, \boldsymbol{\tau}; n,m) L_x(t, \boldsymbol{\tau};n,m). \hfill (24)$

When we find the expression for the temporal cumulant for $y(t)$ by combining lower-order moments, each lower-order moment will have the form (24) for some order $n^\prime \leq n$, so that we end up with the cumulant for $z(t)$ multiplied by the lag product (moment) for $x(t)$,

$\displaystyle C_y(t, \boldsymbol{\tau};n,m) = C_z(t, \boldsymbol{\tau};n,m) L_x(t, \boldsymbol{\tau};n,m) \hfill (25)$

or

$\displaystyle C_y(t, \boldsymbol{\tau};n,m) = C_z(t, \boldsymbol{\tau};n,m) R_x(t, \boldsymbol{\tau};n,m), \hfill (26)$

so that the cyclic cumulant is given by the mixture of the cyclic cumulants for $z(t)$ and the cyclic moments (Fourier components of the lag product in this case) of $x(t)$,

$\displaystyle C_y^\alpha(\boldsymbol{\tau};n,m) = \sum_\beta C_z^\beta(\boldsymbol{\tau};n,m) R_x^{\alpha - \beta}(\boldsymbol{\tau};n,m). \hfill (27)$

The conclusion is that when a cyclostationary signal is multiplied by a periodic (or polyperiodic) function, the resulting signal has cycle frequencies that are shifted versions of the cycle frequencies of the original signal. For example, if $z(t)$ has a cycle frequency of $0.1$, then $y(t)$ will have that cycle frequency only if $L_x(t, \boldsymbol{\tau};n,m)$ has a cycle frequency of $0$, that is, only if the $n$th-order lag product for $x(t)$ has a non-zero average value. Even if it does have a non-zero average value, that value might be small. Thus, product modulation has serious implications for the output cycle frequencies in terms of those of the input.

A useful special case of product modulation is obtained by setting the non-random signal $x(t)$ equal to a complex constant $x(t) = A = Be^{i\theta}$, where $A$ is complex, $B$ is real and non-negative, and $\theta$ is real. In this case

$\displaystyle L_x(t, \boldsymbol{\tau};n,m) = (A)^{n-m} (A^*)^m = B^n e^{i\theta(n-2m)}.$

For example, if we know the cyclic cumulants for some signal $s(t)$, then we can easily find the cyclic cumulants for $s(t)e^{i\phi_0}$, where $\phi_0$ could represent an unknown carrier phase.
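The constant-multiplier case is easy to verify at second order: per the lag-product factor $B^n e^{i\theta(n-2m)}$, a non-conjugate feature ($n - 2m = 0$) is unchanged by the phase rotation, while a conjugate feature ($n - 2m = 2$) rotates by $e^{i2\phi}$. The toy signal and parameters below are illustrative, and the identities are exact.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4096
t = np.arange(N)
alpha0 = 64 / N
s = (1 + 0.8*np.cos(2*np.pi*alpha0*t)) * rng.standard_normal(N)  # real toy signal

phi = 1.234
y = s * np.exp(1j*phi)                    # constant phase rotation

# Empirical cyclic moment at lag tau = 0 and cycle frequency a.
cyc = lambda w, a: np.mean(w * np.exp(-2j*np.pi*a*t))

d_nc = abs(cyc(y*np.conj(y), alpha0) - cyc(s*s, alpha0))        # (n,m) = (2,1): n-2m = 0
d_c  = abs(cyc(y*y, alpha0) - np.exp(2j*phi)*cyc(s*s, alpha0))  # (n,m) = (2,0): n-2m = 2
print(d_nc, d_c)                          # both ~ machine precision
```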

### Time Gating (Special Case of Product Modulation)

By time gating I mean the signal of interest appears only in regularly spaced time windows or gates. This may happen if a communication system allows the signal to transmit only during certain time slots, or if a receiver follows a schedule for signal detection that allows reception only during brief well-separated regularly spaced intervals. In either case, we might model the signal as the ungated (undisturbed) signal multiplied by a periodic binary-valued gating or windowing function. Mathematically, we have the model

$\displaystyle y(t) = x(t) z(t), \hfill (28)$

where $z(t)$ is the cyclostationary signal of interest and $x(t)$ is the binary periodic function defined by

$\displaystyle x(t) = \sum_{k=-\infty}^\infty r_{T_2} (t - k T_1). \hfill (29)$

Here the $r_{T_2}(\cdot)$ function is a unit-height rectangle, centered at the origin, with width $T_2$. So $x(t)$ is a pulse train, provided $T_1 > T_2$. Because $x(t)$ is periodic with period $T_1$, so are its lag products. Therefore, the lag products have cycle frequencies (Fourier frequencies) equal to $k/T_1$.
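We can compute the Fourier coefficients of a discrete-time version of the gating function (29) and see that they sit at the harmonics $k/T_1$, with magnitudes closely following the envelope $(T_2/T_1)\,\mathrm{sinc}(k T_2/T_1)$ familiar from the Fourier series of a rectangular pulse train. A sketch using the example's values $T_1 = 256$ and $T_2 = 64$ (the rectangle is placed at the start of each period rather than centered, which changes only the phase of the coefficients):

```python
import numpy as np

T1, T2 = 256, 64
N = 10 * T1                                  # an integer number of periods
t = np.arange(N)
x = ((t % T1) < T2).astype(float)            # rectangular pulse train

def coef(k):
    """Empirical Fourier coefficient of x(t) at cycle frequency k/T1."""
    return np.mean(x * np.exp(-2j*np.pi*k*t/T1))

print(coef(0).real)                          # duty cycle T2/T1 = 0.25
print(abs(coef(1)), (T2/T1)*abs(np.sinc(T2/T1)))  # ~ equal (sinc envelope)
print(abs(coef(4)))                          # null where k*T2/T1 is an integer
```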

We used this kind of gating function in My Papers [34], where we were trying to detect the presence of a cyclostationary signal, but were allowed to receive RF data only during short periodically occurring time windows.

Here is a numerical example illustrating the time-gating effect. We consider a textbook BPSK signal with square-root raised-cosine pulses having roll-off of $0.5$, a symbol rate of $1/T_0 = 1/10$, and a carrier offset frequency of $0.05$. The gating function is a binary rectangular pulse train, with $T_1 = 256$ samples and $T_2 = 64$ samples. The blindly estimated (using the SSCA) cycle frequencies and their associated spectral correlation magnitudes for the BPSK signal and its gated version are shown here:

So for the ungated signal, we see the prominent BPSK cycle frequency at $0.1$, as expected. For the gated signal, we see $0.1$ as well, but also several other cycle frequencies offset from $0.1$ by multiples of $1/256$. Similarly, the power-related non-conjugate cycle frequency of $0$ is scaled and shifted to various harmonics of $1/256$. You can even see a sinc-function-like shape in the variation of the gated-signal spectral correlation magnitudes, as one might expect since the gating signal involves rectangles.

### Frequency Translation (Special Case of Product Modulation)

An important special case of product modulation is frequency translation, where a signal is multiplied by a sine wave in order to shift its support in frequency. For example, a baseband signal is translated to an intermediate frequency by multiplying it by a carrier sine wave.

Mathematically, we have

$\displaystyle y(t) = z(t) e^{i2\pi f_1 t}, \hfill (30)$

where $f_1$ can be positive or negative. So in this case $x(t) = e^{i2\pi f_1 t}$ and the lag product is

$\displaystyle L_x (t, \boldsymbol{\tau};n,m) = \prod_{j=1}^n \left[ e^{i2\pi f_1 (t+\tau_j)} \right]^{(*)_j}, \hfill (31)$

or

$\displaystyle L_x (t, \boldsymbol{\tau};n,m) = \left[ \prod_{j=1}^n e^{(-)_j i 2 \pi f_1 t} \right] \left[ \prod_{j=1}^n e^{(-)_j i 2 \pi f_1 \tau_j} \right]. \hfill (32)$

The first bracketed term is the only one that depends on time, and is equivalent to

$\displaystyle e^{i 2\pi f_1 t (n-2m)}$

because there are $m$ conjugations. The second bracketed term is a phase factor that depends on the frequency $f_1$ and the various delays $\tau_j$. So, the multiplication of the signal $z(t)$ by a sine wave ends up shifting all of $z(t)$'s cycle frequencies by $(n-2m)f_1$.

Note that when $m = n/2$, $n-2m = 0$ and the cycle frequencies for $y(t)$ are the same as those for $z(t)$. In particular, when $(n,m) = (2, 1)$, the non-conjugate cycle frequencies for $y(t)$ are unchanged from those for $z(t)$. Frequency shifting does not affect non-conjugate cycle frequencies.
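Both facts are easy to check numerically at second order: multiplying $z(t)$ by $e^{i2\pi f_1 t}$ leaves the non-conjugate ($(n,m) = (2,1)$) cycle frequencies alone and shifts the conjugate ($(n,m) = (2,0)$) cycle frequencies by $2f_1$. With FFT-grid frequencies the identities below are exact; the toy real signal $z(t)$ is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 4096
t = np.arange(N)
alpha0, f1 = 64/N, 256/N                  # FFT-grid cycle and shift frequencies

z = (1 + 0.8*np.cos(2*np.pi*alpha0*t)) * rng.standard_normal(N)  # real toy signal
y = z * np.exp(2j*np.pi*f1*t)             # frequency-translated signal

# Empirical cyclic moment at lag tau = 0 and cycle frequency a.
cyc = lambda w, a: np.mean(w * np.exp(-2j*np.pi*a*t))

d_nc = abs(cyc(np.abs(y)**2, alpha0) - cyc(z**2, alpha0))   # non-conjugate: unchanged
d_c  = abs(cyc(y*y, alpha0 + 2*f1) - cyc(z**2, alpha0))     # conjugate: shifted by 2*f1
print(d_nc, d_c)                          # both ~ machine precision
```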

Another important signal processing operation is periodic sampling. My colleague Antonio Napolitano has examined that topic in great detail. So if you can’t wait for a future post on it here at the CSP Blog, go check out his work! (Start in The Literature, and see this post about his textbooks.)