# Cyclic Polyspectra

Higher-order statistics in the frequency domain for cyclostationary signals. As complicated as it gets at the CSP Blog.

In this post we take a first look at the spectral parameters of higher-order cyclostationarity (HOCS). In previous posts, I have introduced the topic of HOCS and have looked at the temporal parameters, such as cyclic cumulants and cyclic moments. Those temporal parameters have proven useful in modulation classification and parameter estimation settings, and will likely be an important part of my ultimate radio-frequency scene analyzer.

The spectral parameters of HOCS have not proven to be as useful as the temporal parameters unless you include the trivial case where the moment/cumulant order is equal to two. In that case, the spectral parameters reduce to the spectral correlation function, which is extremely useful in CSP (see the TDOA and signal-detection posts for examples).

When we started thinking about generalizing second-order cyclostationarity (SOCS) to HOCS in the time domain, it seemed natural to consider higher-order moment functions. This followed from thinking of the time-varying autocorrelation function as a second-order moment. So, what can be learned from third-order time-varying moments, or fourth-order, or $n$th-order? That led us to cumulants, and in particular cyclic cumulants, which have the same highly desirable signal-selectivity properties as does the spectral correlation function. Higher-order temporal moments, on the other hand, do not always possess the properties we seek.

Now we wish to look at HOCS in the frequency domain. We can parallel our development in the time domain by recalling that the spectral correlation function is a second-order frequency-domain moment. Any guesses about which class of functions (spectral moments or spectral cumulants) will be most useful? Most well-behaved?

### Second-Order Spectral Moments and Cumulants

Let’s start off by reviewing what we did for second order: the spectral correlation function. This is a kind of second-order moment involving a frequency-domain representation of two components of the signal. Recall we first defined the second-order product of interest, the cyclic periodogram, which is the product of two downconverted narrowband components of the signal under study $x(t)$,

$I_T^\alpha (t, f) = \displaystyle\frac{1}{T} X_T(t, f+\alpha/2) X_T^*(t, f-\alpha/2), \hfill (1)$

and then defined the spectral correlation function as the time average of the cyclic periodogram as the amount of processed data increases without bound, after which the spectral resolution ($B = 1/T$) is allowed to decrease to zero,

$S_x^\alpha (f) = \displaystyle\lim_{T\rightarrow\infty} \lim_{U\rightarrow\infty} \displaystyle\frac{1}{U} \int_{-U/2}^{U/2} I_T^\alpha(t, f) \, dt. \hfill (2)$

In these equations, I define the narrowband spectral component in the following way

$\displaystyle X_T(t, f) = \int_{t-T/2}^{t+T/2} x(u) e^{-i2 \pi f u}\, du. \hfill (3)$
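To make (1)-(3) concrete, here is a minimal numerical sketch of the time-smoothing idea in Python. All names and parameters are mine, not from any particular library, and for simplicity it correlates components at $f$ and $f-\alpha$ rather than $f \pm \alpha/2$, which just shifts the frequency axis by $\alpha/2$:

```python
import numpy as np

def scf_tsm(x, Nw, alpha_bin):
    """Rough time-smoothing estimate of the spectral correlation function.

    x         : complex baseband samples
    Nw        : block/FFT length, so the spectral resolution is about 1/Nw
    alpha_bin : cycle frequency expressed in FFT bins (alpha = alpha_bin/Nw)
    """
    nblocks = len(x) // Nw
    blocks = np.asarray(x[: nblocks * Nw]).reshape(nblocks, Nw)
    X = np.fft.fft(blocks, axis=1)   # X_T(t, f), one row per time block
    # average X(t, f) X*(t, f - alpha) over time (blocks); a true cycle
    # frequency yields a nonzero average, anything else averages toward zero
    return np.mean(X * np.conj(np.roll(X, alpha_bin, axis=1)), axis=0) / Nw
```

For a rectangular-pulse BPSK signal with $T_0 = 8$ samples per symbol and $N_w = 64$, `alpha_bin = 8` corresponds to the cycle frequency $1/T_0$ and yields a much larger output magnitude than a non-cycle frequency such as `alpha_bin = 5`.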

### Higher-Order Spectral Moments

Let’s simply look at a higher-order product of spectral components,

$\displaystyle S_x(\Delta, \mathbf{f}; n,m) = \left\langle \prod_{j=1}^n X_T^{(*)_j} (t, (-)_j f_j) \right\rangle, \hfill (4)$

where $\Delta = 1/T$ is the approximate bandwidth of the spectral components, and $\left\langle \cdot \right\rangle$ denotes an infinite time-average operation. Let me try to justify the appearance of the optional conjugations $(*)_j$ and corresponding optional negations $(-)_j$. Consider some arbitrary collection of time-series (signals) $\displaystyle \{ y_j(t) \}_{j=1}^n$. An $n$th-order spectral moment for these $n$ signals is simply

$\displaystyle S_y(\Delta, \mathbf{f}; n,m) = \left\langle \prod_{j=1}^n Y_{j,T} (t, f_j) \right\rangle. \hfill (5)$

Now if we make the particular choice $y_j(t) = x^{(*)_j} (t)$, we end up with (4). So the particular conjugation configuration determines which factors in the moment have conjugations and corresponding negated frequencies.

It isn’t too hard to show that the spectral moment function (4) is nonzero only for a certain subset of the $n$ frequencies $f_j$. The frequencies must sum to an $n$th-order cycle frequency (corresponding to $m$ optional conjugations) for $x(t)$,

$\displaystyle \sum_{j=1}^n f_j = \alpha. \hfill (6)$

That is, the sum of the $n$ frequencies has to equal a cycle frequency for the set $\displaystyle \{ y_j(t)\}_{j=1}^n = \{x^{(*)_j} (t)\}_{j=1}^n$.

The $n$th-order spectral moment function (SMF) proper is the function (4) in the limit as the bandwidth of the narrowband components approaches zero, $\Delta \rightarrow 0$ (equivalently, $T \rightarrow \infty$),

$\displaystyle S_x(\mathbf{f}; n,m) = \lim_{\Delta \rightarrow 0} S_x(\Delta, \mathbf{f}; n,m). \hfill (7)$

Restricting our attention to those subsets of $n$-dimensional frequency space for which the SMF is potentially nonzero, we see from (6) that the $n$ frequencies are not independent. For a given cycle frequency $\alpha$, the $n$ frequencies can be written as $n-1$ independently chosen frequencies together with an $n$th frequency that is determined by the others and $\alpha$,

$\displaystyle \mathbf{f} = [f_1\ f_2\ f_3\ \ldots\ f_{n-1}\ f_n] = [f_1\ f_2\ f_3\ \ldots\ f_{n-1}\ (\alpha - \sum_{j=1}^{n-1} f_j )]. \hfill (8)$

For convenience, let’s define the $\mathbf{g}$ vector as the first $n-1$ elements of $\mathbf{f}$,

$\displaystyle \mathbf{g} = [g_1\ g_2\ \ldots g_{n-1}] = [f_1\ f_2\ \ldots f_{n-1}]. \hfill (9)$

Then the reduced-dimension spectral moment function (RD-SMF) is defined by

$\displaystyle \bar{S}_x^\alpha (\mathbf{g}; n,m) = S_x([\mathbf{g}\ \ (\alpha - \sum_{j=1}^{n-1} g_j)]; n,m). \hfill (10)$
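The bookkeeping in (8)-(10) is trivial but worth making concrete; here is a hypothetical helper (the name and interface are mine) that maps $\mathbf{g}$ and $\alpha$ back to the full frequency vector $\mathbf{f}$:

```python
import numpy as np

def full_freq_vector(g, alpha):
    """Map the reduced-dimension vector g (length n-1) and the cycle
    frequency alpha to the full n-dimensional frequency vector f of (8),
    whose elements sum to alpha."""
    g = np.asarray(g, dtype=float)
    return np.append(g, alpha - g.sum())
```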

It turns out that this function is the $(n-1)$-dimensional Fourier transform of the reduced-dimension cyclic temporal moment function (RD-CTMF),

$\displaystyle \bar{S}_x^\alpha (\mathbf{g}; n,m) = \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty \bar{R}_x^\alpha (\mathbf{u}; n,m) e^{-i 2 \pi \mathbf{g} \mathbf{u}^\dagger} \, d\mathbf{u}, \hfill (11)$

although in many cases it contains impulses (Dirac delta functions) or even products of impulses, so it exists only in a formal mathematical sense (not in a usual-definition-of-a-function sense). This happens for order $n=2$ as well. Consider a signal that contains an additive sine-wave component, such as

$\displaystyle x(t) = s(t) + e^{i 2 \pi f_0 t} + w(t), \hfill (12)$

where the signal $s(t)$ has a continuous spectrum (no additive sine-wave components) and $w(t)$ is additive white Gaussian noise. Then we know that the spectrum (the ‘second-order spectral moment’ using our terminology in this post) must contain an impulse due to the sine wave with frequency $f_0$. So we’re used to impulses in our second-order spectral moments. For the higher orders, though, there are generally more impulses and there can be products of impulses in the spectral moment function.
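A quick numerical illustration of that second-order case, with hypothetical parameters of my choosing: the periodogram bin containing the tone grows in proportion to the data length, which is the discrete-time signature of an emerging impulse, while the noise bins do not grow.

```python
import numpy as np

rng = np.random.default_rng(1)
f0 = 0.125                                   # tone frequency, cycles/sample
for N in (1024, 4096):
    n = np.arange(N)
    x = np.exp(2j * np.pi * f0 * n) + rng.standard_normal(N)  # tone + noise
    I = np.abs(np.fft.fft(x)) ** 2 / N       # periodogram
    # the tone bin grows like N (an emerging impulse); noise bins stay O(1)
    print(N, I[int(f0 * N)].round(1), np.median(I).round(2))
```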

The reduced-dimension spectral moment and the spectral moment are related by an impulsive function,

$\displaystyle S_x(\mathbf{f}; n,m) = \sum_{\alpha} \bar{S}_x^\alpha (\mathbf{g};n,m) \delta(\mathbf{f}^\dagger \mathbf{1} - \alpha). \hfill (13)$

So the spectral moment is zero unless its $n$ frequencies sum to an $n$th-order cycle frequency (with $m$ conjugations). And in that case, it is impulsive.

Recalling our discussions of the higher-order temporal characterizations of cyclostationary signals, we see that the spectral moment (and its reduced-dimension version) doesn't help us clearly answer questions about the value of higher-order measures relative to second-order ones. We have to go to cumulants for that.

### Higher-Order Spectral Cumulants

Since the spectral moment was obtained using the infinite-time average of the finite-resolution spectral components (4) and then letting the spectral resolution $\Delta$ approach zero in (7), we start our spectral cumulant investigation by looking at the cumulant of the $n$ narrowband components,

$\displaystyle P_x(\Delta, \mathbf{f}; n,m) = \mbox{\rm Cumulant} \left\{ X_T^{(*)_j} (t, (-)_j f_j) \right\}_{j=1}^n. \hfill (14)$

Then we let the spectral resolution $\Delta \rightarrow 0$, which means that the narrowband spectral components are estimated with an increasingly long data segment (length $T$), and obtain the spectral cumulant function

$\displaystyle P_x(\mathbf{f}; n,m) = \lim_{\Delta \rightarrow 0} P_x(\Delta, \mathbf{f}; n,m). \hfill (15)$

Like all cumulants, this spectral cumulant can be written as a weighted sum, over partitions of the index set, of products of lower-order (spectral) moments. Each lower-order spectral moment in that sum is either zero or impulsive: impulsive if the subset of $\mathbf{f}$ corresponding to its partition element sums to a lower-order cycle frequency, and zero otherwise. Since the $n$th-order cumulant can always be written as the $n$th-order moment plus a weighted sum of products of lower-order moments, the spectral cumulant function equals the spectral moment function (and is nonzero) when the sum of the $n$ frequencies is an $n$th-order cycle frequency and none of the partition-induced subsets of $\mathbf{f}$ sums to a cycle frequency of the appropriate lower order.
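The partition bookkeeping in that paragraph can be made concrete with a small moment-to-cumulant calculator, a sketch (entirely my own illustration) of the general formula $\mathrm{Cum} = \sum_\pi (-1)^{|\pi|-1}(|\pi|-1)! \prod_{B \in \pi} \mathrm{E}[\prod_{i \in B} X_i]$ applied to sample moments:

```python
import math
import numpy as np

def set_partitions(items):
    """Yield every partition of a list as a list of blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        # put `first` into each existing block in turn...
        for i, block in enumerate(part):
            yield part[:i] + [[first] + block] + part[i + 1:]
        # ...or give it a block of its own
        yield [[first]] + part

def joint_cumulant(samples):
    """Sample joint cumulant of aligned data arrays, computed from sample
    moments summed over all set partitions with weight (-1)^(b-1) (b-1)!."""
    idx = list(range(len(samples)))
    total = 0.0
    for part in set_partitions(idx):
        term = (-1) ** (len(part) - 1) * math.factorial(len(part) - 1)
        for block in part:
            term *= np.mean(np.prod([samples[i] for i in block], axis=0))
        total += term
    return total
```

For zero-mean Gaussian data the fourth-order joint cumulant comes out near zero even though the fourth-order moment does not, which is exactly the higher-order selectivity that sends us to cumulants.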

I’m avoiding writing all that down because the mathematical notation gets really complex. You can see it all, though, in My Papers [5,6] and my downloadable dissertation (item 2 on the Downloads page).

The subsets of $n$-dimensional frequency space for which the sum of products of lower-order spectral moments is not zero are called the $\beta$-submanifolds. When we pick $n$th-order frequency vectors $\mathbf{f}$ that do not lie on the $\beta$-submanifolds, then we can estimate the spectral cumulant by estimating the spectral moment.

When we reduce the dimension of the spectral cumulant, as we did for the spectral moment, we obtain a similar characterization,

$\displaystyle P_x(\mathbf{f}; n,m) = \sum_\alpha \bar{P}_x^\alpha (\mathbf{g}; n,m) \delta (\mathbf{f}^\dagger \mathbf{1} - \alpha), \hfill (16)$

where $\displaystyle \bar{P}_x^\alpha (\mathbf{g}; n,m)$ is the $n$th-order cyclic polyspectrum. It can be shown that the cyclic polyspectrum is the $(n-1)$-dimensional Fourier transform of the $n$th-order reduced-dimension cyclic temporal cumulant function,

$\displaystyle \bar{P}_x^\alpha (\mathbf{g}; n,m) = \int_{-\infty}^\infty \cdots \int_{-\infty}^\infty \bar{C}_x^\alpha (\mathbf{u}; n,m) e^{-i 2 \pi \mathbf{g} \mathbf{u}^\dagger} \, d\mathbf{u}. \hfill (17)$

If the reduced-dimension cyclic cumulant is integrable, then the Fourier transform exists and has no impulsive components. This, in turn, is the case if the cumulant decays sufficiently rapidly in all directions in $(n-1)$-dimensional space. All that means is that the underlying signal $x(t)$ possesses a correlation property such that widely separated samples become uncorrelated as the separation grows, a property that almost all signals of practical interest possess.

### Putting it All Together

We’ve looked at the second-order temporal parameters of cyclostationary signals, the second-order spectral parameters, the higher-order temporal parameters, and finally in this post, we’ve looked at the higher-order spectral parameters. The relationships between the various parameters are summarized in the following figure:

The set of mathematical objects we’ve defined and evaluated encompasses many familiar objects from various parts of signal processing (and beyond), including the autocorrelation, power spectrum, cyclic autocorrelation, spectral correlation function, conjugate cyclic autocorrelation, conjugate spectral correlation function, conventional polyspectrum, conventional temporal cumulant, cyclic cumulant, and cyclic polyspectrum.

There are other objects of great interest to us when we apply these basic tools, such as the spectral coherence function, which do not appear in the golden diamond figure above.

We’ll discuss some of the connections to more commonly known mathematical objects in a future post, as a means of potentially harnessing previously gained intuition from practical signal-processing experience and formal study in stochastic-process theory.

### Estimators and the $\beta$-Submanifold Problem

There are at least three ways to estimate the cyclic polyspectrum (My Papers [6]).

The first is to estimate the reduced-dimension cyclic cumulant on a hypercube (many values of $\mathbf{u}$), then perform an $(n-1)$-dimensional Fourier transform.

The second is to estimate the spectral moment function, being careful to avoid the $\beta$-submanifolds, with a generalization of the frequency-smoothing method of spectral correlation estimation. This also requires estimation on a hypercube of dimension $n-1$, only this time the hypercube corresponds to $\mathbf{g}$.

The third is a generalization of the time-smoothing method of spectral correlation estimation. This method simply averages, over time, products of narrowband spectral components, being careful to avoid $\beta$-submanifolds. It can be used for arbitrary frequency vectors $\mathbf{g}$. That is, to obtain an estimate of the cyclic polyspectrum on some one-dimensional subspace of $\mathbf{g}$, one need not estimate anything on any hypercubes.

So the time-smoothing method is the only one of the three that has a low computational cost because it can be directly used to estimate the cyclic polyspectrum on a low-dimensional slice through its high-dimensional domain.
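Here is a rough sketch of that third (time-smoothing) approach for the common $(n,m) = (4,2)$ choice, with factors $X(f_1) X(f_2) X^*(f_3) X^*(f_4)$ and the constraint $f_1 + f_2 - f_3 - f_4 = \alpha$. Frequencies are in FFT bins, the normalization is one arbitrary-but-consistent choice, the $\beta$-submanifold avoidance is deliberately omitted to keep the sketch short, and the function name and interface are mine:

```python
import numpy as np

def smf_tsm_42(x, Nw, g2, g3, alpha_bin):
    """Time-smoothing sketch of an (n,m) = (4,2) spectral moment slice.

    Sweeps f1 over all Nw bins with f2 = g2 and f3 = g3 held fixed; the
    fourth bin is forced to satisfy f1 + g2 - g3 - f4 = alpha_bin.
    """
    nblocks = len(x) // Nw
    X = np.fft.fft(np.asarray(x[: nblocks * Nw]).reshape(nblocks, Nw), axis=1)
    out = np.zeros(Nw, dtype=complex)
    for f1 in range(Nw):
        f4 = (f1 + g2 - g3 - alpha_bin) % Nw      # dependent fourth frequency
        prod = X[:, f1] * X[:, g2] * np.conj(X[:, g3]) * np.conj(X[:, f4])
        out[f1] = prod.mean() / Nw**2             # one possible normalization
    return out
```

Only a one-dimensional sweep over $f_1$ is needed for a slice, which is the low-computational-cost property claimed above.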

### The Cyclic Polyspectrum for Digital QAM/PSK

The cyclic cumulant function and the cyclic polyspectrum are known (My Papers [5,6]) for the case of textbook digital QAM/PSK. Such signals have the model

$\displaystyle x(t) = \sum_{k=-\infty}^\infty a_k p(t - kT_0 -t_0) e^{i 2 \pi f_0 t + i\phi_0}. \hfill (18)$

The cycle frequencies for such signals take the form

$\displaystyle \alpha (n,m) = (n-2m)f_0 + k/T_0, \hfill (19)$

where $k$ is an integer. So we can speak of the “$(n,m,k)$ cyclic cumulant” or the “$(n,m,k)$ cyclic polyspectrum.” The cyclic cumulant and cyclic polyspectrum for a particular set of $(n,m,k)$ may be zero, however, depending on the nature of the constellation embodied by the sequence $a_k$, and on the nature of the pulse function $p(t)$. The values of the symbol cumulant function are tabulated in this post for several modulation types.
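These candidate cycle frequencies are easy to enumerate; here is a tiny hypothetical helper of my own, with $k$ limited to $|k| \le k_{\max}$:

```python
def qam_cycle_freqs(n, m, f0, T0, kmax):
    """Candidate (n,m) cycle frequencies (n - 2m) f0 + k/T0 for |k| <= kmax.
    Whether each is an actual cycle frequency depends on the constellation
    cumulant and the pulse, as described in the text."""
    return [(n - 2*m) * f0 + k / T0 for k in range(-kmax, kmax + 1)]
```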

The theoretical cyclic polyspectrum for $f_0 = \phi_0 = 0$ is given by

$\displaystyle \bar{P}_x^\alpha (\mathbf{g}; n,m) = \frac{C_a(n,m)}{T_0} P^{(*)_n}((-)_n[\alpha - \mathbf{g}\mathbf{1}^\dagger]) \prod_{j=1}^{n-1} P^{(*)_j} ((-)_j g_j)\, e^{i 2 \pi \alpha t_0}. \hfill (20)$

The function $C_a(n,m)$ is the cumulant for the set of variables $\left\{ a_k^{(*)_1}, a_k^{(*)_2}, \ldots, a_k^{(*)_n}\right\}$, and $P(f)$ is the Fourier transform of $p(t)$.
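Because the square-root raised-cosine transform $P(f)$ is real and even, the optional conjugations and negations in the theoretical formula above drop out of its magnitude, and for $f_0 = 0$ the expression can be evaluated directly. The following is my own sketch; the names and the unit-amplitude pulse-spectrum convention are assumptions, not taken from My Papers:

```python
import numpy as np

def srrc_spectrum(f, T0=8.0, beta=0.5):
    """Square-root raised-cosine P(f): sqrt of the raised-cosine spectrum."""
    f = np.abs(np.asarray(f, dtype=float))
    lo, hi = (1 - beta) / (2 * T0), (1 + beta) / (2 * T0)
    rc = np.where(f <= lo, T0,
         np.where(f >= hi, 0.0,
                  T0 / 2 * (1 + np.cos(np.pi * T0 / beta * (f - lo)))))
    return np.sqrt(rc)

def cp_theory(g, alpha, C_a, T0=8.0, t0=0.0):
    """Evaluate the theoretical cyclic polyspectrum for f0 = 0, exploiting
    the fact that the SRRC P(f) is real and even; g has length n-1."""
    g = np.asarray(g, dtype=float)
    val = C_a / T0 * srrc_spectrum(alpha - g.sum(), T0)
    val *= np.prod(srrc_spectrum(g, T0))
    return val * np.exp(2j * np.pi * alpha * t0)
```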

The post on signal processing operations can be used to find the generalization of (20) for the case where $f_0$ is not zero.

### Numerical Examples

We use the generalization of the time-smoothing method for our cyclic-polyspectrum estimator, and we evaluate (20) to obtain the theoretical function.

The signal is QPSK, with pulse function given by the square-root raised-cosine pulse with roll-off $0.5$ (excess bandwidth of $50$%). The signal uses $T_0 = 8$ and $f_0 = 0.015625 = 1/64$.
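A minimal generator for such a signal (my own sketch; the SRRC special-case values at $t = 0$ and $t = \pm 1/(4\beta)$ follow the standard closed form for the square-root raised-cosine pulse):

```python
import numpy as np

def srrc_pulse(beta=0.5, T0=8, span=10):
    """Square-root raised-cosine pulse, T0 samples/symbol, unit energy."""
    t = np.arange(-span * T0, span * T0 + 1) / T0    # time in symbol units
    with np.errstate(divide='ignore', invalid='ignore'):
        num = np.sin(np.pi*t*(1 - beta)) + 4*beta*t*np.cos(np.pi*t*(1 + beta))
        den = np.pi * t * (1 - (4 * beta * t) ** 2)
        p = num / den
    p[t == 0] = 1 + beta * (4 / np.pi - 1)           # remove 0/0 at t = 0
    p[np.isclose(np.abs(t), 1 / (4 * beta))] = (beta / np.sqrt(2)) * (
        (1 + 2/np.pi) * np.sin(np.pi/(4*beta)) +
        (1 - 2/np.pi) * np.cos(np.pi/(4*beta)))      # singular points
    return p / np.sqrt(np.sum(p ** 2))

def qpsk_signal(nsym=4096, T0=8, f0=1/64, beta=0.5, seed=0):
    """Textbook QPSK: SRRC pulses at T0 samples/symbol, carrier offset f0."""
    rng = np.random.default_rng(seed)
    syms = np.exp(1j * np.pi / 4) * (1j) ** rng.integers(0, 4, nsym)
    up = np.zeros(nsym * T0, dtype=complex)
    up[::T0] = syms                                  # impulse train of symbols
    x = np.convolve(up, srrc_pulse(beta, T0), mode='same')
    return x * np.exp(2j * np.pi * f0 * np.arange(len(x)))
```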

Let’s start with $n=2$. We have an excellent comparison between measured and theoretical cyclic polyspectra:

That second-order result helps to validate the estimator. Notice that the $k=\pm 1$ features for $(n,m) = (2, 1)$ are shifted away from the carrier, which has a value of zero. This is because the CP for order two is a frequency-shifted version of the spectral correlation function.

Now let’s turn to the various fourth-order cyclic polyspectra. Each is estimated for a range of $g_1$ and for specific values, as stated below, for the remaining frequencies $g_2$ and $g_3$. The fourth frequency is determined by $g_1, g_2,$ and $g_3$ and the cycle frequency $\alpha = (n-2m)f_0 + k/T_0$.

The $(4, 2, 0)$ cyclic polyspectrum for $g_2 = 0.0332$ and $g_3 = 0.0245$ is shown here:

The zero near $-0.05$ is due to a $\beta$-submanifold. The cyclic polyspectrum is not estimated for this frequency. One could optionally interpolate between neighboring points to fill this in.

The $(4, 2, 1)$ cyclic polyspectrum for $g_2 = 0.0332$ and $g_3 = 0.0245$ and the $(4,2,2)$ cyclic polyspectrum for $g_2 = 0.0777$ and $g_3 = 0.0654$ are shown next:

And finally the $(4, 0, 0)$ cyclic polyspectrum for $g_2 = 0.0332$ and $g_3 = 0.0245$ is shown here:

### Final Thoughts

The cyclic polyspectrum is difficult to interpret and to compute due to the presence of the $\beta$-submanifolds, which become more numerous as the number of lower-order cycle frequencies increases (think rectangular pulse $p(t)$ or DSSS) and as the order increases.

The application of the cyclic polyspectrum that is most appealing is channel estimation. Equation (20) is the key. The effect of a linear time-invariant propagation channel with transfer function $H(f) = \mathcal{F}[h(t)]$ is a new effective pulse

$\displaystyle P_*(f) = H(f)P(f). \hfill (21)$

Taking the logarithm of both sides transforms the product of pulse transforms involving $P_*(f)$ into a sum of those transforms. Evaluating the resulting equation for multiple choices of $\mathbf{g}$ leads to a linear system of equations in the logarithm of the channel transfer function, which might be invertible. I’ve made some progress on that front and hope to reveal it in a future post.
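As a sketch of how that linearization might look (my own notation, assuming $f_0 = 0$ and dropping the optional conjugations and negations, which is harmless here because the SRRC transform is real and even):

$\displaystyle \log \bar{P}_{x}^\alpha(\mathbf{g}; n,m) = \log \frac{C_a(n,m)}{T_0} + \sum_{j=1}^{n-1} \log P_*(g_j) + \log P_*(\alpha - \sum_{j=1}^{n-1} g_j) + i 2 \pi \alpha t_0,$

with $\log P_*(f) = \log H(f) + \log P(f)$, so each choice of $\mathbf{g}$ contributes one linear equation in samples of the unknown $\log H(f)$.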

As always, let me know if you find any errors in the post or leave a comment if you have experience with the cyclic polyspectrum. I’d like to hear from you!

## Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.