# Cyclic Temporal Cumulants

In this post I continue the development of the theory of higher-order cyclostationarity (My Papers [5,6]) that I began here. It is largely taken from my doctoral work (download my dissertation here).

This is a long post. To make it worthwhile, I’ve placed some movies of cyclic-cumulant estimates at the end. Or just skip to the end now if you’re impatient!

In my work on cyclostationary signal processing (CSP), the most useful tools are those for estimating second-order statistics, such as the cyclic autocorrelation, spectral correlation function, and spectral coherence function. However, as we discussed in the post on Textbook Signals, there are some situations (perhaps only academic; see my question in the Textbook post) for which higher-order cyclostationarity is required. In particular, a probabilistic approach to blind modulation recognition for ideal (textbook) digital QAM, PSK, and CPM requires higher-order cyclostationarity because such signals have similar or identical spectral correlation functions and PSDs. (Other high-SNR non-probabilistic approaches can still work, such as blind constellation extraction.)

Recall that in the post introducing higher-order cyclostationarity, I mentioned that one encounters a bit of a puzzle when attempting to generalize experience with second-order cyclostationarity to higher orders. This is the puzzle of pure sine waves (My Papers [5]). Let’s look at pure and impure sine waves, and see how they lead to the probabilistic parameters widely known as cyclic cumulants.

### Sine Waves Times Sine Waves Equals Sine Waves

Suppose we have a single complex-valued sine wave given by

$\displaystyle x(t) = A_0e^{i2\pi f_0 t + i \phi_0}, \hfill (1)$

where $A_0$ is the amplitude, $f_0$ is the frequency, and $\phi_0$ is the phase. If we consider a quadratic operation on this signal to produce the signal $y(t)$, we will find another sine wave,

$\displaystyle y(t) = x(t+\tau_1) x(t+\tau_2) = A_0^2 \ e^{i2\pi 2f_0 t + i2\phi_0} e^{i2\pi f_0 (\tau_1 + \tau_2)}. \hfill (2)$

That is, this particular quadratic operation leads to a sine wave with frequency $2f_0,$ phase $2\phi_0 + 2\pi f_0 (\tau_1 + \tau_2)$, and amplitude $A_0^2$. If we had included a conjugation on the second factor, $x^*(t+\tau_2)$, the result would be a trivial sine wave with zero frequency, amplitude $|A_0|^2$, and phase depending on $\tau_1$ and $\tau_2$.

Next consider a signal consisting of the sum of two sine waves with different frequencies,

$\displaystyle x(t) = A_0 e^{i2\pi f_0 t + i\phi_0} + A_1e^{i2\pi f_1 t + i\phi_1}. \hfill (3)$

The simple (no-conjugate) second-order operation $x(t+\tau_1)x(t+\tau_2)$ leads to

$\displaystyle y(t) = A_0^2 e^{i2\pi 2f_0 t + i2\phi_0 + i2\pi f_0(\tau_1 + \tau_2)}$

$\displaystyle + A_1^2 e^{i2\pi 2f_1 t + i2\phi_1 +i2\pi f_1(\tau_1 + \tau_2)}$

$\displaystyle + A_0 A_1 e^{i2\pi (f_0 + f_1)t + i(\phi_0 + \phi_1)} \left( e^{i2\pi(f_0\tau_1 + f_1\tau_2)} + e^{i2\pi(f_1\tau_1 + f_0\tau_2)} \right). \hfill (4)$

In other words, the quadratic transformation to $y(t)$ leads to sine-wave components with frequencies of $2f_0$, $2f_1$, and $f_0 + f_1$. Other quadratic operations, such as those with a single conjugated factor or two conjugated factors, lead to different sine-wave frequencies in $y(t)$, but they all produce at least one sine-wave component.
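This sine-waves-beget-sine-waves behavior is easy to verify numerically. Here is a minimal sketch (my own construction; the tone frequencies, amplitudes, phases, and lags are arbitrary choices, not values from the post) that forms the no-conjugate quadratic lag product of a two-tone signal and measures the sine-wave amplitude $\langle y(t) e^{-i2\pi f t}\rangle$ at a few candidate frequencies:

```python
import numpy as np

# Two-tone signal (hypothetical parameters chosen for illustration).
f0, f1 = 0.05, 0.08          # normalized tone frequencies
A0, A1 = 1.0, 0.7            # amplitudes
phi0, phi1 = 0.3, -1.1       # phases

N = 4096
t = np.arange(N)

def x(t):
    return (A0 * np.exp(1j * (2*np.pi*f0*t + phi0))
            + A1 * np.exp(1j * (2*np.pi*f1*t + phi1)))

# Quadratic (no-conjugate) lag product y(t) = x(t + tau1) x(t + tau2).
tau1, tau2 = 3, 7
y = x(t + tau1) * x(t + tau2)

def fourier_coeff(sig, f):
    """Estimate the sine-wave amplitude <sig(t) exp(-i 2 pi f t)>."""
    return np.mean(sig * np.exp(-2j * np.pi * f * t))

c_2f0 = fourier_coeff(y, 2*f0)      # doubled-frequency term: magnitude near A0^2 = 1
c_sum = fourier_coeff(y, f0 + f1)   # cross term: clearly nonzero
c_off = fourier_coeff(y, 0.31)      # arbitrary other frequency: near zero
print(abs(c_2f0), abs(c_sum), abs(c_off))
```

The lag product contains finite-strength sine waves only at $2f_0$, $2f_1$, and $f_0+f_1$; the coefficient at the arbitrary frequency is residual leakage that shrinks as the averaging length grows.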

### Impure Sine Waves

Now recall that our working definition of cyclostationarity is that at least one non-trivial sine wave can be generated by subjecting the signal of interest to a quadratic nonlinearity, even when the signal itself contains no such sine-wave component. So how does the previous discussion of ‘sine waves begetting sine waves’ apply?

A natural extension of second-order theory to higher orders is to simply increase the order of the moment that we considered when defining the cyclic autocorrelation function. The usual second-order lag product

$\displaystyle x(t+\tau/2)x^*(t-\tau/2)$

can be generalized to

$\displaystyle x(t + \tau_1)x^*(t + \tau_2)$

and then we could consider the third-order or fourth-order versions

$\displaystyle x(t+\tau_1)x^*(t+\tau_2) x(t+\tau_3)$

and

$\displaystyle x(t+\tau_1)x^*(t+\tau_2) x(t+\tau_3) x^*(t+\tau_4)$.

It turns out that the odd-order moments are zero for almost all communication signals, so going forward we’ll consider only even-order moments, such as the moment corresponding to the fourth-order lag product. That is, we consider the expected value of the lag product,

$\displaystyle R_x(t, \boldsymbol{\tau}; 4) = E \left[x(t+\tau_1)x^*(t+\tau_2) x(t+\tau_3) x^*(t+\tau_4)\right]. \hfill (5)$

We know that this fourth-order moment contains sine-wave components because the second-order lag product $x(t+\tau_1)x^*(t+\tau_2)$ contains sine waves, the second-order lag product $x(t+\tau_3)x^*(t+\tau_4)$ contains sine waves, and sine waves multiplied by sine waves results in sine waves. In particular, suppose $x(t)$ possesses a second-order non-conjugate cycle frequency $\alpha$ when the lags are $\tau_1$ and $\tau_2$. It also must possess the non-conjugate second-order cycle frequency of zero (it possesses power), so that the sine wave with frequency $\alpha$ in $x(t+\tau_1)x^*(t+\tau_2)$ multiplies the sine wave with frequency zero in $x(t+\tau_3)x^*(t+\tau_4)$, producing a sine wave with frequency $\alpha$ in the fourth-order lag product (for suitable values of $\tau_3$ and $\tau_4$, such as zero).

This kind of analysis can be performed for other choices of conjugations and lag-vector values $\tau_i$. The conclusion is that the sine-wave components in a higher-order lag product are due, at least in part, to the interactions between lower-order sine-wave components, because ‘sine waves times sine waves equals sine waves.’ So is there a component of any of the fourth-order sine waves that is NOT due to these lower-order interactions?

We’ll call the sine waves in the lag product (the moment sine waves) impure sine waves, because they will often contain products of lower-order components. Any component of the impure sine waves that does not arise from the interactions of the lower-order sine waves will be called pure sine waves.

### Purifying Sine Waves

To purify the impure sine waves in some $n$th-order lag product, we’ll need to identify the impure culprits and subtract them. Let’s start with a simple case where the signal itself does, in fact, contain additive sine-wave components. Then the second-order lag product must contain sine waves, and we can set about purifying the second-order lag product, which simplifies our notation. Then we’ll extend the ideas to higher-order lag products.

Suppose our signal $x(t)$ consists of our old friend the rectangular-pulse BPSK signal plus two additive sine-wave components

$\displaystyle x(t) = s_{bpsk}(t) + A_0 e^{i2\pi 0.05 t + i\phi_0} + A_1 e^{i2\pi (-0.05) t + i\phi_1}. \hfill (6)$

Recall that the BPSK signal has a symbol rate of $0.1$ and a carrier frequency of $0.05$ (normalized-frequency parameters). If we consider the non-conjugate cycle frequencies of $x(t)$, we see that the BPSK signal contributes the bit-rate harmonics $\alpha_{bpsk} = k/T_0 = 0.1k$, and that the two sine waves contribute $\alpha_{sine} = \{0, 0.1, -0.1\}$. For the conjugate cycle frequencies, the BPSK signal contributes

$\displaystyle \beta_{bpsk} = 2f_c + k/T_0 = 0.1 + 0.1k = \{\ldots, -0.5, -0.4, \ldots, 0.3, 0.4, 0.5, \ldots \}, \hfill (7)$

and the sine waves contribute $\beta_{sine} = \{0, 0.1, -0.1\}$. Therefore, for both the non-conjugate and conjugate cases, the BPSK signal and the sine waves share cycle frequencies. However, only the BPSK signal produces second-order sine-wave components that are not the result of products of first-order sine waves.

One way to proceed with the purification in this case is to just subtract off the sine waves from $x(t)$ and then compute the cyclic autocorrelation or spectral correlation function of the result. But that method doesn’t generalize well to the case where there are no first-order sine waves. So let’s take a more systematic approach.

The lower-order (first-order) sine wave frequencies are $\pm 0.05$, and the sine-wave created frequencies in the second-order lag product are $\{0, 0.1, -0.1\}$. Considering the non-conjugate second-order cycle frequency of 0, we see that it has a component due to the BPSK signal and a component due to the tones. If the lag product in question is

$\displaystyle L_x(t, \boldsymbol{\tau}; 2, 1) = x(t+\tau_1)x^*(t+\tau_2), \hfill (8)$

then the sine-wave induced component has complex amplitude

$\displaystyle |A_0|^2 e^{i2\pi 0.05 (\tau_1 - \tau_2)} + |A_1|^2 e^{-i2\pi 0.05 (\tau_1 - \tau_2)}. \hfill (9)$

This is just the sum of the complex amplitudes for each sine wave multiplied by itself. So if we extract the zero-frequency sine wave from the second-order lag product, and then subtract from it the complex amplitude of each unique product of first-order sine waves with resultant frequency zero, we’ll be left with the complex amplitude due solely to the BPSK signal. The lower-order product terms will be removed, and we’ll have purified the second-order sine wave:

$P_x(\boldsymbol{\tau}; 2, 1) = \left\langle L_x(t, \boldsymbol{\tau}; 2, 1) e^{-i 2 \pi 0 t} \right\rangle - \left\langle x(t+\tau_1)e^{-i 2 \pi 0.05 t} \right\rangle \left\langle x^*(t+\tau_2)e^{-i 2 \pi (-0.05) t} \right\rangle$

$\displaystyle - \left\langle x(t+\tau_1)e^{-i2\pi (-0.05) t} \right\rangle \left\langle x^*(t+\tau_2) e^{-i2\pi 0.05 t} \right\rangle. \hfill (10)$

The idea is to subtract all unique products of the lower-order sine waves such that the sum of the sine-wave frequencies for each product is equal to the specified second-order cycle frequency. Here we are targeting the second-order cycle frequency of $0$, so we subtract the product of first-order sine waves with frequencies $0.05$ (in $x(t+\tau_1)$) and $-0.05$ (in $x^*(t+\tau_2)$), and also the product with frequencies $-0.05$ (in $x(t+\tau_1)$) and $0.05$ (in $x^*(t+\tau_2)$).

Suppose the signal $x(t)$ actually contained $K$ sine-wave components with frequencies $\{f_k\}_{k=1}^K$. Let the second-order non-conjugate cycle frequency of interest be $\alpha$. Then we seek to subtract off lower-order products for each pair of frequencies $F_1$ and $F_2$ such that

$\displaystyle F_1 + (-)F_2 = \alpha, \hfill (11)$

where the optional minus sign on $F_2$ corresponds to the optional conjugation of the second factor.

In our example above, the frequency set is $\{0.05, -0.05\}$ and $\alpha = 0$. Then we have the two sets of pairs $(0.05, 0.05)$ and $(-0.05, -0.05)$. For $\alpha = 0.1$, we have the single pair $(0.05, -0.05)$, which makes sense because there is only one product of sine waves that has frequency $0.1$, and that is the sine wave with frequency $0.05$ in the unconjugated (first) factor, and $-0.05$ in the conjugated factor.
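This pair-selection rule is simple to mechanize. Here is a small sketch (my own; `matching_pairs` is a hypothetical helper name, not from the post) that enumerates the $(F_1, F_2)$ pairs satisfying (11) for the non-conjugate lag product $x(t+\tau_1)x^*(t+\tau_2)$, where the conjugated factor contributes $-F_2$:

```python
def matching_pairs(freqs, alpha, tol=1e-12):
    """Return (F1, F2) pairs drawn from the first-order frequency set
    `freqs` such that F1 - F2 = alpha; F2 sits in the conjugated factor
    x*(t + tau2) and therefore contributes -F2 to the frequency sum."""
    return [(F1, F2) for F1 in freqs for F2 in freqs
            if abs(F1 - F2 - alpha) < tol]

print(matching_pairs([0.05, -0.05], 0.0))  # [(0.05, 0.05), (-0.05, -0.05)]
print(matching_pairs([0.05, -0.05], 0.1))  # [(0.05, -0.05)]
```

The two calls reproduce the pair sets described above for $\alpha = 0$ and $\alpha = 0.1$.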

### Partitions of the Index Set $\{1, 2, \ldots, n\}$

A key phrase in the development above is “unique products of lower-order sine waves.” How do we characterize such unique products? For the general case of an $n$th-order moment, we are thinking about the sine-wave components of the $n$th-order lag product

$\displaystyle L_x(t, \boldsymbol{\tau};n,m) = \displaystyle \prod_{j=1}^n x^{(*)_j}(t+\tau_j), \hfill (12)$

where $(*)_j$ denotes an optional conjugation, and there are $m$ factors that are conjugated. We want to account for every possible product of lower-order sine waves whose frequencies sum to the cycle frequency of interest $\alpha$. So there could be a sine wave in $x^{(*)_1}(t+\tau_1) x^{(*)_2}(t+\tau_2)$ that multiplies a sine wave in $x^{(*)_3}(t+\tau_3) \ldots x^{(*)_n}(t+\tau_n)$ such that the two frequencies sum to $\alpha$, and maybe a sine wave in $x^{(*)_1}(t+\tau_1) x^{(*)_3}(t+\tau_3)x^{(*)_5}(t+\tau_5)$ that multiplies a sine wave in $x^{(*)_2}(t+\tau_2) x^{(*)_4}(t+\tau_4)x^{(*)_6}(t+\tau_6) \ldots x^{(*)_n}(t+\tau_n)$, or possibly any other partitioning of the $n$ involved factors $x^{(*)_j}(t+\tau_j)$.

So we have to consider all the ways of partitioning the set of factors, and then for each one of these partitions, we have to determine whether or not the lag products for the partitions contain sine-wave components that sum to the desired cycle frequency $\alpha$. Whew. That is indeed complicated.

But the purification idea leads to the idea of partitions of a set of $n$ elements, in this case the $n$ elements $x^{(*)_j}(t+\tau_j)$. For simplicity, we can associate these elements with the index set $I_n = \{1, 2, \ldots, n\}$. A partition of $I_n$ is a collection of non-empty subsets $\{\nu_k\}_{k=1}^p$ whose union is $I_n$ and whose pairwise intersections are all the empty set. This is just what we need to characterize all the possible ways lower-order sine waves can combine to contribute to the lag-product sine wave.

For $n=2$, we have $I_2 = \{1, 2\}$ and the partitions are

$q_1 = \{1, 2\}; p = 1 \hfill (13)$

$q_2 = \{\{1\}, \{2\}\}; p = 2 \hfill (14)$

and there are no others (no distinct ones, anyway, because $\{\{2\}, \{1\}\}$ is the same partition as $\{\{1\}, \{2\}\}$). For $n=3$, the partitions are

$q_1 = \{1, 2, 3\}; p = 1 \hfill (15)$

$q_2 = \{\{1\}, \{2, 3\}\}; p = 2 \hfill (16)$

$q_3 = \{\{2\}, \{1, 3\}\}; p = 2 \hfill (17)$

$q_4 = \{\{3\}, \{1, 2\}\}; p = 2 \hfill (18)$

and

$q_5 = \{\{1\}, \{2\}, \{3\}\}; p = 3 \hfill (19)$

The number of partitions increases rapidly with the index size $n$. (The number of partitions of an $n$-element set is the Bell number, which can be computed using Stirling numbers of the second kind.) For example, there are $15$ partitions for $n=4$.

Fortunately, any partition containing an element of odd size, such as a singleton $\{3\}$ or a three-element set $\{2, 3, 4\}$, can usually be ignored in the computation of a pure sine wave, because the typical signal under study has no first-order sine-wave components and no non-zero odd-order moments. So, typically for $n=4$, the partitions that matter are only

$\{1, 2, 3, 4\}$

$\{\{1, 2\}, \{3, 4\}\}$

$\{\{1, 3\}, \{2, 4\}\}$

$\{\{1, 4\}, \{2, 3\}\}$

The partitions of $I_n$ can be easily computed recursively.
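For concreteness, here is one standard recursion (my own sketch, not code from the post): every partition of $\{2, \ldots, n\}$ yields partitions of $\{1, \ldots, n\}$ by inserting element $1$ into each existing block in turn, or by letting it form its own singleton block.

```python
def partitions(elements):
    """Recursively generate all set partitions of the list `elements`."""
    if len(elements) == 1:
        yield [elements]
        return
    first, rest = elements[0], elements[1:]
    for smaller in partitions(rest):
        # Insert `first` into each existing block in turn...
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        # ...or let `first` form its own singleton block.
        yield [[first]] + smaller

parts = list(partitions([1, 2, 3, 4]))
print(len(parts))  # the Bell number B_4 = 15

# Keep only partitions whose blocks all have even size -- the ones that
# matter for signals with zero odd-order moments.
even_parts = [p for p in parts if all(len(b) % 2 == 0 for b in p)]
print(len(even_parts))  # the 4 partitions listed above
```

Filtering for even-size blocks recovers exactly the four partitions of $\{1,2,3,4\}$ listed above.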

Getting back to purifying sine waves, we have arrived at the idea that to find the pure sine-wave component of some $n$th-order lag product, we need to subtract from that lag product all products of lower-order sine-wave components that have frequencies that sum to the desired frequency, and this can be done by using the partitions of the index set. To ensure that we don’t end up subtracting off the same sine wave component multiple times, we wish to subtract from the lag product all products of lower-order pure sine-wave components.

We can get all the pure $n$th-order sine waves at once by subtracting all the products of pure lower-order sine waves from the sum of all $n$th-order sine waves. That is, let $E^\alpha[\cdot]$ represent the general sine-wave extraction operator,

$\displaystyle E^\alpha[x(t)] = \sum_{\beta} \left\langle x(t) e^{-i2\pi\beta t} \right\rangle e^{i2\pi\beta t}, \hfill (20)$

where the sum is over all frequencies $\beta$ for which the signal $x(t)$ contains a sine-wave component with non-zero amplitude,

$\left\langle x(t) e^{-i2\pi\beta t} \right\rangle \neq 0. \hfill (21)$

In other words, $E^\alpha [\cdot]$ extracts all the impure sine waves from its argument.

Our purified sine waves are then given by

$\displaystyle P_x(t,\boldsymbol{\tau}; n,m) = E^\alpha[L_x(t,\boldsymbol{\tau};n,m)] - \sum_{P, p\neq 1} \prod_{j=1}^p P_x(t,\boldsymbol{\tau_j}; n_j, m_j), \hfill (22)$

where the sum is over all $n$th-order partitions of the index set $I_n$, and each partition has size $2 \leq p \leq n$ with elements $\{\nu_j\}_{j=1}^p$ having sizes $|\nu_j| = n_j$ and numbers of optional conjugations $m_j$.

### The Shiryaev-Leonov Moment-Cumulant Formula

In the theory of random variables and processes, the $n$th-order moment function is defined as the expected value of the product of $n$ variables. For simplicity, let’s consider $n$ random variables $X_j$ for $j=1, 2, \ldots, n$. Then the $n$th-order moment is

$\displaystyle R_{\boldsymbol X} (n) = E[\prod_{j=1}^n X_j]. \hfill (23)$

The moment also appears in the series expansion of the characteristic function,

$\displaystyle \Phi_{\boldsymbol X} (\boldsymbol{\omega}) = E\left[e^{i\boldsymbol{\omega}^\prime {\boldsymbol X}}\right], \hfill (24)$

which we won’t get into here. However, the cumulants appear in the series expansion of the logarithm of the characteristic function. The moments we can compute directly in practice by time or ensemble averaging, but how do we compute cumulants (leaving aside for now why we would even want to, but see this post if you can’t help yourself)? Fortunately for us, the relationship between the $n$th-order moment and the $n$th-order cumulant was derived long ago by Shiryaev and Leonov (My Papers [13], Downloads).

The $n$th-order cumulant is given by a nonlinear combination of lower-order moments,

$\displaystyle C_{\boldsymbol X} (n) = \sum_P \left[ (-1)^{p-1} (p-1)! \prod_{j=1}^p R_{{\boldsymbol X}_j} (n_j) \right], \hfill (25)$

where the sum is over all unique partitions $P$ of the index set $I_n$, each having $p$ elements $\{\nu_j\}_{j=1}^p$. The notation ${\boldsymbol X}_j$ indicates the $n_j = |\nu_j|$ elements $\{X_k: k \in \nu_j\}$. The reverse formula, which gives the moment in terms of the lower-order cumulants, is

$\displaystyle R_{\boldsymbol X} (n) = \sum_P \left[ \prod_{j=1}^p C_{{\boldsymbol X}_j} (n_j) \right], \hfill (26)$

which should look familiar. It is the same formula as the pure-sine-waves formula. To see this more clearly, break up the sum over the partitions into one term for $p = 1$ and one term for all the other partitions,

$\displaystyle R_{\boldsymbol X} (n) = C_{\boldsymbol X} (n) + \sum_{P, p\neq 1} \left[ \prod_{j=1}^p C_{{\boldsymbol X}_j} (n_j) \right], \hfill (27)$

or

$\displaystyle C_{\boldsymbol X} (n) = R_{\boldsymbol X} (n) - \sum_{P, p\neq 1} \left[ \prod_{j=1}^p C_{{\boldsymbol X}_j} (n_j) \right]. \hfill (28)$
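As a sanity check on (25), the following sketch (my own test, not from the post) computes the fourth-order cumulant of a BPSK-like $\pm 1$ symbol variable by combining estimated moments over all partitions of the index set. The answer should be near $E[X^4] - 3E[X^2]^2 = 1 - 3 = -2$, since the odd-order moments are essentially zero.

```python
import numpy as np
from math import factorial

def partitions(elements):
    """Recursively generate all set partitions of the list `elements`."""
    if len(elements) == 1:
        yield [elements]
        return
    first, rest = elements[0], elements[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=200_000)   # BPSK-like symbol variable

def moment(block):
    # E[prod of the variables indexed by `block`]; here every variable
    # is the same X, so only the block size matters.
    return np.mean(X ** len(block))

def cumulant(n):
    """Combine lower-order moments per the moment-to-cumulant formula (25)."""
    total = 0.0
    for P in partitions(list(range(n))):
        p = len(P)
        term = (-1.0) ** (p - 1) * factorial(p - 1)
        for block in P:
            term *= moment(block)
        total += term
    return total

c4 = cumulant(4)
print(c4)   # close to the theoretical value -2
```

The pair partitions contribute the familiar $-3E[X^2]^2$ term; the partitions with odd-size blocks contribute only small estimation noise because the odd-order sample moments are nearly zero.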

### Cyclic Cumulants

The $n$th-order temporal moment function is simply

$\displaystyle R_x(t, \boldsymbol{\tau};n,m) = E^\alpha \left[ \prod_{j=1}^n x^{(*)_j}(t+\tau_j) \right], \hfill (29)$

and the $n$th-order cyclic temporal moment function is a Fourier coefficient of the (poly)periodic moment,

$R_x^\alpha(\boldsymbol{\tau};n,m) = \left\langle R_x(t,\boldsymbol{\tau};n,m) e^{-i2\pi \alpha t} \right\rangle. \hfill (30)$

The $n$th-order temporal cumulant function is given by the now-familiar combination of products of lower-order temporal moment functions,

$\displaystyle C_x (t, \boldsymbol{\tau};n,m) = \sum_{P} (-1)^{p-1}(p-1)! \prod_{j=1}^p R_x(t, \boldsymbol{\tau}_j; n_j, m_j). \hfill (31)$

Now the $n$th-order cyclic temporal cumulant function (cyclic cumulant) is just a Fourier coefficient of the temporal cumulant function

$\displaystyle C_x^\alpha (\boldsymbol{\tau}; n,m) = \sum_{P} \left[ (-1)^{p-1}(p-1)! \sum_{\boldsymbol{\beta}^\dagger \boldsymbol{1}=\alpha} \prod_{j=1}^p R_x^{\beta_j} (\boldsymbol{\tau}_j; n_j, m_j) \right]. \hfill (32)$

The vector of cycle frequencies $\boldsymbol{\beta} = [\beta_1\ \beta_2 \ \ldots \beta_p]$ is the vector of cyclic temporal moment cycle frequencies, and they must sum to the cyclic cumulant cycle frequency $\alpha.$
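To make Equation (32) concrete, here is a minimal estimation sketch (my own construction, not the post's estimator): baseband rectangular-pulse BPSK, all lags zero, no conjugations, target cycle frequency $\alpha = 0$, with the bit rate, samples-per-bit, and noise level being assumed parameter choices. Gaussian noise is added to illustrate the cumulant's tolerance to Gaussian contamination; the theoretical cyclic-cumulant value for this setup is the symbol cumulant $-2$.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10                      # samples per bit (assumed)
nbits = 20_000
N = T * nbits
t = np.arange(N)

# Baseband rectangular-pulse BPSK plus white Gaussian noise.
bits = rng.choice([-1.0, 1.0], size=nbits)
x = np.repeat(bits, T) + 0.5 * rng.standard_normal(N)

def cyc_avg(sig, alpha):
    """Estimate the Fourier coefficient <sig(t) exp(-i 2 pi alpha t)>."""
    return np.mean(sig * np.exp(-2j * np.pi * alpha * t))

# Fourth-order cyclic moment at alpha = 0, all lags zero, m = 0.
R4 = cyc_avg(x ** 4, 0.0)

# Second-order cyclic moments at the bit-rate harmonics k/T. With all
# lags zero the three pair partitions of {1,2,3,4} give identical terms,
# hence the factor of 3 below.
ks = range(-5, 6)
R2 = {k: cyc_avg(x ** 2, k / T) for k in ks}
lower = sum(R2[k] * R2[-k] for k in ks)   # pairs with beta1 + beta2 = 0
C4 = R4 - 3 * lower

print(C4.real)   # theory: -2, unaffected by the Gaussian noise
```

For this real-valued signal only the $k=0$ harmonic of $x^2(t)$ actually survives, but the loop over $k/T$ shows where the other lower-order cycle frequencies would enter. Note that the noise inflates both $R4$ and the second-order moments, yet its contribution cancels in the combination, which is exactly the Gaussian-tolerance property of cumulants.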

So that’s it. The cyclic cumulants are really pure sine-wave amplitudes, and they can be computed by combining cyclic-moment amplitudes, which are easy to estimate directly from data. And since cyclic cumulants derive from cumulants, they inherit all the well-known properties of cumulants, such as tolerance to Gaussian contamination. Most importantly for CSP, ‘cumulants accumulate,’ that is, the cumulant for the sum of $N$ independent cyclostationary signals is the sum of the cumulants for the $N$ individual signals. This property does not hold in general for the moments.

As far as I know, the pure-sine-waves derivation is the only mathematical problem whose solution is a cumulant. All other uses of the cumulant that I know about use it like a found tool. But I could very well be wrong about this, and if so, please leave a comment below.

### An Example: Rectangular-Pulse BPSK

The following two videos show estimated fourth-order cyclic temporal cumulant magnitudes for rectangular-pulse BPSK with $\tau_4$ fixed at zero; each video frame corresponds to a different value of $\tau_3$. The first video is the cyclic cumulant for $\alpha = 0$ and the second is for the BPSK bit rate $\alpha = 0.1$. In both videos, two conjugations are used $(m=2)$.

### An Example: SRRC BPSK

The fourth-order cyclic cumulant analysis is repeated for BPSK with square-root raised-cosine pulses in the following two videos.

## 56 thoughts on “Cyclic Temporal Cumulants”

1. Hi, Dr. Spooner,

Could you clarify why “the odd-order moments are zero for almost all communication signals”? If we use the signal model x(t) = A0*exp(j2pift + phase), then the 3rd-order moment x(t+tau1)x*(t+tau2)x(t+tau3) is not equal to 0.

Thanks,
Best,

Andy

• Hey Andy. So I was talking about “communication signals”, which is pretty vague. Here in your model, the signal is x(t) = A_0 exp(i 2 pi f t + i P), where A_0 is the amplitude, f is the frequency, and P is the phase. But where is the random variable(s) that conveys the information to be communicated? That random variable will determine the moments, including the third-order moment. For BPSK, P is actually a function of time that is a pulse-amplitude modulated waveform with random (+/- 1) pulse amplitudes, which leads to the zero-valued odd-order moments.

What communication signal did you have in mind? (Is A_0, f, or P a random variable, or some combination)?

• Andy says:

Ok, I got it. You talked about the signal, which is a modulated signal, such as a BPSK or QPSK signal, not a continuous waveform, such as a sine wave.

I have another question. What is the exact difference between the higher-order cumulant and the higher-order cyclic cumulant? Is the higher-order cumulant used for stationary signals, and the higher-order cyclic cumulant used for cyclostationary signals?

Thanks,

Best,

Andy

• That’s essentially right. For stationary signals, the temporal cumulant is time-invariant. For cyclostationary signals, it is periodically time-varying. The Fourier coefficients of the Fourier-series expansion of that time-varying cumulant are the cyclic cumulants. However, one is free to apply the stationary-signal formula for a cumulant to cyclostationary signals. The resulting cumulant value is typically not equal to the alpha=0 cyclic cumulant, though! It gets complicated fast.

2. kawtar zerhouni says:

Thank you for the clear post.

I have a question regarding the set of cyclic temporal moment cycle frequencies, beta.

If we consider the rectangular-pulse BPSK signal, and we want to estimate its 4th-order cyclic cumulant for alpha = 0:
Knowing that its 2nd-order cycle frequencies are (+/-) k/T, beta can be {0, 0} or {-k/T, k/T}, which sum to 0.
Then we have to subtract all the 2nd-order cyclic moments for this set of frequencies from the 4th-order cyclic moment for alpha = 0.

Is it correct?

• Hi Kawtar. I think your statement is on the right track. For C(4,0,0) (an example of C(n, m, k)), the target cycle frequency is 0, the order is 4, and the number of conjugated factors is zero. So for each partition of {1, 2, 3, 4} that matters, such as {{1, 2}, {3, 4}}, we’ll have to look at the second-order cyclic moments of the form C(2, 0, k). Those have cycle frequencies of (n – 2m)fc + k/T = 2fc + k/T. So if there are any two CFs 2fc + k1/T and 2fc + k2/T that sum to zero, we need to include those. There will be, for sure, if fc = 0, such as k1 = 1 and k2 = -1.

Agree?

• Dennis says:

Thank you for your post. Continuing on your reply to kawtar, there could be (possibly) an infinite amount of combinations of k1 and k2 which would sum to zero. Do we need to take only a finite amount of k1 and k2? What are good criteria to decide the amount of values of k1 and k2 that need to be taken into consideration? Is this based on resulting value of the mixing product, in the sense that for higher values of k1 and k2 the contribution of the mixing product of impure sine waves is neglible?

• Thank you for the comment Dennis.

For textbook rectangular-pulse PSK/QAM, or textbook FSK, and others, the occupied signal bandwidth is theoretically infinite. So in those cases, the theory would indeed imply that you have to combine an infinite number of lower-order cyclic moments to compute a cyclic cumulant. So, in pencil-and-paper work, you should do that.

In practice, considering digital signal processing, our input signal bandwidth is limited. So if you’re unsure of the maximum of k1 and k2, you know you should stop when the value of the cycle frequency |k1/T0| cannot be supported by the sampling rate. That is, only a finite number of harmonics can exist in the data as a result of the filtering implied by capturing or simulating the signal.

In more realistic cases, where the signal clearly occupies only a fraction < 1 of the sampling band, we can isolate that band, and its width determines the largest possible cycle frequency that we need to consider. (Of course, this occupied bandwidth increases as you subject the data to nonlinearities like |.|^4, which is why the number of significant cycle frequencies grows with cumulant order n for, say, PSK). So in automated cyclic-cumulant estimators, you can take a look at the input signal bandwidth to determine how to bound k1 and k2.

Sound reasonable?

3. quyangdl says:

Hi, Dr. Spooner,

Why do we need to purify sine waves? Do impure sine waves still have some meaning?

Best,

Andy

• Good question. Try reading the post that introduces higher-order cyclostationarity.

Sine-wave generation through the action of a nonlinear operation is fundamental to cyclostationarity. When we consider using higher-order nonlinearities (such as (.)^4), we see that the lower-order sine-wave components can contribute to the generated sine waves, but we ask how much is new to that particular nonlinearity? What can we gain over and above the lower-order contributions? That kind of question leads to the desire to purify the sine waves arising from the nonlinearity. So we call the latter sine waves impure and we attempt to purify them, leading to pure sine waves. (This is an abuse of terminology. All sine waves are “pure” sine waves in the normal sense of the word. What we really mean is “pure nth-order sine waves” and “impure nth-order sine waves”, but it gets tiring carrying around the “nth-order” all the time.)

4. AN says:

Hey Dr. Spooner,

Great post. Can you please share the code for generating the plots?
Thanks!

5. AN says:

Thanks for your reply. I am implementing this in Python according to equation (32). I wanted to know what parameter in equation (32) I should vary to obtain plots like the ones in your video. Is it just the tau values? Also, let’s say we don’t have the bit-rate information. How can I still find the cyclic cumulants?

Yes, the plots in the videos were obtained by varying the tau values. One of the four, say tau_4, is fixed at zero. For another, say tau_3, I pick a value, then allow tau_1 and tau_2 to vary, obtaining a single frame of the movie.

In general, you need to know all the lower-order cycle frequencies to estimate the nth-order cumulant. You need to know them all so that you can be sure to include all the relevant combinations of lower-order cyclic moments implied by (32). To know all the lower-order cycle frequencies, you need to know the parameters of the signal that enter the cycle-frequency formula alpha = (n-2m)fc + k*fsym and you need to know the modulation type. For n=4, the lower-order (second-order) cyclic moments for BPSK depend on both the carrier and the bit rate. For QPSK, the lower-order (second-order) cyclic moments depend only on the bit rate.

So if you don’t know the bit rate (more generally, the symbol rate), you can’t estimate and combine the proper lower-order cyclic moments. You can still find the cyclic cumulant if you first estimate the bit rate, though. We can use the spectral correlation function and the SSCA for that purpose.

• AN says:

Alright. I had one more query. In this post https://cyclostationary.blog/2015/11/05/introduction-to-higher-order-cyclostationarity/ you provide plots of cyclic cumulants for various textbook signals. Let’s take the example of QPSK. For some values [n,m] there seems to be no cumulant. The (4,1) column in QPSK is empty. Does that mean that (4,1) cumulant doesn’t exist for QPSK? Or has it not been calculated for that plot?

• You are right: for QPSK (and the QPSK-like QAM signals, such as 8QAM, 16QAM, 64QAM, etc.), the (4, 1, k) cyclic cumulant is zero. This is a consequence of the value of the cumulant for the symbol random variable. The (4,1) cumulant for a RV uniformly distributed on {i, 1, -i, -1} is zero. See the cumulant table in the post on the cyclostationarity of digital QAM/PSK. Let me know if you don’t agree with this answer!

• AN says:

Seeing the constellation plots in the post, I observe that the points are within +/-1 +/- j. So the cumulant values mentioned in this post correspond to these constellation plots, right? Would the cumulant values change if the constellation points were not constrained to this +/-1 +/- j region?

The values of the cyclic cumulants depend explicitly on the cumulant of the symbol random variable (see Equation (4) in the DQAM/PSK post). So if the constellation points change, so does the cyclic cumulant. However, if the constellation points for signal 1 are rotated versions of those for signal 2, then the cyclic cumulant magnitudes are the same; only the cyclic cumulant phase changes. So the QPSK signal with {1, -1, i, -i} is really the same as the QPSK signal with {(sqrt(2)/2 + i*sqrt(2)/2), (-sqrt(2)/2 + i*sqrt(2)/2), (-sqrt(2)/2 - i*sqrt(2)/2), (sqrt(2)/2 - i*sqrt(2)/2)}.

It’s pretty easy to compute the value of Ca(4, 1) for whatever constellation points you like using normal discrete probability. For small constellations, you can easily do this with pencil and paper. For larger constellations with irregular shapes (like 128 QAM with a cross-shaped constellation), you can easily program it.
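As a sketch of that "easily program it" remark (my own code, not from the post), the zero-mean fourth-order/one-conjugation cumulant of a uniformly distributed symbol variable a reduces to cum(a, a, a, a*) = E[a^3 a*] - 3 E[a^2] E[|a|^2], since the three pair partitions contribute identical terms:

```python
import numpy as np

def c41(constellation):
    """Fourth-order, one-conjugation cumulant of a zero-mean symbol
    variable uniformly distributed over `constellation`:
    cum(a, a, a, a*) = E[a^3 a*] - 3 E[a^2] E[|a|^2]."""
    a = np.asarray(constellation, dtype=complex)
    m21 = np.mean(a * np.conj(a))        # E[|a|^2]
    m20 = np.mean(a * a)                 # E[a^2]
    m31 = np.mean(a**3 * np.conj(a))     # E[a^3 a*]
    return m31 - 3 * m20 * m21

print(c41([1, -1]))            # BPSK: 1 - 3 = -2
print(c41([1, 1j, -1, -1j]))   # QPSK: 0, as discussed above
```

For QPSK both E[a^2] and E[a^3 a*] vanish, which is why the (4, 1) column is empty; for BPSK the same formula gives -2.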

6. Rachel says:

Hi, Dr. Spooner,
In order to calculate the 4th-order cyclic cumulant, can we first calculate the 4th-order temporal cumulant according to C40(t,tau1,tau2,tau3) = E{x(t)x(t+tau1)x(t+tau2)x(t+tau3)} - E{x(t)x(t+tau1)}E{x(t+tau2)x(t+tau3)} - E{x(t)x(t+tau2)}E{x(t+tau1)x(t+tau3)} - E{x(t)x(t+tau3)}E{x(t+tau1)x(t+tau2)}, and then take the Fourier transform of C40(t,tau1,tau2,tau3)? I am not sure whether the specific formula above is right. If it is right, does the 4th-order cyclic cumulant have the same relation with the 4th-order cyclic moment and the 2nd-order cyclic moments, except that the new cycle frequency of the cumulant is the sum of the old frequencies of the non-zero 2nd-order cyclic moments? Can we replace the operator E{alpha} by E{.} when simulating?
Thanks.

In order to calculate the 4th-order cyclic cumulant, can we first calculate the 4th-order temporal cumulant according to C40(t,tau1,tau2,tau3) = E{x(t)x(t+tau1)x(t+tau2)x(t+tau3)} - E{x(t)x(t+tau1)}E{x(t+tau2)x(t+tau3)} - E{x(t)x(t+tau2)}E{x(t+tau1)x(t+tau3)} - E{x(t)x(t+tau3)}E{x(t+tau1)x(t+tau2)}, and then take the Fourier transform of C40(t,tau1,tau2,tau3)?

Yes, if you use the expectation operator E{.} correctly. In the fraction-of-time probability framework, E{.} in your formula should be the operator $E^{\alpha}\{\cdot\}$, the sine-wave extraction operator. In the stochastic framework, E{.} is the normal expectation, but one must be careful not to include phase-randomizing random variables (such as a uniformly distributed symbol-clock phase variable), else cycloergodicity is lost.

If it is right, does the 4th-order cyclic cumulant have the same relation with the 4th-order cyclic moment and the 2nd-order cyclic moments, except that the new cycle frequency of the cumulant is the sum of the old frequencies of the non-zero 2nd-order cyclic moments?

I don’t understand this question. Specifically, I don’t understand “has the same relation”.

Can we replace the operator $E^\alpha\{\cdot\}$ by $E\{\cdot\}$ when simulating?

No. When you are performing an estimation task using a sample path of the process (a realization of the random signal), you must either use the sine-wave extraction (FOT expectation) to obtain the time-varying moments and cumulants, or focus on a single cyclic cumulant and use the cyclic-cumulant/cyclic-moment formula (Eq (32) in the cyclic-cumulant post).

When you focus on a single cyclic cumulant, you need only estimate the cyclic moments that have cycle frequencies that sum to the cyclic-cumulant cycle frequencies. When you use the TCF, you need to estimate all the lower-order TMFs, which means accounting for all lower-order (moment; impure) cycle frequencies.

Does that help?

• Rachel says:

When computing the fourth-order and second-order temporal moments (with the delay vector $\tau = 0$) of a signal such as BPSK or QPSK, or of the sum of the two, I use the stochastic expectation operator $E\{\cdot\}$ directly instead of $E^\alpha\{\cdot\}$. The reason is that when the signal takes the analytic form $x(t) = a(t)e^{j2\pi f_c t}$, where $a(t)$ is the complex envelope, $x(t)^4$ or $x(t)^2$ is periodic. So the fourth- and second-order temporal moments are equal to $E\{x(t)^4\}$ and $E\{x(t)^2\}$. Then we can compute the temporal cumulant function from them and Fourier transform it to get the cyclic cumulant function. In the frequency domain we find several nonzero impulses. I am not sure whether this is right. The reason I do not compute the cyclic moments first and then plug them into the cumulant-moment formula is that that method calculates only a single cyclic cumulant at a single frequency (such as $f = 4f_c$) at a time; the process is relatively complicated and does not produce a visual graph. I think the first method makes it easier to recognize a signal that is the sum of two different signals, because we can detect the peaks in the frequency domain and compare them with theoretical values.
So, do you agree with me?

• When computing the fourth-order and second-order temporal moments (with the delay vector $\tau = 0$) of a signal such as BPSK or QPSK, or of the sum of the two, I use the stochastic expectation operator $E\{\cdot\}$ directly instead of $E^\alpha\{\cdot\}$.

So here when you use the word “computing,” do you mean calculating with a pencil and paper? If so, it seems fine.

The reason is that when the signal takes the analytic form $x(t) = a(t)e^{j2\pi f_c t}$, where $a(t)$ is the complex envelope, $x(t)^4$ or $x(t)^2$ is periodic. So the fourth- and second-order temporal moments are equal to $E\{x(t)^4\}$ and $E\{x(t)^2\}$.

I don’t think $x(t)^4$ is periodic here unless $a(t)$ has some special properties, such as it is periodic or it is a random binary or quaternary pulse train with rectangular pulses. In general, $x(t)^4$ for your signal model contains both a periodic component and a random, aperiodic component. The expectation (whether stochastic or FOT) delivers the periodic component because the random aperiodic component generally has zero average value.
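To make this concrete, write the fourth power of the signal model as

$\displaystyle x(t)^4 = a(t)^4 e^{j2\pi (4f_c) t}.$

The factor $a(t)^4$ generally splits into an (almost) periodic component, $E^\alpha\{a(t)^4\}$, plus a zero-mean aperiodic residual; only for special envelopes (for example, $a(t) \in \{\pm 1\}$ with rectangular pulses, so that $a(t)^4 \equiv 1$) does the residual vanish and $x(t)^4$ become purely periodic.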

I think the confusion between us is that I don’t know if you are talking about doing theory with math, pencil, and paper, or are talking about doing estimation, using computer programs and a sample path of the signal.

I know of no shortcuts to getting to the TCF. If you want the TCF, you need to consider all of its Fourier components (the CTCFs, or cyclic cumulants). Either you get those one by one, following the CTCF/CTMF moments-to-cumulants formula, or you get each TMF, one by one, combining all the CTMFs, and then you combine the TMFs using the TCF/TMF moments-to-cumulants formula.

Whatever estimation method you use, you should see that the TCF for BPSK is different from the TCF for QPSK. The TCF for the sum of BPSK and QPSK is the sum of the TCFs for each, so that TCF is also different from the BPSK TCF and different from the QPSK TCF.

Are we converging?

• Rachel says:

In fact, my purpose is to estimate the cyclic cumulant of the sum of BPSK and QPSK using a sample path of the sum, where the signals take the analytic form $x(t) = a(t)e^{j2\pi f_c t}$ and $a(t)$ includes the signal amplitude, the transmitted symbols, and the signaling pulse shape. I want to estimate $C_x(0; 4, 0)$ by using fft(E(x.^4) – 3*E(x.^2).*E(x.^2)), because I find it is a common feature in modulation recognition. The figure does show impulses at $4f_{c1} \pm R_s$ for BPSK and $4f_{c2} \pm R_s$ for QPSK, where $R_s$ is the symbol rate, and the amplitudes at $4f_{c1}$ and $4f_{c2}$ are about 1.3 and 0.65. Is this estimation method right? And I am not sure whether E(x.^4) – 3*E(x.^2).*E(x.^2) is a correct estimate of the temporal cumulant of the signal $x$ in MATLAB, where E() is the stochastic expectation. I think it is not necessary to use the true $E^\alpha\{\cdot\}$ to estimate the TMF and TCF of common communication signals such as MPSK and MQAM, because it seems somewhat complicated in practice.

• I want to estimate $C_x(0; 4, 0)$ by using fft(E(x.^4) – 3*E(x.^2).*E(x.^2))

I think you are on the right track, but you are mixing up estimates formed from data with probabilistic parameters derived from mathematical models. You can’t implement $E(x.^4)$ in MATLAB unless you create an ensemble within MATLAB! I bet that within MATLAB you are just doing fft(x.^4 – 3*(x.^2).*(x.^2)) followed by peak-picking.

• But be careful here. fft(x.^4 – 3*(x.^2).*(x.^2)) is not the cyclic cumulant. Since (x.^2).*(x.^2) = x.^4, the expression is just –2*x.^4, so its peaks are just the locations of the cycle frequencies of the cyclic moment. We are having trouble converging on the actual difficulty here because of the distraction of the E[] operator. I don’t think there is any shortcut to computing the cyclic cumulants: you have to properly estimate the cyclic moments and combine them properly too.
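A quick numerical check of the identity underlying this point (a Python/NumPy sketch of the MATLAB expression; the test signal is arbitrary):

```python
import numpy as np

# Because (x.^2).*(x.^2) equals x.^4 element by element, the proposed
# "cumulant" estimator is just -2 times the FFT of the fourth power,
# so its peaks can only be cyclic-moment cycle frequencies.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024) + 1j*rng.standard_normal(1024)
lhs = np.fft.fft(x**4 - 3*(x**2)*(x**2))
rhs = -2*np.fft.fft(x**4)
print(np.allclose(lhs, rhs))  # True
```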

7. Dennis says:

Dear Mr. Spooner

I’m still having some difficulties understanding equations (31) and (32) from this post and your dissertation. Is the cyclic temporal cumulant function (32) just a Fourier transform of the temporal cumulant function (31)? If yes, does this relation also hold for an arbitrary choice of the tau vector [tau1 … taun]?

So for example, If I look at one of your answers on a previous post for calculating the (4,2) cyclic cumulant:

fft(x1*x2*conj(x3)*conj(x4) – Rx(t, [t1, t2]; 2, 0)*Rx(t, [t3, t4]; 2, 2) – Rx(t, [t1, t3]; 2, 1)*Rx(t, [t2, t4]; 2, 1) – Rx(t, [t1, t4]; 2, 1)*Rx(t, [t2, t3]; 2, 1))

Does this derivation mean that this equation holds for every choice of the delay vector tau? In other words, does equation (32) imply that we only end up with pure sine waves by combining lower cyclic moment amplitudes, even for an arbitrary choice of the delay vector tau?

I also have a question related to this post:
https://cyclostationary.blog/2016/08/22/cyclostationarity-of-digital-qam-and-psk/

I have implemented the CTCF in Matlab where I tried to reproduce the plot where you choose the parameters to be: alpha=f_sym, (n,m)=(4,2), tau=[tau1 0 0 0]. Intuitively, for BPSK, I would expect that for tau1=0 we have a cyclic cumulant magnitude of 0 at alpha=f_sym. The reason is that by calculating the (4,2) cyclic cumulant we lose the phase information of the signal. By looking at the spectral information (Fourier components) I will only see a peak at alpha=0. For tau1 not equal to 0 I indeed see cycle frequencies at alpha=k*f_sym. Is my reasoning correct?

I’m an undergraduate student working on cyclostationary signal processing and modulation classification. I tried to implement some of the work from this blog and your dissertation in Matlab and your work is really helping me. Thanks for that!

Dennis

• Awesome comment and questions Dennis!

is the cyclic temporal cumulant function (32) just a Fourier transform of the temporal cumulant function (31)?

No, the CTCF is a Fourier-series component (coefficient) of the TCF. If we represent the (almost) periodic TCF in a Fourier series, we end up with the usual sum involving weighted complex-valued sine waves. The complex-valued Fourier coefficient of one of those sine waves is the CTCF, and the Fourier-series frequency for the sine wave is called the (pure) cycle frequency.
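In symbols, with $\beta$ denoting a pure $n$th-order cycle frequency, the relation is the generalized Fourier-series pair

$\displaystyle C_x(t, \boldsymbol{\tau}) = \sum_{\beta} C_x^\beta(\boldsymbol{\tau}) e^{i 2\pi \beta t},$

$\displaystyle C_x^\beta(\boldsymbol{\tau}) = \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} C_x(t, \boldsymbol{\tau}) e^{-i 2\pi \beta t}\, dt.$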

If yes, does this relation also hold for an arbitrary choice of the tau vector [tau1 … taun]?

Yes, (31) and (32) hold for arbitrary delay vectors $\boldsymbol{\tau}$. You have to be careful to combine the correct lower-order CTMFs: make sure that you are using the correct subset of $\{\tau_1, \tau_2, \ldots, \tau_n\}$ for each partition.

So for example, If I look at one of your answers on a previous post for calculating the (4,2) cyclic cumulant:

fft(x1*x2*conj(x3)*conj(x4) – Rx(t, [t1, t2]; 2, 0)*Rx(t, [t3, t4]; 2, 2) – Rx(t, [t1, t3]; 2, 1)*Rx(t, [t2, t4]; 2, 1) – Rx(t, [t1, t4]; 2, 1)*Rx(t, [t2, t3]; 2, 1))

Does this derivation mean that this equation holds for every choice of the delay vector tau?

Where did I write that? I forget. Sorry! But the first term in that expression has to have the correct delays $t_1, t_2, t_3, t_4$. In other words, $x_1$ has to be a delayed/advanced version of $x$ with delay/advance $t_1$, etc. The expression isn’t enough by itself to determine correctness because you didn’t include the definitions of $x_1, x_2, x_3, x_4$ (how they relate to $x$).

I have implemented the CTCF in Matlab where I tried to reproduce the plot where you choose the parameters to be: alpha=f_sym, (n,m)=(4,2), tau=[tau1 0 0 0]. Intuitively, for BPSK, I would expect that for tau1=0 we have a cyclic cumulant magnitude of 0 at alpha=f_sym. The reason is that by calculating the (4,2) cyclic cumulant we lose the phase information of the signal. By looking at the spectral information (Fourier components) I will only see a peak at alpha=0. For tau1 not equal to 0 I indeed see cycle frequencies at alpha=k*f_sym. Is my reasoning correct?

But why would “losing the phase information of the signal” necessarily result in a zero-valued cyclic cumulant?

Remember that the plot you are referring to corresponds to PSK/QAM with square-root raised-cosine pulses. If you use rectangular pulses, then yes, for a delay vector of all zeros, the cyclic cumulant for cycle frequency equal to the bit rate is zero. (Textbook signals ruining everything, again.)
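For what it’s worth, the rectangular-pulse claim is easy to check numerically. Here is a hedged Python/NumPy sketch (baseband rectangular-pulse BPSK; the function name and the finite set of candidate second-order cycle frequencies are my own assumptions) of the zero-delay (4, 2) cyclic-cumulant estimate at the symbol rate:

```python
import numpy as np

rng = np.random.default_rng(0)
sps, nsym = 8, 1000
x = np.repeat(rng.choice([-1.0, 1.0], nsym), sps)   # rect-pulse BPSK: x^2 = 1
n = np.arange(len(x))
alpha = 1.0/sps                                      # symbol-rate cycle frequency

def cyc_moment(y, a):
    """Finite-record estimate of the cyclic moment of y at cycle frequency a."""
    return np.mean(y * np.exp(-2j*np.pi*a*n))

# Zero-delay (4,2) cumulant estimate: fourth-order cyclic moment minus the
# three pair-product corrections, summed over second-order cycle frequencies
# (harmonics of the symbol rate for this signal).
betas = np.arange(-4, 5)/sps
M42 = cyc_moment(np.abs(x)**4, alpha)
corr = sum(cyc_moment(x**2, b)*cyc_moment(np.conj(x)**2, alpha - b)
           + 2*cyc_moment(np.abs(x)**2, b)*cyc_moment(np.abs(x)**2, alpha - b)
           for b in betas)
C42 = M42 - corr
print(abs(C42) < 1e-6)   # True: the cumulant vanishes because x^2 is constant
```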

So, when you were “looking at the spectral information” of your BPSK signal, what signal, exactly, did you use, and what transformation, exactly, did you apply to the signal before Fourier analysis?

8. Rao Ramesh says:

Hello Dr. Spooner, Can you please send me the code for generating fourth order cyclic cumulant for a signal. Thanks!

9. Andy says:

Hi, Dr Spooner

I feel confused about Equation (32) when estimating the $n = 6$, $m = 0$ order cyclic cumulant.
One of the partitions is $\{\{1,2\}, \{3,4\}, \{5,6\}\}$. Based on Equation (32), we need to add its term to the cyclic-cumulant result, because $(-1)^{p-1} = (-1)^{3-1} = 1$, which is positive. However, why do we need to add it instead of subtracting it?

Thanks

• You appear to believe that the product of three second-order moments should be subtracted instead of added. Why do you believe that?

In the post, I go through the idea of purifying a sine wave that is generated by an $n$th-order homogeneous nonlinearity. I argue that the $n$th-order purified sine wave should be the impure sine wave with all possible combinations of pure lower-order sine-wave components subtracted off. That formula is recognized as a cumulant. We can then use the previously established moment/cumulant formulas to express the cumulant in terms of only impure lower-order sine waves. That leads to the formula (32).

I believe the intuitive answer is that the different products of lower-order impure (moment) sine waves contain several instances of lower-order pure (cumulant) sine waves, so to properly account for the pure sine waves in the subtraction process, you’ll have to weight the various products accordingly.
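To see the bookkeeping explicitly, here is a small Python sketch (the partition-generating helper is mine, not from the post) that enumerates the set partitions of $\{1, \ldots, 6\}$ and tabulates the weight $(-1)^{p-1}(p-1)!$ from (32) by block-size pattern; the pair partition $\{\{1,2\},\{3,4\},\{5,6\}\}$ indeed gets weight $+2$:

```python
from math import factorial

def set_partitions(items):
    """Recursively generate all set partitions of a list (as lists of blocks)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):          # join `first` to an existing block
            yield part[:i] + [[first] + part[i]] + part[i+1:]
        yield [[first]] + part              # or give `first` its own block

weights = {}
for part in set_partitions(list(range(1, 7))):
    p = len(part)                            # number of blocks in the partition
    sizes = tuple(sorted(len(b) for b in part))
    weights[sizes] = (-1)**(p - 1) * factorial(p - 1)

print(weights[(2, 2, 2)])   # 2: three blocks, weight (-1)^2 * 2! = +2
print(weights[(3, 3)])      # -1: two blocks, weight (-1)^1 * 1! = -1
```

The sign alternates with the number of blocks $p$, but the factorial factor grows too, which is exactly the weighting needed to remove every combination of lower-order pure sine waves without double-counting.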
