# More on Pure and Impure Sine Waves

Gaussian and binary signals are in some sense at opposite ends of the pure-impure sine-wave spectrum.

Remember when we derived the cumulant as the solution to the pure $n$th-order sine-wave problem? It sounded good at the time, I hope. But here I describe a curious special case where the interpretation of the cumulant as the pure component of a nonlinearly generated sine wave seems to break down.

Let’s consider a very simple cyclostationary signal, the baseband binary pulse-amplitude modulated (PAM) signal with rectangular pulses. This signal is a building block for a slightly more complicated and slightly more realistic signal, the rectangular-pulse BPSK signal. The latter builds on the binary PAM signal by adding a non-zero carrier offset frequency, a symbol-clock phase, and a carrier phase. But overall the binary PAM signal is pretty much the same thing as a BPSK signal.

Here is a graphical representation of our binary PAM signal:
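For readers who want to regenerate a signal like the one pictured, here is a minimal sketch (Python; the values of $T_0$ and the symbol count are illustrative choices of mine, not parameters from the post):

```python
import numpy as np

# Minimal sketch: rectangular-pulse binary PAM.
# T0 (samples per symbol period) and num_symbols are illustrative choices.
rng = np.random.default_rng(0)
T0 = 10
num_symbols = 100
symbols = 2 * rng.integers(0, 2, num_symbols) - 1  # i.i.d. equally likely +/-1
s = np.repeat(symbols, T0)  # hold each symbol for a full period: rectangular pulse

print(s[:2 * T0])  # first two symbol intervals
```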

We know this signal has non-conjugate cycle frequencies equal to harmonics of $1/T_0$, and since it is real-valued, its conjugate cycle frequencies are identical to its non-conjugate cycle frequencies.

Let’s now take a look at the fourth-order parameters for the signal. First, consider the fourth-order temporal moment function with all delays equal, $\tau_j = D$,

$\displaystyle R_s(t, \boldsymbol{\tau}; 4,m) = E \left[ \prod_{j=1}^4 s^{(*)_j} (t+D) \right]. \hfill (1)$

The value of $m$ doesn’t matter here because $s(t)$ is real-valued, so we have

$\displaystyle R_s(t, \boldsymbol{\tau}; 4,m) = E \left[ \prod_{j=1}^4 s(t+D) \right] = E \left[ s^4(t + D) \right] = E \left[ 1 \right] = 1, \hfill (2)$

where the final steps follow because $s(t) = \pm 1$, so that $s^4(t+D) \equiv 1$.

This moment function is trivially periodic, and so has a Fourier series expansion with a single coefficient. In other words, it is equivalent to a sine wave with zero frequency, unit amplitude, and zero phase. There is no symbol-rate fourth-order cyclic moment (no impure sine wave with non-zero frequency), nor any other non-trivial cyclic moment, for the case of all delays being equal. Moreover, if we were to look at the second-order temporal moments with equal delays, we would find that they, too, are equal to one. So no non-trivial lower-order sine waves exist either.
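These equal-delay claims are easy to confirm numerically. A quick check (Python; parameters are illustrative) that every equal-delay lag product of this $\pm 1$ signal is identically one:

```python
import numpy as np

# Check: for a +/-1 rectangular-pulse PAM signal, the equal-delay
# lag products of orders 2 and 4 are identically one, so the
# corresponding time-varying moments are the trivial constant 1.
rng = np.random.default_rng(1)
T0, num_symbols = 10, 1000
s = np.repeat(2 * rng.integers(0, 2, num_symbols) - 1, T0)

print(np.all(s**2 == 1), np.all(s**4 == 1))  # True True
```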

What about when the delays are not all equal?

Let’s now consider the delay set $[\tau_1\ \ \tau_2\ \ \tau_3\ \ \tau_4] = [0\ \ 0\ \ -T_0/2\ \ -T_0/2]$.

$\displaystyle R_s(t, \boldsymbol{\tau}; 4,m) = E \left[ s(t)s(t) s(t-T_0/2)s(t-T_0/2) \right]. \hfill (3)$

Now we know that $s(t)s(t) = 1$ and $s(t-T_0/2)s(t-T_0/2) = 1$ so that, once again,

$\displaystyle R_s(t, \boldsymbol{\tau}; 4,m) = E \left[ 1 \right] = 1. \hfill (4)$

Here, again, there are no impure sine waves with non-zero frequencies such as the symbol rate $1/T_0$. What are the pure sine waves? Are they also absent?

Because the signal is real-valued, and we are considering only the fourth-order cumulant, our general moments-to-cumulants formula reduces to

$\displaystyle C_s(t, [0\ 0\ -T_0/2\ -T_0/2]; 4,m) = R_s(t, [0\ 0\ -T_0/2\ -T_0/2]; 4,m)$

$\displaystyle -R_s(t, [0\ 0]; 2, \cdot)R_s(t, [-T_0/2\ -T_0/2]; 2, \cdot) - 2[R_s(t, [0\ -T_0/2]; 2, \cdot)]^2. \hfill (5)$

From our previous analysis (for example (4)), we can replace several of these terms with unity:

$\displaystyle R_s(t, [0\ 0\ -T_0/2\ -T_0/2]; 4,m) = 1, \hfill (6)$

$\displaystyle R_s(t, [0\ 0]; 2, \cdot) = 1,\hfill (7)$

$\displaystyle R_s(t, [-T_0/2\ -T_0/2]; 2, \cdot) = 1, \hfill (8)$

so that our temporal cumulant becomes

$\displaystyle C_s(t, [0\ 0\ -T_0/2\ -T_0/2]; 4,m) = 1 - (1)(1) - 2[R_s(t, [0\ -T_0/2]; 2, \cdot)]^2, \hfill (9)$

or

$\displaystyle C_s(t, [0\ 0\ -T_0/2\ -T_0/2]; 4,m) = - 2[R_s(t, [0\ -T_0/2]; 2, \cdot)]^2. \hfill (10)$

The question then becomes: What is $R_s(t, [0\ -T_0/2]; 2, \cdot)$? When $t$ and $t - T_0/2$ fall within the same symbol interval, the lag product $s(t) s(t - T_0/2)$ is exactly one; when they fall in adjacent intervals, it is the product of two independent, equally likely $\pm 1$ symbols and so has zero mean. The periodic component of the lag product is therefore a square wave with period $T_0$ and duty cycle $1/2$, alternating between one and zero. We can see that graphically:

So the second-order moment $R_s(t, [0\ -T_0/2]; 2, \cdot)$ is equal to that square wave (green lines), which can be expressed as a Fourier series. Let’s do that:

$\displaystyle c_j = \frac{1}{T_0} \int_{0}^{T_0} r_{T_0/2}(t-T_0/4) e^{-i 2 \pi j t/T_0}\, dt, \hfill (11)$

where $r_{T_0/2}(t)$ is a unit-height rectangle of width $T_0/2$ centered at $t = 0$, so that

$\displaystyle c_j = \frac{1}{T_0} \int_{0}^{T_0/2} (1) e^{-i 2 \pi j t / T_0} \, dt. \hfill (12)$

For $j = 0$, $c_j = c_0 = 1/2$. For all other $j$,

$\displaystyle c_j = \frac{1}{T_0} \left. \frac{e^{-i 2 \pi j t/T_0}}{-i 2 \pi j/T_0} \right|_{0}^{T_0/2}. \hfill (13)$

Evaluating at the limits gives $c_j = (1 - e^{-i\pi j})/(i 2 \pi j) = e^{-i \pi j/2} \sin(\pi j/2)/(\pi j)$, so we end up with

$\displaystyle c_j = \frac{1}{2} e^{-i \pi j/2}\, \mathrm{sinc}(j/2), \ \ \ j \neq 0, \hfill (14)$

where $\mathrm{sinc}(x) \triangleq \sin(\pi x)/(\pi x)$.

In particular, $|c_1| = 0.5\, \mathrm{sinc}(1/2) = 1/\pi \approx 0.318$. This means that the cyclic cumulant $C_s^{1/T_0} ([0\ 0 \ -T_0/2\ -T_0/2]; 4, m)$ is not zero. That is, there is a pure fourth-order sine wave with frequency equal to the symbol rate $1/T_0$ for the specified delay vector. But we already saw that there is no impure fourth-order sine wave with frequency equal to the symbol rate for that same delay vector.

Which is strange.
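Strange, but checkable. The following sketch (Python; $T_0$ and the symbol count are illustrative choices of mine) estimates $R_s(t, [0\ -T_0/2]; 2, \cdot)$ by period-$T_0$ synchronized averaging of the lag product, forms the temporal cumulant via (10), and confirms that its Fourier coefficient at the symbol rate is decidedly nonzero:

```python
import numpy as np

# Estimate R_s(t, [0, -T0/2]; 2) by period-T0 synchronized averaging of
# the lag product, then form the temporal cumulant C(t) = -2 R(t)^2 of
# (10) and look at its Fourier coefficient at the symbol rate 1/T0.
rng = np.random.default_rng(2)
T0, num_symbols = 10, 20000
s = np.repeat(2 * rng.integers(0, 2, num_symbols) - 1, T0)

half = T0 // 2
lag_product = s[half:] * s[:-half]                 # s(t) * s(t - T0/2)
n = (len(lag_product) // T0) * T0
R = lag_product[:n].reshape(-1, T0).mean(axis=0)   # one period of R_s(t, .)
C = -2.0 * R**2                                    # one period of (10)

cc = np.fft.fft(C) / T0                            # Fourier coefficients of C(t)
print(np.round(R, 2))  # ~[1 1 1 1 1 0 0 0 0 0]: the 0/1 square wave
print(abs(cc[1]))      # ~0.65, close to the continuous-time value 2|c_1| = 2/pi
```

The second-order moment estimate comes out as the predicted square wave, and the cyclic cumulant at $1/T_0$ has magnitude near $2|c_1| = 2/\pi \approx 0.637$ (the coarse $T_0 = 10$ sampling makes the discrete value slightly larger).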

Let’s consider the moments and cumulants of a Gaussian signal for a moment. Recall that the $n$th-order cumulants of a Gaussian random variable, set of jointly Gaussian random variables, or a Gaussian process (signal) are zero for $n \ge 3$. So if we have a Gaussian signal $x(t)$, we can write the following equation

$\displaystyle C_x(t, \boldsymbol{\tau}; n,m) = \sum_{P = \{\nu_k\}_{k=1}^p} (-1)^{p-1}(p-1)! \prod_{j=1}^p R_{x_{\nu_j}}(t, \boldsymbol{\tau}_{\nu_j}; n_j, m_j), \hfill (15)$

where the outer sum is the sum over all the distinct partitions of the index set $\{1, 2, \ldots, n\}$, each partition consists of $p$ disjoint subsets of $\{1, 2, \ldots, n\}$ denoted by $\nu_k$, the union of the $\nu_k$ is the index set $\{1, 2, \ldots, n\}$, $|\nu_k| = n_k$, and $m_k$ of the optional conjugations fall in $\nu_k$. (All that is review from this post.) Now consider $n \ge 3$; for our Gaussian $x(t)$ we have

$\displaystyle 0 = \sum_{P = \{\nu_k\}_{k=1}^p} (-1)^{p-1}(p-1)! \prod_{j=1}^p R_{x_{\nu_j}}(t, \boldsymbol{\tau}_{\nu_j}; n_j, m_j), \hfill (16)$

or

$\displaystyle R_{x}(t, \boldsymbol{\tau}; n,m) = - \sum_{P = \{\nu_k\}_{k=1}^p, p\neq 1} (-1)^{p-1}(p-1)! \prod_{j=1}^p R_{x_{\nu_j}}(t, \boldsymbol{\tau}_{\nu_j}; n_j, m_j). \hfill (17)$
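The partition sum in (15) is concrete enough to code up. Here is a small sketch (Python; the function names are my own, not standard) that enumerates the set partitions of $\{1, 2, 3, 4\}$ and evaluates the moments-to-cumulants sum (15) for a zero-mean Gaussian variable, recovering the advertised zero fourth-order cumulant:

```python
import math

def set_partitions(elems):
    """Recursively generate all partitions of the list `elems`
    as lists of tuples (blocks)."""
    if len(elems) == 1:
        yield [tuple(elems)]
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        # place `first` in each existing block, or in a new singleton block
        for i in range(len(part)):
            yield part[:i] + [(first,) + part[i]] + part[i + 1:]
        yield [(first,)] + part

def cumulant4(moment):
    """Fourth-order cumulant via the partition sum (15); `moment` maps
    an index block to the corresponding joint moment."""
    total = 0.0
    for part in set_partitions([1, 2, 3, 4]):
        p = len(part)
        prod = 1.0
        for block in part:
            prod *= moment(block)
        total += (-1) ** (p - 1) * math.factorial(p - 1) * prod
    return total

# Sanity check with a zero-mean Gaussian of variance sigma2: its moments
# depend only on the order: E[x] = 0, E[x^2] = sigma2, E[x^3] = 0,
# E[x^4] = 3*sigma2^2.  The fourth-order cumulant must be zero.
sigma2 = 2.0
gaussian_moment = {1: 0.0, 2: sigma2, 3: 0.0, 4: 3.0 * sigma2**2}
c4 = cumulant4(lambda block: gaussian_moment[len(block)])
print(len(list(set_partitions([1, 2, 3, 4]))), c4)  # 15 partitions, c4 = 0.0
```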

If we look at a single Fourier component of this time-varying moment (that is, look at a specific cyclic moment), we obtain

$\displaystyle R_x^\beta (\boldsymbol{\tau}; n,m) = -\sum_{P=\{\nu_k\}_{k=1}^p, p\neq 1} \left[ \sum_{\boldsymbol{\alpha} \boldsymbol{1}^\dagger = \beta} (-1)^{p-1}(p-1)! \prod_{j=1}^p R_{x_{\nu_j}}^{\alpha_j} (\boldsymbol{\tau}_{\nu_j}; n_j, m_j) \right]. \hfill (18)$

In other words, if you know all the lower-order cyclic moments with orders less than $n$, you know the $n$th-order cyclic moment.
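For $n = 4$ this reduction is just the familiar Gaussian moment factorization (Isserlis’ theorem), which we can spot-check by simulation (Python; the covariance matrix below is an arbitrary positive-definite choice of mine):

```python
import numpy as np

# Monte Carlo spot-check of (17) at n = 4: for zero-mean jointly Gaussian
# samples, the fourth-order moment equals the sum over the three pairings
# of products of second-order moments (Isserlis' theorem).
rng = np.random.default_rng(3)
L = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.3, 0.2, 1.0, 0.0],
              [0.1, 0.4, 0.3, 1.0]])
cov = L @ L.T                                    # an arbitrary PD covariance
x = rng.standard_normal((1_000_000, 4)) @ L.T    # zero-mean Gaussian, cov = L L^T

m4 = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])
pred = (cov[0, 1] * cov[2, 3] + cov[0, 2] * cov[1, 3]
        + cov[0, 3] * cov[1, 2])                 # sum over the three pairings

print(m4, pred)  # agree to a couple of decimal places
```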

Now consider some even-order ($n = 2q$) temporal moment function for our binary PAM signal $s(t)$, but also constrain the delay vector to consist of pairs of equal delays,

$\displaystyle \boldsymbol{\tau} = [D_1\ D_1\ D_2\ D_2\ \ldots\ D_q\ D_q] \triangleq \boldsymbol{\lambda}. \hfill (19)$

Since each equal-delay pair contributes a factor $s^2(t+D_k) = 1$, this moment is identically one. It can also be expressed in terms of the cumulant functions,

$\displaystyle R_s(t, \boldsymbol{\lambda}; n, m) = 1 = \sum_{P=\{\nu_k\}_{k=1}^p} \prod_{j=1}^p C_{s_{\nu_j}}(t, \boldsymbol{\lambda}_{\nu_j}; n_j, m_j). \hfill (20)$

Rearranging this equation yields an expression for the $n$th-order cumulant,

$\displaystyle C_s(t, \boldsymbol{\lambda}; n,m) = 1 - \sum_{P=\{\nu_k\}_{k=1}^p, p\neq 1} \prod_{j=1}^p C_{s_{\nu_j}}(t, \boldsymbol{\lambda}_{\nu_j}; n_j, m_j). \hfill (21)$

If we look at a Fourier component of this time-varying cumulant with $\beta \neq 0$, the constant drops out and we get the cyclic cumulant,

$\displaystyle C_s^\beta (\boldsymbol{\lambda}; n,m) = - \sum_{P=\{\nu_k\}_{k=1}^p, p\neq 1} \left[ \sum_{\boldsymbol{\alpha} \boldsymbol{1}^\dagger = \beta} \prod_{j=1}^p C_{s_{\nu_j}}^{\alpha_j}(\boldsymbol{\lambda}_{\nu_j}; n_j, m_j) \right]. \hfill (22)$

In other words, if you know all the lower-order cyclic cumulants with orders less than $n$, you know the $n$th-order cyclic cumulant.

In some sense, then, the binary rectangular-pulse PAM signal is the dual of the Gaussian signal. They play special roles in the interplay between cyclic moments and cyclic cumulants. Gaussian signals have minimum cyclic cumulants and cyclic moments with a sort of redundancy, whereas the binary PAM signals have minimum cyclic moments (under the $\boldsymbol{\lambda}$ constraint) and cyclic cumulants with a sort of redundancy.

I get the feeling I’m missing something fundamental here; some way to see these facts as aspects of a bigger idea. If you see anything, or have corrections or issues with the post, I encourage you to leave a comment below.

## Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.