# CSP Estimators: Cyclic Temporal Moments and Cumulants

How do we efficiently estimate higher-order cyclic cumulants? The basic answer is to first estimate cyclic moments and then combine them using the moments-to-cumulants formula.

In this post we discuss ways of estimating $n$th-order cyclic temporal moment and cumulant functions. Recall that for $n=2$, cyclic moments and cyclic cumulants are usually identical. They differ when the signal contains one or more finite-strength additive sine-wave components. In the common case when such components are absent (as in our recurring numerical example involving rectangular-pulse BPSK), the two are equal, and they are also equal to the conventional cyclic autocorrelation function provided the delay vector is chosen appropriately.

The more interesting case is when the order $n$ is greater than $2$. Most communication signal models possess odd-order moments and cumulants that are identically zero, so the first non-trivial order greater than $2$ is $n = 4$. Our estimation task is to estimate $n$th-order temporal moment and cumulant functions for $n \ge 4$ using a sampled-data record of length $T$.

As is often the case in cyclostationary signal processing, there are two quite different situations in which one wishes to perform an estimate: blind and non-blind. In particular, we either have prior knowledge of the cycle frequencies exhibited by the data (non-blind) or we do not (blind). Some people also consider the case where we have partial prior knowledge of cycle frequencies, such as knowing the symbol rate but not the carrier offset frequency, and they term this partially blind processing.

### Signals and Scenes Used in this Post

We focus on BPSK with independent and identically distributed bits and a square-root raised-cosine pulse function with roll-off of $1$ (excess bandwidth of $100$%). In other words, textbook BPSK.

There are two scenes. The first one is just a single BPSK signal in noise; we’ll call it BPSK1. The symbol rate is $1/10$ (normalized frequencies are used throughout this post), the carrier offset frequency is $0.01$, and the total signal power is $1$ ($0$ dB). The second scene adds another BPSK signal to the first signal; let’s call it BPSK2. The bit rate, carrier offset frequency, and power level for this interfering BPSK signal are $1/11$, $0.002$, and $1.6$ ($2$ dB), respectively.

Estimated PSDs for these two scenes are shown below to set the stage:

### Non-Blind Estimation of Temporal Moments and Cumulants

This kind of estimation is a straightforward application of the formulas that define the temporal moments, temporal cumulants, cyclic temporal moments, and cyclic temporal cumulants. We can simply implement the finite-time versions of the formulas that define these quantities. In particular, since we know all the cycle frequencies that are exhibited by the data under study, we can estimate any $n$th-order cyclic moment function by using a discrete Fourier transform.

Recall that the theoretical $n$th-order cyclic moment function is given by

$\displaystyle R_x^\alpha (\boldsymbol{\tau}; n,m) = \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} \prod_{j=1}^n x^{(*)_j} (t + \tau_j) e^{-i 2 \pi \alpha t} \, dt \hfill (1)$

So a simple non-parametric estimator for the cyclic moment function is just a non-limiting version of this formula,

$\displaystyle \hat{R}_x^\alpha (\boldsymbol{\tau}; n,m) = \frac{1}{T} \int_{-T/2}^{T/2} \prod_{j=1}^n x^{(*)_j} (t + \tau_j) e^{-i 2 \pi \alpha t} \, dt \hfill (2)$

which is easy to implement in discrete time.
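In discrete time, (2) becomes a sum over samples of the delayed (and possibly conjugated) lag product, multiplied by a complex exponential. Here is a minimal sketch in Python/NumPy (the CSP Blog's own code is MATLAB; the function name and interface are choices made for this illustration):

```python
import numpy as np

def cyclic_moment(x, alpha, taus, conj_flags):
    """Discrete-time version of (2): estimate the nth-order cyclic
    temporal moment at cycle frequency alpha for delay vector taus.

    x          : complex data record (1-D array of length T)
    alpha      : cycle frequency in normalized units (cycles/sample)
    taus       : integer delays tau_j, one per factor (n of them)
    conj_flags : booleans, one per factor; True means conjugate that factor
    """
    T = len(x)
    t = np.arange(T)
    prod = np.ones(T, dtype=complex)
    for tau, c in zip(taus, conj_flags):
        factor = np.roll(x, -tau)  # circular shift approximates x(t + tau)
        prod *= np.conj(factor) if c else factor
    # Fourier-series coefficient of the lag product at frequency alpha
    return np.mean(prod * np.exp(-2j * np.pi * alpha * t))
```

For example, the $(2,0)$ cyclic moment of a BPSK signal at its doubled-carrier cycle frequency is `cyclic_moment(x, 2*fc, (0, 0), (False, False))`. The circular shift stands in for a true delay; its edge effects are negligible when the delays are small relative to the record length.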

The $n$th-order temporal moment function can then be estimated as the Fourier series involving all the non-zero $n$th-order cyclic temporal moment functions (those corresponding to the choices made for $\boldsymbol{\tau}$, $n$, and $m$),

$\displaystyle \hat{R}_x(t, \boldsymbol{\tau}; n,m) = \sum_\alpha \hat{R}_x^\alpha (\boldsymbol{\tau}; n,m) e^{i 2 \pi \alpha t} \hfill (3)$

Given that we know all the impure $n$th-order cycle frequencies $\alpha$, we can estimate the time-varying temporal moment function. Suppose we know the impure cycle frequencies for all orders $1, 2, \ldots, n$ and all choices of the number of conjugated factors $m$. Then we can estimate the temporal moment functions for all orders less than or equal to $n$. This set of temporal moment functions can then be combined using the Leonov-Shiryaev moment-to-cumulant formula to estimate the temporal cumulant function

$\displaystyle \hat{C}_x(t, \boldsymbol{\tau}; n,m) = \sum_{P} (-1)^{p-1} (p-1)! \prod_{j=1}^p \hat{R}_x(t, \boldsymbol{\tau}_j; n_j, m_j) \hfill (4)$

where the sum is over the set $P$ of all distinct partitions $\{ \nu_j \}_{j=1}^p$ of the index set $\{1, 2, \ldots, n\}$ (see the post on cyclic cumulants for far too many details).
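To make the combinatorics in (4) concrete, here is a sketch in plain Python of the partition sum (the helper names are hypothetical; `moment_fn` stands for whatever routine returns the estimated temporal moment for a given block of factor indices, either a scalar for fixed $t$ and $\boldsymbol{\tau}$ or an array over $t$):

```python
from math import factorial

def set_partitions(indices):
    """Yield all partitions of a list of indices as lists of blocks."""
    if len(indices) == 1:
        yield [indices]
        return
    first, rest = indices[0], indices[1:]
    for part in set_partitions(rest):
        # put `first` into each existing block, or into a new singleton block
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def cumulant_from_moments(moment_fn, n):
    """Combine lower-order temporal moments per the moment-to-cumulant
    formula (4): sum over all partitions P of the index set of
    (-1)^(p-1) (p-1)! times the product of the blockwise moments."""
    total = 0.0
    for part in set_partitions(list(range(n))):
        p = len(part)
        weight = (-1) ** (p - 1) * factorial(p - 1)
        prod = 1.0
        for block in part:
            prod = prod * moment_fn(tuple(sorted(block)))
        total = total + weight * prod
    return total
```

A quick sanity check: the index set $\{1,2,3,4\}$ has $15$ distinct partitions, and feeding in the moments of a zero-mean Gaussian variable yields a fourth-order cumulant of zero, as it must.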

Now since we are still talking about non-blind estimation, we also know all pure cycle frequencies for all the relevant orders, so we can estimate the cyclic temporal cumulant function (cyclic cumulant) for pure cycle frequency $\alpha$ by extracting the Fourier-series coefficient for $\alpha$ from $\displaystyle \hat{C}_x(t, \boldsymbol{\tau}; n,m).$

Alternatively, we can use the cyclic-moment-to-cyclic-cumulant formula directly, estimating each cyclic moment separately and combining them according to

$\displaystyle \hat{C}_x^\alpha (\boldsymbol{\tau}; n,m) = \sum_{P} \left[ (-1)^{p-1}(p-1)! \sum_{\boldsymbol{\beta}^\dagger \boldsymbol{1}=\alpha} \prod_{j=1}^p \hat{R}_x^{\beta_j} (\boldsymbol{\tau}_j; n_j, m_j) \right]. \hfill (5)$

#### Non-Blind Estimation for a Single BPSK Signal in Noise

Using the method (5) of tediously combining all the proper estimated lower-order cyclic moments on the single-BPSK signal scene yields the following result:

Here we are plotting the warped cyclic-cumulant magnitudes: each $n$th-order cyclic-cumulant magnitude is raised to the power $2/n$ (independently of $m$). This warping is discussed in the first post on modulation recognition. In the present context, it renders the plots easier to grasp when multiple cyclic-cumulant orders are shown, because the dynamic range of the cyclic cumulants can be quite large; the warping reduces that dynamic range. The theoretical cyclic cumulants are relatively easy to compute from our closed-form formula for PSK/QAM:

The error in the estimate is small for this simple case. The error is defined as the absolute value of the difference between the $n$th-order estimate and the $n$th-order theoretical cyclic cumulant, divided by the maximum theoretical cyclic-cumulant magnitude over all $(n, m)$ cyclic cumulants for the signal. This measure tends to de-emphasize the errors in the $(n,m,k)$ cyclic cumulants for larger $k$, which have small values relative to the $(n,m,0)$ cyclic cumulants. Here $k$ refers to the harmonic number in the general PSK/QAM cycle-frequency formula $\alpha(n,m,k) = (n-2m)f_c + k/T_0$, where $f_c$ is the carrier frequency and $1/T_0$ is the symbol rate.
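The cycle-frequency formula is simple enough to encode directly. Here is a short sketch (the function name is a choice for this illustration) that enumerates the $(2,0)$ cycle frequencies of BPSK1 used later in this post:

```python
def cycle_frequency(n, m, k, fc, fsym):
    """General PSK/QAM cycle-frequency formula:
    alpha(n, m, k) = (n - 2m) fc + k * fsym, where fsym = 1/T0."""
    return (n - 2 * m) * fc + k * fsym

# BPSK1 parameters: carrier offset 0.01, symbol rate 1/10
fc, fsym = 0.01, 0.1
alphas_20 = [cycle_frequency(2, 0, k, fc, fsym) for k in (-1, 0, 1)]
# -> approximately [-0.08, 0.02, 0.12]
alpha_pure = cycle_frequency(2, 1, 0, fc, fsym)
# -> 0.0, the cycle frequency of the ordinary average power
```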

#### Non-Blind Estimation for a Two Cochannel BPSK Signals in Noise

Now let’s repeat the non-blind cyclic-cumulant estimation for BPSK1 + BPSK2 + noise:

Notice here that almost all the cyclic-cumulant magnitudes are as expected from the theoretical cyclic cumulants for BPSK, except for those corresponding to $(n, m, k) = (n, n/2, 0)$. That is, those cyclic cumulants corresponding to $(2, 1, 0), (4, 2, 0),$ and $(6, 3, 0)$. The $(2, 1, 0)$ cyclic cumulant is the standard autocorrelation function, and therefore reflects the presence of all signals in the data, including the additive white Gaussian noise. It should have a value equal to the sum of the average power levels of all the signals in the data, which is equal to the average power of the data itself: $P = 1.0 + 1.58 + P_N = 2.6.$ We processed the data by first applying the band-of-interest detector, which automatically finds the occupied band and filters the out-of-band noise away prior to cyclic-cumulant estimation. Therefore, the remaining Gaussian noise power in the processed data is approximately $P_N = N_0 B = 0.1 \times (2/10) = 0.02$, where the occupied bandwidth is twice the symbol rate because the excess bandwidth is $100$%. So the value of the $(2, 1, 0)$ cyclic cumulant shown in each of the two plots above is correct. (Note that the warping is inconsequential for $n=2$ because $2/n = 2/2 = 1$.)

The $(4, 2, 0)$ and $(6, 3, 0)$ cyclic cumulants reflect the presence of all non-Gaussian signal components in the data, because higher-order cumulants for Gaussian variables and processes are zero. But what are their theoretical values here? The $(4, 2, 0)$ cyclic-cumulant magnitude for our $100$-percent-EBW unit-power BPSK1 is equal to $2.28$. Because $n$th-order cumulants scale with signal power as $P^{n/2}$, the corresponding cyclic-cumulant magnitude for the 2-dB BPSK2 is equal to $2.28 (1.58^2) = 5.69$. Cumulants of independent signals add, so the sum of the cyclic-cumulant magnitudes is $2.28 + 5.69 = 7.97$. The warped version of this magnitude is $7.97^{(2/4)} = 2.82$, which is what we see in the graphs. Similarly, the warped $(6, 3, 0)$ cyclic cumulant should be about $4.9$.
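The arithmetic behind these numbers follows from two cumulant properties: a signal scaled to power $P$ picks up a factor $P^{n/2}$ in its $n$th-order cumulant, and cumulants of independent signals add. A quick check (the $2.28$ value is the closed-form BPSK result quoted above, taken as given):

```python
c42_bpsk1 = 2.28                  # |C(4,2,0)| for unit-power SRRC BPSK (from the text)
p2 = 1.58                         # BPSK2 power level (2 dB)
c42_bpsk2 = c42_bpsk1 * p2 ** 2   # nth-order cumulant scales as power^(n/2)
total = c42_bpsk1 + c42_bpsk2     # cumulants of independent signals add
warped = total ** (2 / 4)         # the 2/n warping used in the plots
# c42_bpsk2 is about 5.69, total about 7.97, warped about 2.82
```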

Now, what happens when we try to estimate the cyclic cumulants for BPSK1 in the data BPSK1 + BPSK2, but only use the lower-order cycle frequencies for BPSK1? (That is, ignore all the cycle frequencies for BPSK2 in the formula (5) above.) We obtain the warped cyclic cumulants and corresponding relative error plots shown in Figures 8 and 9.

The errors are largely constrained to the $(n, n/2, 0)$ cyclic cumulants. This is because the cyclic moment function for $(n, n/2, 0)$ contains contributions from various combinations of BPSK2’s lower-order cycle frequencies that add up to zero, and these are not accounted for in (5) when we use only BPSK1’s lower-order cycle frequencies. If BPSK2 had parameters such that combinations of its lower-order cycle frequencies, or combinations of its and BPSK1’s lower-order cycle frequencies, added to a BPSK1 cycle frequency, we’d see additional large errors in the plot above.

### Blind Estimation of Temporal Moments and Cumulants

In blind estimation we don’t know any of the cycle frequencies exhibited by the data, except of course the pure cycle frequency $\alpha = 0$ for $(n,m,k) = (n, n/2, 0)$, which all non-Gaussian power signals possess. So we have to jointly or sequentially estimate the cycle frequencies and the cyclic moments/cumulants.

How might we go about blindly estimating the cycle frequencies for some arbitrary data record?

1. For $n=2$, we have already seen that the thresholded spectral coherence function is a good way to blindly estimate both non-conjugate and conjugate cycle frequencies. That’s a good start, but we also already know that key cycle frequencies don’t show up for many signal types until order $n=4$ (such as QPSK). So we have to do something else to get at these higher-order cycle frequencies.
2. We can find impure cycle frequencies by finding the periodic components in the outputs of particular nonlinear operations on the data, such as $(\cdot)^4$. More on this later, in the section below on partially blind cumulant estimation.
3. We can apply synchronized averaging to the outputs of particular nonlinear operations. This is an interesting approach, both because synchronized averaging is under-appreciated and because the output corresponds to the time-varying temporal moment function itself, rather than a cyclic moment or cyclic cumulant. So, let’s look at the application of synchronized averaging to the cyclic-moment and cyclic-cumulant estimation problem. I think along the way this might help solidify some of the concepts in higher-order cyclostationarity.

#### Synchronized Averaging Applied to Time-Varying Moment/Cumulant Estimation

First let us define synchronized averaging (see also The Literature [1] and related work by Gardner). Suppose we have some power signal $x(t)$ which is defined for all time. For period $T_p$, the synchronized averaging operation is

$\displaystyle A_x(t, T_p) = \lim_{K\rightarrow\infty} \frac{1}{2K+1} \sum_{k=-K}^K x(t + kT_p)r_{T_p} (t), \hfill (6)$

for $t \in [-T_p/2, T_p/2)$. In (6), $r_{T_p}(t)$ is the rectangle function

$\displaystyle r_{T_p}(t) = \left\{ \begin{array}{ll} 1, & -T_p/2 \leq t < T_p/2 \\ 0, & \mbox{\rm otherwise} \end{array} \right. \hfill (7)$

So, for example, $A_x(0, T_p)$ is the average of $\{x(kT_p)\}$ over all integers $k$. If there is a periodic component of $x(t)$ that has period $T_p$ or a period that is a submultiple of $T_p$, it will survive the averaging operation in (6). If not, the average will be zero.

If the periodic component in $x(t)$ is a sine wave, then you’re probably thinking: why not just Fourier transform $x(t)$ and look for the largest peak? And that’s fine, and we do that all the time in CSP and in signal processing generally, including in a later part of this post. But if the periodic component is “sophisticated” or “rich,” meaning that its Fourier series has many components with similar Fourier-coefficient magnitudes, Fourier analysis can be tricky in terms of deciding what is a significant peak and what isn’t (a thresholding problem).

In practice, we can implement (6) in discrete time and for finite $K$ determined by the length of the available data record.
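For a candidate period equal to an integer number of samples, the finite-$K$ version of (6) is just a reshape-and-average. A minimal Python/NumPy sketch (the blog's own processing is in MATLAB; the function name here is a choice for illustration):

```python
import numpy as np

def synchronized_average(x, Tp):
    """Finite-K synchronized average: chop the record into K whole
    segments of Tp samples and average them pointwise, yielding one
    period's worth of samples (the estimate A_x(t, Tp, K))."""
    K = len(x) // Tp                  # number of whole candidate periods
    segments = x[:K * Tp].reshape(K, Tp)
    return segments.mean(axis=0)
```

To estimate an $(n,m)$ temporal moment function, apply this to the lag product, e.g. `synchronized_average(x**2, Tp)` for $(2,0)$ with zero delays.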

So how does this apply to blind estimation of temporal moments and cumulants for a cyclostationary signal?  Recall that an $n$th-order temporal moment function is just the (almost) periodic component of a homogeneous nonlinear transform of the signal,

$\displaystyle R_x(t, \boldsymbol{\tau}; n,m) = E^{\{\alpha\}} \left[ \prod_{j=1}^n x^{(*)_j} (t + \tau_j) \right]. \hfill (8)$

If we apply the synchronized averaging operation to the delay product in (8), ideally we should obtain $R_x(t, \boldsymbol{\tau}; n,m)$. That is, we obtain the whole time-varying moment function itself in one shot.

However … and you knew there would be complications … the synchronized averaging operation requires us to select the parameter $T_p$, which is presumably related to the cycle frequencies exhibited by $x(t)$, which we are saying we don’t know (blind processing context). So how do we choose $T_p$?

What follows is an approach that builds on the fundamental tool of synchronized averaging, but is still fully blind. Let’s call the finite-$K$ synchronized average estimate $A_x(t, T_p, K)$. A periodized version of this signal is built by concatenating multiple instances of $A_x(t, T_p, K)$ until the length of the result is just greater than the length $T$ of the data record $x(t)$. The periodized waveform is denoted by $Z_x(t, T_p, K)$.

The power of the error between $x(t)$ and $Z_x(t, T_p, K)$ can be used to assess whether the signal $A_x(t, T_p, K)$ captures an actual periodic component in $x(t)$ or not. If $A_x(t, T_p, K)$ represents an actual periodic component in $x(t)$, then subtracting its periodized version from $x(t)$ will produce a signal with smaller power than $x(t)$. Before applying this idea to our BPSK data records, let’s look at a simple case to fix the ideas.

The data record for our simple illustrative case is a single sine wave in white Gaussian noise; the sine wave and the noise each have unit power, and the frequency of the sine wave is $1/600$ (normalized times and frequencies are used, as nearly always at the CSP Blog). The period is therefore $600$ samples, but the signal is also periodic with period $1200$ samples, $1800$ samples, and in general $600k$ samples for all integers $k > 0$.

We apply the synchronized averager to $65536$ samples of the sine-wave data record, and we store the power in the error signal as well as the obtained period for the value of $T_p$ that achieves the minimum error power. Here is the plot of the error power versus candidate $T_p$:

Notice that the error power (or “residual” power) reaches a minimum of $1.0$ at $T_p = 600$. This indicates that the full power of the signal component is removed by the periodized waveform using the $T_p = 600$ period because we know the noise power is unity. In this way, the periodicity of the input is blindly estimated.
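This residual-power search can be reproduced in miniature with a few lines (a sketch assuming a unit-power sine at frequency $1/600$, unit-power white Gaussian noise, $65536$ samples, and an arbitrarily chosen seed):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 65536
t = np.arange(N)
# unit-power sine (amplitude sqrt(2)) plus unit-power white Gaussian noise
x = np.sqrt(2) * np.cos(2 * np.pi * t / 600) + rng.standard_normal(N)

def residual_power(x, Tp):
    """Synchronized-average at candidate period Tp, periodize the result,
    subtract it from the record, and return the power of what is left."""
    K = len(x) // Tp
    one_period = x[:K * Tp].reshape(K, Tp).mean(axis=0)
    Z = np.tile(one_period, K)        # the periodized waveform Z_x(t, Tp, K)
    return np.mean((x[:K * Tp] - Z) ** 2)

candidates = range(500, 701, 10)
best = min(candidates, key=lambda Tp: residual_power(x, Tp))
# best == 600, and residual_power(x, 600) is close to 1.0 (the noise power)
```

At the wrong candidate periods, the synchronized average of the sine largely cancels, so the periodized waveform removes almost nothing and the residual power stays near the full input power of $2$.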

Things get more complex when the input contains a sophisticated periodic component, made up of many different (Fourier-series) sine-wave components, but the basic idea is the same: look for a minimum in the error-power function to estimate the period.

So now let’s apply the synchronized averaging operation to the noisy BPSK1 signal. To find the various $n$th-order temporal cumulant functions, we can first find the various-order temporal moment functions using synchronized averaging and then combine them using the moment-to-cumulant formula (4). For the synchronized-averaging results here, I always take the delays $\tau_j$ to be zero. This is fine for most signal types. An exception is those PSK/QAM signals that employ rectangular pulses. (See the gallery of cyclic correlations for where the cyclic-autocorrelation magnitudes peak as functions of the delay $\tau$.)

First up is $(n,m) = (2, 0)$:

For this input, what is the period? It’s not quite as simple as in the sine-wave illustration above. For $(n,m) = (2,0)$, the BPSK1 cycle frequencies are $2f_c + k/T_0 = 0.02 + 0.1k$ for $k \in \{-1, 0, 1\}$. That set of numbers turns out to be $\{-0.08, 0.02, 0.12\}$, which corresponds to periodicities with periods $\{12.5, 50, 8.3333\}$. Each of these periods divides the candidate period $T_p = 50$. The synchronized averager shows a nice local minimum at $T_p = 50$, and of course there are other local minima at multiples of $50$. The algorithm picks $T_p = 500$ and produces the following single-period waveform:

Now, the plot of the period so obtained is the actual time-varying moment function. We almost never directly plot this function, preferring to focus on its Fourier-series components, the cyclic moments. Taking the discrete Fourier transform leads to the cyclic moments for $(n,m) = (2, 0)$:

We can continue this kind of analysis, finding the time-varying temporal moment functions for all needed $(n,m)$, and then simply combining them according to the moment-to-cumulant formula. The result is a blind estimate of the temporal cumulant function, and the cyclic cumulants are obtained by Fourier transforming the temporal cumulant and then picking peaks with high local SNR. It is important to note that the plots of the CTMFs are for our edification; they show agreement between theory and estimate. If we are looking only for pure cycle frequencies, we don’t need to do that Fourier analysis until we have obtained a temporal cumulant.

Figures 15 through 22 show temporal moment functions and their Fourier components for other choices of $(n,m)$. Comparison with directly estimated cyclic moment functions confirms the efficacy of synchronized averaging.

The cyclic cumulants obtained through synchronized averaging, the moment-to-cumulant formula, and Fourier analysis are shown in Figures 23-35, along with directly estimated cyclic cumulants (using the cyclic-moment-to-cyclic-cumulant formula (5)).

We conclude that the synchronized averaging approach to blind estimation of cyclic cumulants can work, provided the final temporal cumulant function can be analyzed into its significant Fourier components through some thresholding operation.

Now let’s apply this machinery to the data that consists of BPSK1+BPSK2+noise. Recall that BPSK1 has unit power and BPSK2 has power level $1.6$, or $2$ dB. Also, the bit rate for BPSK1 is $1/10$ and that for BPSK2 is $1/11$. The carrier offset for BPSK1 is $0.01$ and that for BPSK2 is $0.002$.

And the cyclic cumulants:

### Partially Blind Estimation of Temporal Cumulants

This post is already overly long; thanks to those who have made it to the end. I’m running out of steam, but we have the lingering question of “What to do about partially blind estimation of cyclic cumulants?” A typical example is the case of a digital QAM signal in noise. The symbol rate is known, but the carrier offset and modulation type (exact constellation) are not. For such signals, the carrier frequency is needed to specify the cycle frequencies starting with order $n=4$.

We can definitely apply synchronized averaging to the $(n,m) = (4,0)$ delay product and look for the peak in the Fourier transform of the result. Alternatively, we can simply look at the Fourier transform of the lag product itself; the peak will occur at a cycle frequency equal to the quadrupled carrier. Other signal types may have to be handled in ways that are specific to their cycle-frequency structure; the synchronized-averaging method, however, is universally applicable.
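As a sketch of the quadrupled-carrier idea (using rectangular pulses rather than the SRRC pulses of this post's scenes, purely to keep the code short; for rectangular-pulse QPSK the $(4,0)$ lag product is exactly a single complex sine wave at $4f_c$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T0, fc = 20000, 10, 0.01           # record length, samples/symbol, carrier offset
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N // T0) / np.sqrt(2)
baseband = np.repeat(symbols, T0)     # rectangular-pulse QPSK at baseband
x = baseband * np.exp(2j * np.pi * fc * np.arange(N))

lag_product = x ** 4                  # (n, m) = (4, 0): fourth power, no conjugates
spectrum = np.abs(np.fft.fft(lag_product))
f_peak = np.fft.fftfreq(N)[np.argmax(spectrum)]
# f_peak is 0.04 = 4 * fc: the quadrupled carrier
```

In a real partially blind setting, this peak would be detected against a threshold and then divided by four to obtain the carrier-offset estimate.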

As always, please note in the comments any questions, errors, or suggestions for improvement related to this post.

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.

## 23 thoughts on “CSP Estimators: Cyclic Temporal Moments and Cumulants”

1. Ramesh Rao says:

I would like some help regarding estimation of fourth-order cyclic cumulants. Can someone post the code for that? Much appreciated!

2. quyangdl says:

Hi, Dr Spooner

In the figure, “warped measured cc mags for 0-dB bpsk1 in BPSK1+BPSK2 DATA using all valid lower-order CFs”, the C(2,0,0) is 1, which is equal to the power of bpsk1.

But if we want the cc mags of data(bpsk1+bpsk2) together, the C(2,0,0), C(2,1,0) and C(2,1,1) will be same and be equal to the average power of the data itself. The second order cyclic moment CM(2,0,0) will be equal to E^{\alpha_{1}}[s^2]+E^{\alpha_{2}}[s^2] , where s=bpsk1+bpsk2, \alpha_{1} is cyclic frequency of bpsk1, and \alpha_{2} is cyclic frequency of bpsk2. Meanwhile, The second order cyclic moment is equal to the second order cyclic cumulant. In addition, the fourth order CCs will be same, such as C(4,0,0) = C(4,2,0) =C(4,1,0).

Am I right?

Thanks

Andy

1. Hey Andy. Thanks for looking at that estimation post.

In the figure, “warped measured cc mags for 0-dB bpsk1 in BPSK1+BPSK2 DATA using all valid lower-order CFs”, the C(2,0,0) is 1, which is equal to the power of bpsk1.

Yes.

But if we want the cc mags of data(bpsk1+bpsk2) together, the C(2,0,0), C(2,1,0) and C(2,1,1) will be same and be equal to the average power of the data itself.

No. Perhaps my notation is confusing you here. C(n, m, k) means the cyclic cumulant of order n, using m conjugations, and harmonic number k. This applies to QAM/PSK/PAM/CPM/OQPSK/MSK etc. for which the cycle frequencies conform to the formula

alpha(n,m,k) = (n-2m)fc + k/T0

where fc is the carrier offset and 1/T0 is the symbol rate.

Now, the two signals, BPSK1 and BPSK2, have different carrier offsets and different symbol rates. So if we say “C(2, 0, 0)” we have to say which signal this is relative to–what are fc and T0 for this cyclic cumulant? For the BPSK1 + BPSK2 data, C(2, 0, 0) for BPSK1 corresponds to the cycle frequency alpha = 2fc_1 + 0/T0_1 = 2fc_1 = 2(0.01) = 0.02.

Cyclic cumulant C(2, 1, 0) is a special case because it doesn’t depend on the carrier offsets or the symbol rates, it corresponds to alpha = 0 = (2 – 2(1))fc + 0/T0 = 0. So, C(2, 1, 0) is in fact the average power of the data (= P1 + P2 + PNoise). C(2, 0, 0) for BPSK1 will be the doubled-carrier cyclic cumulant for BPSK1 as if BPSK2 was not present. C(2, 0, 0) for BPSK2 will be the doubled-carrier cyclic cumulant for BPSK2 as if BPSK1 was not present.

The second order cyclic moment CM(2,0,0) will be equal to E^{\alpha_{1}}[s^2]+E^{\alpha_{2}}[s^2] , where s=bpsk1+bpsk2, \alpha_{1} is cyclic frequency of bpsk1, and \alpha_{2} is cyclic frequency of bpsk2.

I think you’re mixing up some notation and concepts here. The notation E^alpha[x(t)] means the sine-wave extraction operator applied to x(t). If y = E^alpha[x(t)], then if x(t) is cyclostationary, y is a function of time, y(t). It extracts all the periodic components of its input x(t). On the other hand CM(2, 0, 0) is a cyclic moment, with order 2, no conjugations, and harmonic of 0. It is a number, not a function of time (although it is a function of delay tau, which is suppressed here because I usually consider tau = 0).

Hope that helps.

3. Charan says:

Hi, Dr. Spooner

I have been trying to do the estimation of second order temporal moment the same way as you explained in the blind estimation part of this post with the BPSK you have given in the downloads section of this blog “MATLAB script for rectangular-pulse BPSK example” (I changed fc to 0.01 for getting the same results as in this post). But, while calculating Synchronous averaging and plotting the power difference between original Vs the extracted sine wave, I could not get the little dips at odd multiples of 25, which were obtained by you as shown in this figure linked below

https://cyclostationary.files.wordpress.com/2017/09/bpsk_2_0_tmf_residual.jpg

I have written the Synchronous averaging code myself and also used the built-in “tsa” function in MATLAB. Nonetheless, I was not getting the small dips when I considered Tp = 500 or 50. And hence I was getting only one peak in the cyclic moment of (2,0) at 0.02. I have taken \tau as zero and just tried to calculate R_x(2,0) = TSA[(y_of_t).*(y_of_t)]. Can you help me find what I have done wrong in the calculation?

Thanks,
Charan

1. Thanks for the comment Charan, and for studying that post on estimating cyclic moments and cumulants.

I didn’t know about MATLAB’s tsa, so thanks for that tip!

The good news is that I don’t think you’ve done anything wrong. You won’t be able to replicate my results unless you use the same kind of signal. Recall that I used BPSK with square-root raised-cosine pulses ($r=1$) in that post. You used my code that creates a rectangular-pulse BPSK signal.

So should the result be the same? No, because when you look at squaring the rectangular-pulse BPSK signal (and “squaring” is implied by $(n,m) = (2,0)$), you get a single sine-wave at the squarer output. It has frequency equal to the doubled carrier frequency which is $0.02$. But when you square the square-root raised-cosine BPSK signal, you get the doubled carrier plus two other sine-wave components. What about other quadratic nonlinearities? What if we don’t restrict the delays in the second-order lag product to zero?

I redid the synchronized averaging using a rectangular-pulse BPSK signal with the same parameters as you mention in your comment above. Here is the result (which should match what you’ve been seeing) for delays of zero in the lag product:

We see only residual-power dips for periods corresponding to harmonics of the doubled carrier ($k*(1/0.02) = k*50$).

Then I set the delay of one of the factors in the lag product to $5$ and kept the other at $0$. The synchronized averaging result is:

See if you can replicate.

And try to understand why the smaller dips happen at multiples of $25$ samples. Hint: What are the other $(2,0)$ cycle frequencies for this signal?

4. Hi, Dr.Spooner

I followed your instructions. I was able to get the synchronized averaging result for the sine wave. However, I could not get the synchronized averaging results for the BPSK signal, which are shown after “First up is (n,m) = (2, 0):”

My procedure is as following,
1) Generate a BPSK signal x(t).
2) Set a Tp vector.
3) According to different Tp, select different length (Tp length) signals s(t) from x(t).
4) perform average operation for different s(t).^2. r(t) = 1/N(\sum s(t).^2 )
5) combine multiple r(t) into an new signal f(t), which has same length as x(t).
6) compare the power difference between f(t) and x(t). Where f(t) is already in power format.

Did I miss anything?

Thanks.

Best,

Andy

1. Your description is a bit vague, so there could be errors lurking in places I can’t see yet… But I would not do Steps 3 and 4 the way you’re doing them. I would apply the exact same code I had created for the case of a sine-wave input to every other signal. In other words, take your BPSK signal s(t), square it, then use that as the input to your synchronized averager.

Step 6 is not clear to me here. You want to find the variance of the difference between f(t) and x(t). But I don’t understand what “f(t) is already in power format” means.

Overall, if you want to leave a more detailed comment, I believe we will be able to find the problem(s).

1. Andy says:

Hi, Dr.Spooner

Thanks for your help. I got the result. However, I don’t quite understand: if we want to reveal the symbol duration, why do we need the time delay tau in the product function x(t)x(t+tau)? I read some of your papers, and the time delay tau is everywhere, such as in the spectral correlation function, the cyclic cumulant function, and so on.

How can we realize the delay tau? And when can we set tau equal to 0?

Thanks

1. Glad you got a good result for BPSK!

Well, for the square-root raised-cosine pulses used by BPSK1 and BPSK2 here in this post, we don’t need any non-zero delays $\tau_j$. So in all those results I posted, the delay vector was all zeros. I’ve updated the post to make that explicit.

If you did want to use non-zero delays, you can approximate the delay in MATLAB using circshift.m. By including leading and trailing zeros, and then applying circshift.m, you can also avoid the edge effects implied by circularly shifting. If the delays are small relative to the length of the data block to be processed, the circular shifting isn’t a big error factor.

Yes, I do typically use a delay vector of zero in cyclic moments and cumulants, but not in the spectral correlation function. When we compute an estimate of the spectral correlation function for a particular value of cycle frequency $\alpha$, we can inverse transform that estimate (over frequency $f$) to obtain the cyclic autocorrelation for a large number of delays. So, in effect, the spectral correlation function includes many values of delay. Similar statements apply for the cyclic polyspectrum.

1. Andy says:

Hi, Dr.Spooner

For the synchronized averaging method: suppose we set a normalized carrier frequency f whose reciprocal is not an integer. In this case, we cannot set the period T_p to a fractional number of samples in MATLAB, so T_p will not be exactly equal to 1/f.

If I use an integer T_p vector, I get dips of different sizes at T_p, 2T_p, 3T_p… If we choose a carrier frequency f such that 1/f is an integer, I get multiple identical dips at 1/f, 2/f, 3/f….

So, how could we deal with this case? Or do I miss anything ?

Thanks,

Best.

Andy

2. I don’t think you are missing anything. The different depths of the dips arise because the different periods $T_p, 2T_p, \ldots$ have different distances to the true periods $1/f, 2/f, \ldots$.

Oversampling will help. Also, you could fashion an iterative algorithm that finds the minima, estimates the carrier offset, shifts the data by that estimate, then repeats.

5. Hi, Dr.Spooner

I have a question for blind calculate CTCM of a BPSK/QPSK via synchronized averaging DFT.

Could I believe that synchronized averaging method is kind of perform general sine-wave extraction operator, like Equation (20) on the link https://cyclostationary.blog/2016/02/14/cyclic-temporal-cumulants/ ?

Assume that we have a QPSK signal S(t), If we get the correct Tp(actually carrier frequency), we can get its synchronized averaging result via Tp, which should be Fs/(4Fc）, and then we can get its CTCM result through taking a FFT operation for the .

I am right?

Thanks.

1. Hey Andy.

Your comment is garbled and difficult to read. You’ve crossed out the last few lines (perhaps accidentally).
Can you rewrite it and repost? Also, what do you mean by “CTCM”? I discuss the CTMF and the CTCF in various posts. These are the cyclic temporal moment function and cyclic temporal cumulant function, or cyclic moment and cyclic cumulant, respectively.

6. Hi, Dr.Spooner

I have a question about blindly calculating the cyclic moment of a BPSK/QPSK signal via synchronized averaging and the DFT.

Can I view the synchronized averaging method as a kind of general sine-wave extraction operator, like Equation (20) at the link https://cyclostationary.blog/2016/02/14/cyclic-temporal-cumulants/?

Assume we have a QPSK signal S(t). If we obtain the correct T_p (really the carrier frequency) via the synchronized averaging method, which should be Fs/(4Fc), we can compute its synchronized averaging result using that T_p, and then we can get its cyclic moment by taking an FFT of the synchronized averaging result.

Am I right?

Thanks.
Best,

Andy

1. Can I view the synchronized averaging method as a kind of general sine-wave extraction operator, like Equation (20)

Yes, SA can do that for you, but you have to be careful because with DSP we have a sampled time-series, and so without resampling, you are constrained to check for candidate periods $T_p$ that are integer multiples of the sampling increment.

Assume we have a QPSK signal S(t). If we obtain the correct T_p (really the carrier frequency) via the synchronized averaging method, which should be Fs/(4Fc), we can compute its synchronized averaging result using that T_p

Well, for QPSK, if we look at the $(n, m) = (4, 0)$ lag product, we see that there are three cycle frequencies: $\alpha = 4f_c, \alpha = 4f_c+f_{sym},$ and $\alpha = 4f_c - f_{sym}$ (for SRRC QPSK). If those three cycle frequencies are commensurate, they will all correspond to some harmonic of a common period that might be found using synchronized averaging. Otherwise, depending on your sampling rate, you might obtain a candidate period from synchronized averaging that corresponds to just one of them.

and then we can get its cyclic moment by taking an FFT of the synchronized averaging result.

If the synchronized averaging works out and we’ve identified the fundamental period for the periodic temporal variation of the $(4,0)$ lag product, then yes, if you create a signal by periodizing the obtained period, you can analyze its sine-wave components using the Fourier transform.
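A minimal numerical sketch of that two-step process (synchronized averaging at a known integer period, then an FFT of the averaged period to read off the sine-wave components) might look like the following; the period, harmonic amplitudes, and noise level are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T_p = 25                 # assumed integer period, in samples
n_periods = 400
t = np.arange(T_p * n_periods)

# a periodic component with harmonics at 1/T_p and 3/T_p, buried in noise
periodic = (np.cos(2 * np.pi * t / T_p)
            + 0.5 * np.cos(2 * np.pi * 3 * t / T_p + 1.0))
x = periodic + rng.standard_normal(t.size)

# synchronized averaging: fold the data at the candidate period and average
one_period = x.reshape(n_periods, T_p).mean(axis=0)

# FFT of the averaged period exposes the harmonic (sine-wave) components;
# bins 1 and 3 should show magnitudes near 0.5 and 0.25
mags = np.abs(np.fft.fft(one_period)) / T_p
```

The averaging suppresses the noise by a factor of the number of periods, so even weak harmonics become visible in the FFT of the single averaged period.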

7. Hi, Dr.Spooner

Does the synchronized averaging method have better frequency resolution than the FFT?

It seems that the synchronized averaging method requires more data to get good frequency resolution, which enables us to find multiple peaks (if there are more signals). However, if we use an FFT with a longer FFT size, we can still get good frequency resolution.

So, what is the big difference/benefit between the synchronized averaging method and the FFT?

Thanks,

Best,
Andy

1. In my context of CSP, synchronized averaging is a time-domain process. Its benefit is that it can estimate the period of a periodic signal component even when the waveform within one period is highly complex. In that case, the periodic signal has a rich harmonic structure; sometimes no single harmonic is particularly strong. Fourier methods can then have a hard time detecting the various harmonics. So Fourier analysis is particularly good for finding strong sine-wave components in a data set, and synchronized averaging is good for finding periods of periodic signals that do not possess a single dominant sine-wave component.

If we want to talk about frequency resolution, we can consider estimating the fundamental frequency of a periodic waveform with synchronized averaging. Suppose the period is $T_p$ samples and the sampling rate is one. Then, when things are going well, we might estimate the period as $\hat{T}_p = T_p-1$, $T_p$, or $T_p + 1$. The corresponding frequency estimate is $1/\hat{T}_p$.

The error in the frequency estimate is

$\displaystyle e = \left| \frac{1}{T_p} - \frac{1}{\hat{T}_p} \right|$

or

$\displaystyle e = \frac{|\hat{T}_p - T_p |}{T_p \hat{T}_p}$

Supposing the estimate is good, so that $|\hat{T}_p - T_p| = 1$ at worst, we have

$\displaystyle e = \frac{1}{T_p\hat{T}_p} \approx \frac{1}{T_p^2}$

The normalized error is

$\displaystyle e_n = \frac{e}{1/T_p} = \frac{1}{\hat{T}_p} \approx \frac{1}{T_p}$.

So the resolution depends on the period length $T_p$. This is why oversampling helps in synchronized averaging.
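The error expressions above can be checked with a few lines of arithmetic (the period of 100 samples is arbitrary):

```python
# quick numerical check of the period/frequency resolution argument
T_p = 100.0              # true period in samples (sampling rate of one)
f_true = 1.0 / T_p
T_hat = T_p + 1.0        # a good estimate: off by one sample

e = abs(f_true - 1.0 / T_hat)   # absolute frequency error, equal to 1/(T_p * T_hat)
e_n = e / f_true                # normalized error, about 1/T_p
```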

8. Hi, Dr. Spooner

The higher-order cyclic cumulant values are sensitive to the signal power. Is there any way to remove the power information from the cyclic cumulant?

Thanks

1. Strictly speaking, no. Unless you multiply the data by zero, in which case all the cyclic cumulants are zero and there is no sensitivity to the power.

But maybe you mean normalize the cyclic cumulant or data so that the cyclic cumulant corresponds to a signal power of, say, one, no matter what the original power was. In that case, you can normalize the data by the square root of the signal power, if you know it. If you don’t know it, you can estimate it. If you don’t want to do that, and you know the modulation type, you can compute the cyclic cumulants for the original data, then determine the signal power by comparing, say, $C_x^0(0; 2, 1)$ with the corresponding cumulant from the known unit-power modulated signal. That will give you an estimate of the signal power and you can normalize after that.

If you don’t know the modulation type, or if the signal is cochannel with other signals, you’ll have to extend these ideas.
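As a toy sketch of the known-modulation-type approach: below, a unit-magnitude BPSK-like symbol stream with no pulse shaping or noise stands in for the signal, the power scale of 1.6 is made up, and the unit-power reference value $C_x^0(0; 2, 1) = 1$ is assumed for this idealized modulation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10000
s = rng.choice([-1.0, 1.0], size=N)   # unit-power BPSK-like symbol stream

power_true = 1.6                       # unknown signal power, to be recovered
x = np.sqrt(power_true) * s

# For this zero-mean signal, C_x^0(0; 2, 1) reduces to the average power E|x|^2.
c21_hat = np.mean(np.abs(x)**2)
c21_unit = 1.0                         # known value for the unit-power modulation

power_est = c21_hat / c21_unit         # estimated signal power
x_norm = x / np.sqrt(power_est)        # data normalized to unit power
```

After the normalization, cyclic cumulants computed from `x_norm` correspond to a unit-power signal regardless of the original power.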

9. Andy says:

Hi, Dr. Spooner

In the figure showing the cyclic cumulant result for BPSK1 + BPSK2, do you assume that the two signals come through the same channel? And that they have the same carrier phase shift? In a real system, BPSK1 and BPSK2 may come through different (or the same) channels with different phase shifts. In that case, the cyclic cumulant result for BPSK1 + BPSK2 will not be unique; it will change with the two random phase shifts. Please correct me if I am wrong.

So, if I want to classify BPSK1+BPSK2 with other communication signal, I have to use different order cumulant instead of cyclic cumulant, right? Because the cyclic cumulant is sensitive to phase shift, and in real-time transmission we cannot assume two signals have the same phase shift. However, the cumulant is a statistical value, which may not be sensitive to phase shift.

Thanks,
Best,
Andy

1. In the figure showing the cyclic cumulant result for BPSK1 + BPSK2, do you assume that the two signals come through the same channel?

No. I do not assume that they propagate through the same physical channel, nor do I assume that they are emitted from the same antenna.

And that they have the same carrier phase shift?

No. I do not assume they have the same carrier phase.

In a real system, BPSK1 and BPSK2 may come through different (or the same) channels with different phase shifts. In that case, the cyclic cumulant result for BPSK1 + BPSK2 will not be unique; it will change with the two random phase shifts. Please correct me if I am wrong.

The plots are associated with the case in which the two BPSK signals have slightly different symbol rates and carrier frequency offsets. Their relative delay (symbol-clock phase) and carrier phase do not matter to the displayed results, where I am showing only the magnitudes of the estimated cyclic cumulants. If the symbol-clock phases and/or carrier phases were changed, the phases of the complex-valued cyclic cumulants would also change, but not the magnitudes. The estimation processes do not require knowledge of the phases; the phases are simply carried along in the various estimates of the complex-valued cyclic moments and in the synchronized-averaging process.
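That phase-blindness of the magnitudes can be seen in a tiny idealized example: for a rectangular-pulse BPSK-like stream at one sample per symbol (an idealization, not the post’s SRRC signals), the $(n, m) = (2, 0)$ cyclic moment at $\alpha = 2f_c$ has magnitude one for any carrier phase, while its phase tracks twice the carrier phase:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4096
fc = 0.05                                 # carrier offset (normalized)
t = np.arange(N)
bits = rng.choice([-1.0, 1.0], size=N)    # one sample per symbol, rectangular pulse

def cyclic_moment_20(x, alpha):
    """(n, m) = (2, 0) cyclic temporal moment estimate at cycle frequency alpha."""
    return np.mean(x**2 * np.exp(-2j * np.pi * alpha * t))

# two carrier phases; the magnitude is invariant, the phase doubles
moments = [cyclic_moment_20(bits * np.exp(1j * (2 * np.pi * fc * t + phi)), 2 * fc)
           for phi in (0.0, 0.7)]
mags = [abs(c) for c in moments]
```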

If I want to classify BPSK1+BPSK2 with other communication signal, I have to use different order cumulant instead of cyclic cumulant, right?

I don’t understand the grammar of “classify BPSK1+BPSK2 with other communication signal”. But you can successfully process the BPSK1 plus BPSK2 signals I use in this post using cyclic cumulants because the signals are statistically independent, and they have distinct modulation parameters (symbol rate and carrier offset). The phases do matter more, however, in the special case in which BPSK1 and BPSK2 have identical symbol rates and carrier frequencies, but that is a topic I’ve not yet introduced on the CSP Blog.

Because the cyclic cumulant is sensitive to phase shift, and in real-time transmission we cannot assume two signals have the same phase shift. However, the cumulant is a statistical value, which may not be sensitive to phase shift.

No need to assume anything about the phases provided the two signals have distinct modulation parameters, in which case almost all the cyclic cumulants for each signal can be extracted from the data that consists of their noisy sum. The cyclic cumulants that cannot be extracted for each signal individually are those that correspond to $(n, m) = (n, n/2)$ and $\alpha = 0$, as noted in the post. But those are just a few cyclic cumulants relative to all those that make up the full set of cyclic cumulants.

I think your ideas about this might be more focused if you had a particular modulation classification approach or algorithm in mind. Then you can check for yourself whether the features for one signal can be extracted in the presence of the other signal.

1. Andy says: