Resolution in Time, Frequency, and Cycle Frequency for CSP Estimators

Unlike conventional spectrum analysis for stationary signals, which involves two kinds of resolution, CSP has three kinds of resolution that must be considered in all CSP applications.

In this post, we look at the ability of various CSP estimators to distinguish cycle frequencies, temporal changes in cyclostationarity, and spectral features. These abilities are quantified by the resolution properties of CSP estimators.

Resolution Parameters in CSP: Preview

Consider performing some CSP estimation task, such as using the frequency-smoothing method, time-smoothing method, or strip spectral correlation analyzer method of estimating the spectral correlation function. The estimate employs T seconds of data.

Then the temporal resolution \Delta t of the estimate is approximately T, the cycle-frequency resolution \Delta \alpha is about 1/T, and the spectral resolution \Delta f depends strongly on the particular estimator and its parameters. The resolution product \Delta f \Delta t was discussed in this post. The fundamental result for the resolution product is that it must be very much larger than unity in order to obtain an SCF estimate with low variance.

What do we usually mean by spectral resolution as the term is used in conventional spectrum analysis? It sums up the ability of the estimator to display distinct responses to data components that have distinct spectra. That idea is somewhat abstract, so spectral resolution is often described more concretely as the ability of the estimator to reveal the presence of two closely spaced spectral components from, typically, two equal-strength sine waves at the estimator input.

Spectral Resolution in the Periodogram

The estimators I’m interested in are variants on the two basic non-parametric spectrum estimators that apply either time smoothing (temporal averaging) or frequency smoothing (spectral averaging) to the periodogram. The former is called the time-smoothing method and the latter the frequency-smoothing method of spectrum estimation, although some people prefer the eponymous alternatives: the Bartlett (time-smoothing) and Daniell (frequency-smoothing) methods.

Both methods take as their starting point a periodogram. In the case of the time-smoothing method, multiple periodograms are computed and averaged over time, so that the spectral resolution of the result must be the spectral resolution of the involved periodograms. In the case of the frequency-smoothing method, a single periodogram is convolved with a pulse-like smoothing kernel, so that the spectral resolution cannot be finer than the native spectral resolution of the involved periodogram, and is approximately equal to the width of the pulse-like kernel whenever that width exceeds the native resolution. So we must start by understanding the spectral resolution properties of the periodogram.

Let’s review the periodogram. First, we define the sliding (over time) finite-time Fourier transform as

\displaystyle X_T(t, f) = \int_{t-T/2}^{t+T/2} x(u) e^{-i2 \pi f u}\, du. \hfill (1)

The periodogram is the scaled squared magnitude of this quantity:

\displaystyle I_T(t, f) = \frac{1}{T} \left| X_T(t, f) \right|^2. \hfill (2)

What is the frequency resolution of this estimator? Since it integrates over a total of T seconds, you might guess it is approximately 1/T. Let’s find out.

Consider a noiseless input x(t) that consists of a single complex-valued sine wave with frequency f_0, amplitude A_0, and phase \phi_0:

\displaystyle x(t) = A_0 e^{i 2 \pi f_0 t + i \phi_0}. \hfill (3)

What is the periodogram for this input? Let’s first calculate the sliding Fourier transform X_T (t, f),

\displaystyle X_T(t, f) = \int_{t-T/2}^{t+T/2} (A_0 e^{i 2 \pi f_0 u + i \phi_0}) e^{-i 2 \pi f u} \, du \hfill (4)

\displaystyle = A_0 e^{i \phi_0} \int_{t-T/2}^{t+T/2} e^{-i 2 \pi (f - f_0) u} \, du \hfill (5)

If f - f_0 = 0 then X_T(t, f) = A_0 e^{i \phi_0} T. Otherwise, we evaluate the integral

\displaystyle X_T(t,f) = A_0 e^{i\phi_0} \left. \frac{e^{-i 2 \pi (f-f_0)u}}{-i2\pi (f-f_0)} \right|_{u=t-T/2}^{t+T/2} \hfill (6)

\displaystyle = \frac{A_0 e^{i\phi_0}}{-i2\pi (f-f_0)} \left[ e^{-i2\pi(f-f_0)(t+T/2)} - e^{-i2\pi(f-f_0)(t-T/2)} \right] \hfill (7)

\displaystyle = \frac{A_0 e^{i\phi_0}}{-i2\pi (f-f_0)} e^{-i 2 \pi (f-f_0)t} \left[ e^{-i2\pi(f-f_0)T/2} - e^{i2\pi(f-f_0)T/2} \right] \hfill (8)

Now, we can use Euler’s formula to simplify the term in the square brackets,

\displaystyle e^{-ix} - e^{ix} = (\cos(x) -i\sin(x)) - (\cos(x) + i\sin(x)) = -2i\sin(x) \hfill (9)

Application of (9) in (8) leads to

\displaystyle X_T(t,f) = \frac{A_0 e^{i\phi_0}}{-i2\pi (f-f_0)} e^{-i 2 \pi (f-f_0)t} \left( -2i \sin(2\pi[f-f_0]T/2) \right) \hfill (10)

\displaystyle = TA_0 e^{i\phi_0} e^{-i 2 \pi (f-f_0)t} \left[ \frac{\sin(\pi[f-f_0]T)}{\pi (f-f_0)T} \right] \hfill (11)

We recognize the function in the square brackets as the sinc function, which is defined by

\displaystyle {\rm sinc}(x) = \frac{\sin(\pi x)}{\pi x} \hfill (12)

so that our time-varying Fourier transform is given by

\displaystyle X_T(t,f) = TA_0e^{i \phi_0} e^{-i 2 \pi (f-f_0)t} {\rm \ sinc}([f-f_0]T) \hfill (13)

For f = f_0, {\rm sinc}([f-f_0]T) = {\rm sinc}(0) = 1, so the result just above (6) is consistent with (13), meaning (13) is the correct result for all f, including f= f_0. The magnitude of this Fourier transform is plotted in the following figure:

Figure 1.

The figure indicates that the transform peaks at the correct frequency, but that the transform is also non-zero for a great many other frequencies–frequencies that are not present in the tone itself. This frequency response arises from the finite-time window of the transform; we know that the Fourier transform (which is taken over all time) of the noiseless tone is an impulse function centered at f = f_0 (here, f_0 = 0.05).
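The derivation is easy to check numerically. Here is a minimal discrete-time sketch (the values of T, f_0, A_0, and \phi_0 are arbitrary choices of mine) verifying that the magnitude of the finite-time transform follows the T\,{\rm sinc}([f-f_0]T) envelope of (13), with its first null at f_0 + 1/T:

```python
import numpy as np

# Discrete-time check of equation (13): for a complex tone, the magnitude of
# the finite-time Fourier transform over T samples is approximately
# T * |sinc((f - f0) T)|.  (In discrete time the exact kernel is the Dirichlet
# kernel, which approaches T*sinc for |f - f0| << 1.)
T, f0, A0, phi0 = 256, 0.05, 1.0, 0.7
n = np.arange(T)
x = A0 * np.exp(1j * (2 * np.pi * f0 * n + phi0))

def X_T(f):
    """Finite-time Fourier transform of x at frequency f (rectangular window)."""
    return np.sum(x * np.exp(-2j * np.pi * f * n))

# Peak response at f = f0 is A0 * T, as derived just above equation (6).
assert abs(abs(X_T(f0)) - A0 * T) < 1e-9

# Off-peak response follows the sinc envelope of (13).
f = f0 + 0.002
predicted = A0 * T * abs(np.sinc((f - f0) * T))   # np.sinc(x) = sin(pi x)/(pi x)
assert abs(abs(X_T(f)) - predicted) / predicted < 0.01

# The first null of the main lobe sits at f0 + 1/T, so the main lobe is 2/T wide.
assert abs(X_T(f0 + 1.0 / T)) < 1e-9
```

The same check with T doubled shows the main lobe narrowing by a factor of two, which is the essence of the 1/T resolution rule.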

Now, the periodogram is given by

\displaystyle I_T(t, f) = \frac{1}{T} \left| X_T(t, f) \right|^2 \hfill (14)

so that the periodogram for our noiseless tone is simply

\displaystyle I_T(t,f) = \frac{1}{T} A_0^2 T^2 {\rm \ sinc}^2([f-f_0]T) \hfill (15)

\displaystyle = A_0^2 T {\rm \ sinc}^2 ([f-f_0]T) \hfill (16)

This periodogram is plotted in the following figure:

Figure 2.

We see that the width of the main lobe of this function is 2/T.

Now, let’s get back to talking about spectral resolution. What is the periodogram for the case of a signal consisting of two tones? The signal model is

\displaystyle x(u) = A_0e^{i2\pi f_0 u + i\phi_0} + A_1e^{i2\pi f_1 u + i\phi_1} \hfill (17)

The Fourier integral is linear, so we can immediately generalize our previous result for X_T(t, f) to

\displaystyle X_T(t, f) = \sum_{j=0}^1 A_j T e^{i\phi_j} e^{-i2\pi (f-f_j)t} {\rm \ sinc}([f-f_j]T) \hfill (18)

and we can use this expression for X_T in the definition of the periodogram to obtain

\displaystyle I_T(t,f) = \frac{1}{T} \left| X_T(t,f) \right|^2

\displaystyle = T \sum_{j=0}^1 A_j^2 {\rm \ sinc}^2([f-f_j]T) + 2 \Re \left[ A_0A_1Te^{i(\phi_0 - \phi_1)} e^{-i2\pi(f_1 - f_0)t} {\rm\ sinc}([f-f_0]T) {\rm\ sinc}([f-f_1]T) \right] \hfill (19)

We see there is a sinc function centered at f_0 and another one centered at f_1. But there is also a term representing a mixture of the two sinc functions. The question becomes: Under what conditions are the sinc-peaks for the two frequencies f_0 and f_1 separable (distinguishable)?

In the case of |f_0 - f_1| > 2/T, the centers of the two squared sinc functions are separated by more than the width of the main lobe of each one, and so should be distinguishable. On the other hand, when |f_0 - f_1| \ll 2/T, the centers of the two squared sincs are close relative to their widths, and there is no hope of distinguishing their individual contributions.

What about in between?

The squared-sinc terms represent the responses to each tone as if it were the only tone present, and do not depend on time t. The third term is a mixture of responses to the two tones, and depends on t. The worst case, in the context of spectral resolution, is when e^{i(\phi_0 - \phi_1)} = 1 and e^{-i2\pi (f_1-f_0)t} = 1, so that the interference term becomes

\displaystyle 2A_0 A_1 T {\rm \ sinc}([f-f_0]T) {\rm \ sinc} ([f-f_1]T). \hfill (20)

The best case is when the product e^{i(\phi_0 - \phi_1)} e^{-i2\pi (f_1 - f_0)t} is purely imaginary, in which case the real part of the interference term is zero and we’re left with the sum of the two individual responses.

We can numerically evaluate these two extremes and plot the results to determine, approximately, the resolvability of the two tones as a function of their difference and the observation interval length T. In the following, f_0 = 0.2 and f_1 = 0.202, so that the difference between the two frequencies is 0.002 \approx 1/512, which is important because we consider T = 256, 512, 1024, \ldots in our periodogram computations.

First, the best-case version of the periodogram formula (19):

Figure 3.

Here we see that two distinct peaks in the periodogram are visible for T \ge 512, which is about the reciprocal of the difference between the two frequencies, indicating that the best-case periodogram for T seconds can resolve tones with frequency separation of about 1/T. For T smaller than 512, we see only a single bump in the graph–no way to tell there are two tones present, so they are unresolved.

The worst-case version of formula (19) is computed and graphed in the following figure:

Figure 4.

Here the two tones show two distinct peaks for T \ge 1024 rather than T \ge 512, indicating that the resolution of the worst-case periodogram is a little worse than that for the best case. Quelle surprise.

Summing up, the resolution of the periodogram is about 1/T, but might be as bad as about 2/T.
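The best- and worst-case claims can be checked directly from formula (19). Here is a minimal sketch, using the same parameters as the figures (f_0 = 0.2, f_1 = 0.202, equal unit amplitudes); the frequency grid and the half-maximum peak criterion are my choices, not the post’s:

```python
import numpy as np

# Numerical sketch of the best- and worst-case two-tone periodograms in (19),
# with equal amplitudes A0 = A1 = 1 and f0 = 0.2, f1 = 0.202.
f0, f1 = 0.2, 0.202
f = np.linspace(0.19, 0.212, 20001)

def periodogram_two_tone(T, worst_case):
    s0 = np.sinc((f - f0) * T)          # np.sinc(x) = sin(pi x)/(pi x)
    s1 = np.sinc((f - f1) * T)
    I = T * (s0**2 + s1**2)
    if worst_case:                       # cross term (20) at full strength
        I += 2.0 * T * s0 * s1
    return I

def count_prominent_peaks(I):
    """Count local maxima that rise above half the global maximum."""
    interior = (I[1:-1] > I[:-2]) & (I[1:-1] > I[2:]) & (I[1:-1] > 0.5 * I.max())
    return int(np.count_nonzero(interior))

# Best case with plenty of data: the two tones appear as two distinct peaks.
assert count_prominent_peaks(periodogram_two_tone(2048, worst_case=False)) == 2
# Worst case with too little data: the tones merge into a single bump.
assert count_prominent_peaks(periodogram_two_tone(256, worst_case=True)) == 1
```

Sweeping T over 256, 512, 1024, … in this sketch reproduces the transition from one merged bump to two resolved peaks seen in Figures 3 and 4.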

Spectral Resolution in Non-Parametric Spectrum Estimators

Now that we know something about the spectral resolution of the periodogram, and since we know how the periodogram is related to important spectrum estimators, we can discuss the spectral resolution of these estimators.

For the frequency-smoothing method of PSD estimation, the periodogram is convolved with a smoothing function g_{\Delta f}(f), which has unit area and width \Delta f. If \Delta f is greater than the native periodogram resolution 1/T, then the resolution of the PSD estimator will be approximately \Delta f.

For the time-smoothing method of PSD estimation, multiple short-time periodograms are averaged over time. If M blocks of data, each having duration T_{tsm}, are averaged, then the spectral resolution is the spectral resolution of the periodograms, or about 1/T_{tsm}. We can force the frequency-smoothing and time-smoothing methods to produce similar estimates by using T = M T_{tsm} in the TSM and \Delta f \approx 1/T_{tsm} in the FSM.

Spectral Resolution in the Cyclic Periodogram

It is probably intuitive for my readers to see that the spectral resolution of the cyclic periodogram is about equal to the spectral resolution of the periodogram. In fact, the (non-conjugate) cyclic periodogram is identically equal to the periodogram for \alpha = 0, in which case the resolution properties match exactly.

Spectral Resolution in Non-Parametric Spectral Correlation Estimators

Like the spectral estimators, the spectral correlation estimators will have spectral resolution that is determined by the averaging that is applied to the cyclic periodogram. For the frequency-smoothing method, it will be approximately the width of the smoothing kernel function, and for the time-smoothing method, it will be the reciprocal of the periodogram length T_{tsm}.

But let’s verify and illustrate these claims before leaving the topic of spectral resolution and moving to temporal and cycle-frequency resolution. Here it is harder to illustrate the situation using tone inputs (why?), so we look at a signal that has significant variation over frequency for both its spectrum and its spectral correlation function: the direct-sequence spread-spectrum BPSK signal. Let’s consider a DSSS BPSK signal with chip rate 1/3 (normalized frequencies are used, which is typical on the CSP Blog), processing gain (code length) of 63, and a square-root raised-cosine pulse function. The data rate is (1/3)/63.
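For readers who want to reproduce the experiments, here is a minimal sketch of how such a signal can be generated. The spreading code is drawn at random and the square-root raised-cosine rolloff (0.35) is an assumption of mine; the post specifies neither:

```python
import numpy as np

# Minimal sketch of a DSSS BPSK signal: chip rate 1/3 (3 samples per chip),
# code length 63, square-root raised-cosine (SRRC) chip pulse.  The spreading
# code is random +/-1 and the rolloff 0.35 is an assumption.
rng = np.random.default_rng(7)
sps, code_len, n_bits, beta = 3, 63, 400, 0.35

def srrc_pulse(beta, sps, span):
    """Square-root raised-cosine pulse, unit energy, `span` chips per side."""
    t = np.arange(-span * sps, span * sps + 1) / sps   # time in chip periods
    h = np.empty_like(t)
    zero = t == 0
    h[zero] = 1.0 - beta + 4.0 * beta / np.pi          # closed form at t = 0
    tn = t[~zero]
    h[~zero] = ((np.sin(np.pi * tn * (1 - beta))
                 + 4 * beta * tn * np.cos(np.pi * tn * (1 + beta)))
                / (np.pi * tn * (1 - (4 * beta * tn) ** 2)))
    return h / np.sqrt(np.sum(h ** 2))

code = rng.choice([-1.0, 1.0], size=code_len)      # +/-1 spreading code
bits = rng.choice([-1.0, 1.0], size=n_bits)        # +/-1 data bits
chips = np.kron(bits, code)                        # spread: one code per bit
ups = np.zeros(chips.size * sps)
ups[::sps] = chips                                 # upsample to 3 samples/chip
x = np.convolve(ups, srrc_pulse(beta, sps, span=8), mode='same')

# Quick cyclostationarity check: for real x the lag product x(t)x*(t) = x(t)^2
# (tau = 0) has a strong Fourier component at the chip-rate cycle frequency
# 1/3, and almost nothing at a nearby non-cycle frequency.
n = np.arange(x.size)
caf0 = lambda alpha: abs(np.mean(x**2 * np.exp(-2j * np.pi * alpha * n)))
assert caf0(1.0 / 3.0) > 5.0 * caf0(0.2945)
```

The final assertion is a crude stand-in for the spectral correlation surfaces shown below: the chip-rate feature is present, and an arbitrary nearby frequency is not a cycle frequency.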

We start with estimates of the PSD, the chip-rate non-conjugate SCF, and the data-rate non-conjugate SCF using a very long version of the signal and a very small smoothing-kernel width in the FSM. This produces a set of reference (high-fidelity) estimates useful for later comparison with other estimates. Here are the reference estimates:

Figure 5.

Note here that the three SCF estimates have several very narrow spectral features (nulls). Unlike the case of the periodogram, where we looked at the ability to distinguish two peaks in the spectrum, here we look at the ability to accurately estimate a narrow valley. Either way, we are trying to resolve something.

First, let’s look at a sequence of FSM estimates, where we vary the width of the smoothing kernel \Delta f:

Figure 6.

By studying these plots, you can see that the basic claim is verified: The spectral resolution is approximately equal to the width of the smoothing kernel. That is probably unsurprising and somewhat obvious to most readers, who are likely quite familiar with convolution and its smearing/smoothing effects. Let’s repeat this experiment, then, with the TSM, which employs no convolution at all:

Figure 7.

Temporal Resolution in Non-Parametric CSP Estimators

The temporal resolution \Delta t quantifies the ability of the estimate to distinguish statistically distinct probabilistic parameters over time. In other words, if the signal changes its character over time, can the estimator reflect that?

Temporal resolution is a little hard to talk about in the context of our usual definitions of parameters such as the cyclic autocorrelation or spectral correlation function, because those definitions involve averaging over an infinite amount of time. If we consider finite-time estimates, and also consider a sequence of such estimates arising from processing a sequence of non-overlapping adjacent contiguous data blocks, then we can consider the ability of the sequence of estimates to resolve any changes in the parameter over time.

From this point of view, it seems clear that the temporal resolution \Delta t is simply the block length used in creating each estimate in the sequence of estimates. Temporal variations happening faster than a block length will tend to be obscured by the averaging over the block’s samples, whereas temporal variations happening slower than a block length have the chance to be resolved by comparing the algorithm outputs for the different blocks.
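A toy numerical example of this idea, under an assumption of mine that the time variation is a simple power step in noise:

```python
import numpy as np

# Sketch of temporal resolution via block processing: a noise signal doubles
# its amplitude halfway through.  Block-wise power estimates with block length
# much shorter than the change interval track the change; a single long-block
# estimate averages it away.
rng = np.random.default_rng(3)
N, block = 65536, 1024
x = rng.standard_normal(N)
x[N // 2:] *= 2.0                       # power steps from 1 to 4 at N/2

powers = (x ** 2).reshape(-1, block).mean(axis=1)   # one estimate per block

# Blocks before the change see unit power; blocks after see power 4.
assert abs(powers[: N // (2 * block)].mean() - 1.0) < 0.2
assert abs(powers[N // (2 * block):].mean() - 4.0) < 0.8
# A single block spanning all the data reports only the average power 2.5,
# obscuring the temporal structure.
assert abs((x ** 2).mean() - 2.5) < 0.2
```

The same logic carries over to block-wise cyclic-feature estimates: \Delta t is the block length, and anything changing faster than \Delta t is averaged away.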

Cycle-Frequency Resolution in the Context of the Cyclic Autocorrelation Function

Now let’s turn to resolution in cycle frequency. In particular, let’s first look at the cyclic autocorrelation function. I’ll be focusing on the non-conjugate CAF, but all of what follows is applicable to the conjugate CAF too.

Recall that the non-conjugate CAF is defined as a Fourier coefficient in the Fourier series expansion of the time-variant autocorrelation function. It is also equal to the following limit

\displaystyle R_x^\alpha (\tau) = \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t+\tau/2)x^*(t-\tau/2)e^{-i2\pi\alpha t} \, dt \hfill (21)

A natural and effective estimator is simply the finite-time version of this idealized average.

The lag product x(t+\tau/2)x^*(t-\tau/2) must have a representation of the form

\displaystyle x(t+\tau/2)x^*(t-\tau/2) = R_x^\alpha(\tau)e^{i2\pi\alpha t} + e(t, \tau, \alpha), \hfill (22)

where the error term e(t, \tau, \alpha) averages to zero,

\displaystyle \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} e(t, \tau, \alpha) e^{-i2\pi\alpha t} \, dt = 0. \hfill (23)

But this kind of representation must also be true for each cycle frequency \alpha corresponding to a non-zero cyclic autocorrelation function (21). Therefore the lag product must have a representation of the form

\displaystyle x(t+\tau/2)x^*(t-\tau/2) = R_x(t, \tau) + E(t, \tau) \hfill (24)

where

\displaystyle R_x(t,\tau) = \sum_\alpha R_x^\alpha(\tau) e^{i2\pi\alpha t} \hfill (25)

and

\displaystyle \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} E(t, \tau) e^{-i2\pi\alpha t} \, dt = 0. \hfill (26)

So an estimate of the cyclic autocorrelation function for cycle frequency \alpha using T seconds of data is

\displaystyle \hat{R}_x^\alpha(\tau) = \frac{1}{T} \int_{-T/2}^{T/2} \left[ R_x(t, \tau) + E(t, \tau) \right] e^{-i2\pi\alpha t} \, dt \hfill (27)

Neglecting E(t, \tau) (because we’re interested in resolution not reliability here) by, say, assuming T is large, we have

\displaystyle \hat{R}_x^\alpha(\tau) \approx \frac{1}{T} \int_{-T/2}^{T/2} \left( \sum_\beta R_x^\beta(\tau) e^{i2\pi\beta t} \right) e^{-i2\pi\alpha t} \, dt \hfill (28)

\displaystyle = \frac{1}{T} \sum_\beta R_x^\beta(\tau) \int_{-T/2}^{T/2} e^{-i2\pi(\alpha - \beta)t} \, dt. \hfill (29)

If \alpha = \beta, the integral is equal to T, which is satisfying. Otherwise, let’s evaluate the integral. We have

\displaystyle \int_{-T/2}^{T/2} e^{-i 2\pi (\alpha - \beta)t} \, dt = \left. \frac{e^{-i2\pi(\alpha-\beta)t}}{-i2\pi(\alpha-\beta)} \right|_{t=-T/2}^{T/2}

\displaystyle = \left( \frac{1}{-i2\pi(\alpha-\beta)} \right) \left[ e^{-i2\pi(\alpha-\beta)T/2} - e^{i2\pi(\alpha-\beta)T/2} \right]

\displaystyle = \frac{\sin(\pi[\alpha-\beta]T)}{\pi (\alpha-\beta)} = T{\rm sinc}([\alpha-\beta]T). \hfill (30)

Turning back to the CAF estimate, we now have

\displaystyle \hat{R}_x^\alpha(\tau) \approx R_x^\alpha(\tau) + \sum_{\beta, \beta\neq\alpha} R_x^\beta(\tau) {\rm sinc}([\alpha-\beta]T) \hfill (31)

When |\alpha-\beta|T \gg 1, we have {\rm sinc}([\alpha-\beta]T) \ll 1. This means the contributions from the other cycle frequencies are small when

\displaystyle |\alpha - \beta| \gg \frac{1}{T}. \hfill (32)

So our first cycle-frequency resolution result is that individual components of the autocorrelation function can be distinguished from each other—resolved—provided that the minimum separation between cycle frequencies is greater than the reciprocal of the data-record length T used to make the CAF estimates.

In particular, the cycle frequency of zero is always a non-conjugate cycle frequency (because we are always considering the class of signals called power signals in CSP). To detect the presence of a small cycle frequency \alpha, we must process sufficient data to force the contribution from the nearby \beta = 0 term to be small. The approximate amount of sufficient data in this case is \frac{1}{|\alpha - 0|} = \frac{1}{\alpha}.
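This \beta = 0 leakage can be demonstrated with a tiny deterministic sketch: for a constant-power signal (only the \alpha = 0 cycle frequency present), the finite-time CAF estimate at a small candidate cycle frequency \alpha follows the {\rm sinc}(\alpha T) leakage term in (31):

```python
import numpy as np

# Deterministic illustration of the beta = 0 leakage in (31): for a signal
# with constant power (only the alpha = 0 cycle frequency present), the
# finite-time CAF estimate at tau = 0 and small candidate alpha behaves like
# |R_x^0(0)| * |sinc(alpha*T)| -- large unless T >> 1/alpha.
alpha = 0.001

def caf_est(T):
    n = np.arange(T)
    x = np.ones(T)                       # unit-power signal, R_x^0(0) = 1
    return abs(np.mean(x * np.conj(x) * np.exp(-2j * np.pi * alpha * n)))

# T = 500 gives alpha*T = 0.5: the zero-frequency term leaks heavily, and a
# detector could wrongly report a cycle frequency at alpha = 0.001.
assert caf_est(500) > 0.5
# T = 10000 gives alpha*T = 10: the leakage has collapsed.
assert caf_est(10000) < 0.01
```

In other words, to trust a CAF estimate at a cycle frequency \alpha near zero, process at least on the order of 1/\alpha samples.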

Let’s illustrate the CAF cycle-frequency resolution with an example involving two cochannel BPSK signals. Their bit rates are quite close: 0.1 and 0.1001. Their power levels are equal and their carrier frequencies are 0.002 and 0.0. Here is a PSD estimate for the observed sum of these two signals:

Figure 8.

First, let’s look at the CAF maxima for a fixed block length of T = 16384 samples. Here we estimate the CAF for a set of uniformly spaced cycle frequencies with separation \Delta A. The fixed value of T is sufficient to resolve the two bit rate cycle frequencies, since their separation 0.0001 is larger than 1/T = 6.1e-5. However, this cycle-frequency resolution must also be accompanied by a correspondingly fine grid of cycle frequencies (unless the bit rates are known in advance, but let’s pretend they aren’t here). Once the grid spacing \Delta A is approximately equal to the cycle-frequency resolution \Delta\alpha \approx 1/T, both cycle frequencies are seen.

Figure 9.

Next, consider variable T, but fix the grid spacing at the cycle-frequency resolution, \Delta A = \Delta\alpha = 1/T:

Figure 10.

In each plot, we see at least one prominent feature. For 1/T greater than the separation 0.0001 between the two cycle frequencies, the two features get lumped together. Once 1/T is sufficiently small, the two cycle frequencies are resolved.

Cycle-Frequency Resolution in the Context of the Spectral Correlation Function

Let’s consider the time-smoothing method of spectral correlation estimation in discrete time and discrete frequency.  The theoretical SCF can be obtained by first averaging over all time, and then increasing the periodogram block length without bound:

\displaystyle S_x^\alpha(f) = \lim_{Z\rightarrow\infty} \lim_{M\rightarrow\infty} \frac{1}{M} \sum_{m=0}^{M-1} I_Z^A(m,f) e^{-i 2 \pi \alpha m Z} \hfill (33)

where A = \lfloor \alpha Z \rceil is the number of FFT bins closest to the desired cycle frequency \alpha and I_Z^A(m,f) is the cyclic periodogram:

\displaystyle I_Z^A(m,f) = \frac{1}{Z} X_Z(m,f+A/2) X_Z^*(m,f-A/2) \hfill (34)

(assume A is even for simplicity in this exposition).

Now, for the double limit in (33) to hold true, the cyclic periodogram must have a representation of the form

\displaystyle I_Z^A (m,f) = \left[d_Z(f,\alpha)\otimes S_x^\alpha(f) \right] e^{i2\pi \alpha m Z} + r(f, \alpha, M, m, Z) \hfill (35)

such that

\displaystyle \lim_{M\rightarrow\infty} \frac{1}{M} \sum_{m=0}^{M-1} r(f, \alpha, M, m, Z) e^{-i2\pi\alpha m Z} = 0 \hfill (36)

and

\displaystyle \lim_{Z\rightarrow\infty} d_Z(f, \alpha) = \delta(f). \hfill (37)

But consider two distinct cycle frequencies \alpha_1 and \alpha_2 that map to the same sequence of cyclic periodograms I_Z^A(m,f). This only requires that the difference between the two cycle frequencies be significantly smaller than the FFT bin width 1/Z.

The TSM spectral correlation function estimates for \alpha_1 and \alpha_2 converge to the proper SCFs even though they use exactly the same sequence of cyclic periodograms, because of the effective filtering performed by the multiplication by the complex sine wave e^{-i2\pi\alpha_{j} m Z} for j = 1, 2. So we have the representation

\displaystyle I_Z^A(m,f) = \sum_{k=1}^2 \left[ d_Z(f,\alpha_k) \otimes S_x^{\alpha_k}(f) \right] e^{i2 \pi \alpha_k m Z} + r(f, M, m, Z) \hfill (38)

with

\displaystyle \lim_{M\rightarrow\infty} \frac{1}{M} \sum_{m=0}^{M-1} r(f,M,m, Z) e^{-i 2 \pi \alpha_k m Z} = 0. \hfill (39)

As a check, let’s verify that the double limit holds for \alpha_1 and \alpha_2.

\displaystyle W(\alpha_1) = \lim_{Z\rightarrow\infty} \lim_{M\rightarrow\infty} \frac{1}{M} \sum_{k=1}^2 \sum_{m=0}^{M-1} \left[ d_Z(f, \alpha_k) \otimes S_x^{\alpha_k}(f) \right] e^{i2\pi\alpha_k m Z} e^{-i2\pi\alpha_1 m Z} \hfill (40)

\displaystyle = \lim_{Z\rightarrow\infty} \lim_{M\rightarrow\infty} \frac{1}{M} \left( \sum_m \left[d_Z(f,\alpha_1) \otimes S_x^{\alpha_1}(f)\right] e^0 + \sum_m \left[d_Z(f,\alpha_2)\otimes S_x^{\alpha_2}(f)\right] e^{i2\pi(\alpha_2-\alpha_1)mZ} \right) \hfill (41)

\displaystyle = S_x^{\alpha_1}(f) + \lim_{Z\rightarrow\infty} \lim_{M\rightarrow\infty} \sum_{m=0}^{M-1} \frac{1}{M} \left[ d_Z(f,\alpha_2)\otimes S_x^{\alpha_2}(f)\right]e^{i2\pi(\alpha_2-\alpha_1)mZ} \hfill (42)

\displaystyle = S_x^{\alpha_1}(f) + \left(\lim_{Z\rightarrow\infty} \left[d_Z(f,\alpha_2)\otimes S_x^{\alpha_2}(f) \right] \lim_{M\rightarrow\infty}\frac{1}{M} \sum_{m=0}^{M-1} e^{i2\pi(\alpha_2-\alpha_1)mZ} \right) \hfill (43)

So, keeping in mind that \alpha_1 \neq \alpha_2, what is the function

\displaystyle \lim_{M\rightarrow\infty} \frac{1}{M} \sum_{m=0}^{M-1} e^{i 2 \pi (\alpha_2 - \alpha_1)mZ}

equal to? We recognize this as a geometric series of the form

\displaystyle \sum_{m=0}^{M-1} a^m

with a = e^{i2\pi(\alpha_2-\alpha_1)Z}. Such geometric series are easy to evaluate for finite M,

\displaystyle \sum_{m=0}^{M-1} a^m = \frac{1-a^M}{1-a} \hfill (44)

so that

\displaystyle \lim_{M\rightarrow\infty} \frac{1}{M} \sum_{m=0}^{M-1} e^{i 2 \pi (\alpha_2 - \alpha_1)mZ} = \lim_{M\rightarrow\infty} \frac{1}{M} \left( \frac{1-e^{i2\pi(\alpha_2-\alpha_1)MZ}}{1 - e^{i2\pi(\alpha_2-\alpha_1)Z}} \right) = 0. \hfill (45)


Therefore

\displaystyle W(\alpha_1) = S_x^{\alpha_1}(f) \hfill (46)

and by a symmetry argument the same is true for \alpha_2.

Now consider the general case where N_\alpha distinct cycle frequencies \{\alpha_k\}_{k=1}^{N_\alpha} all map to the same sequence of cyclic periodograms in the TSM spectral correlation estimator. Our representation of the cyclic periodogram is then

\displaystyle I_Z^A (m,f) = \sum_{k=1}^{N_\alpha} \left[ d_Z(f,\alpha_k) \otimes S_x^{\alpha_k}(f) \right] e^{i2\pi \alpha_k m Z} + r(f, M, m, Z) \hfill (47)

The “noise” term r(f, M, m, Z) does not relate to resolution, but to the variability (errors) in the estimate, so we will neglect it from now on. We have the cyclic periodogram representation

\displaystyle I_Z^A(m,f) = \sum_{k=1}^{N_\alpha} \left[ d_Z(f,\alpha_k) \otimes S_x^{\alpha_k}(f) \right] e^{i2\pi \alpha_k m Z} \hfill (48)

or, in terms of the desired spectral correlation estimate

\displaystyle \hat{S}_x^\alpha (f) = \frac{1}{M} \sum_{m=0}^{M-1} I_Z^A(m,f) e^{-i 2\pi \alpha m Z} \hfill (49)

where \alpha \in \{\alpha_k\}_{k=1}^{N_\alpha}.

If the combined contributions to \hat{S}_x^\alpha(f) from S_x^{\alpha_k}(f), \alpha_k \neq \alpha, are small compared to |S_x^\alpha(f)|, then we can say we’ve resolved the spectral correlation function for cycle frequency \alpha in the cycle-frequency dimension. So, when are those contributions small?

\displaystyle \hat{S}_x^\alpha (f) = d_Z(f, \alpha)\otimes S_x^\alpha(f) + \sum_{\alpha_k\neq \alpha} \left[ d_Z(f,\alpha_k) \otimes S_x^{\alpha_k}(f) \right] \frac{1}{M} \sum_{m=0}^{M-1} e^{i2\pi(\alpha_k-\alpha)mZ} \hfill (50)

The contribution to \hat{S}_x^{\alpha}(f) from S_x^{\alpha_k}(f), \alpha_k \neq \alpha, is governed by the following geometric series

\displaystyle L(\alpha_k) = \frac{1}{M} \sum_{m=0}^{M-1} e^{i2\pi (\alpha_k-\alpha)mZ},

which equals

\displaystyle L(\alpha_k) = \frac{1}{M} \left( \frac{1 - e^{i2\pi(\alpha_k-\alpha)MZ}}{1 - e^{i2\pi(\alpha_k-\alpha)Z}} \right) \hfill (51)

What is the width of this function, viewed as a function of the offset \alpha_k - \alpha? It is approximately 1/(MZ), the reciprocal of the total amount of data MZ processed in the time-smoothing method under consideration, which we call T:

Figure 11.

So if this simplified analysis is correct, we once again see that the approximate cycle-frequency resolution for a second-order CSP parameter is the reciprocal of the data-block length used to form the estimate.
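The kernel L(\alpha_k) in (51) can be evaluated directly; a minimal sketch (the values of M and Z are arbitrary choices of mine) confirming that its first null sits at an offset of 1/(MZ) = 1/T:

```python
import numpy as np

# Sketch of the cycle-frequency leakage kernel L in (51): its magnitude, as a
# function of the cycle-frequency offset, has its first null at 1/(M*Z), the
# reciprocal of the total data length T = M*Z.
M, Z = 128, 64
T = M * Z

def L_mag(delta_alpha):
    m = np.arange(M)
    return abs(np.mean(np.exp(2j * np.pi * delta_alpha * m * Z)))

assert abs(L_mag(0.0) - 1.0) < 1e-12          # zero offset: full response
assert L_mag(1.0 / T) < 1e-6                  # first null at offset 1/(MZ)
assert L_mag(0.5 / T) > 0.6                   # offsets << 1/T leak strongly
```

Note that only the product MZ matters here, not M and Z individually: trading block length for number of blocks leaves the cycle-frequency resolution unchanged.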

Let’s illustrate with some measurements.

First, we repeat the cyclic autocorrelation measurements above with the frequency-smoothing method:

Figure 12.
Figure 13.

Finally, we apply the strip spectral correlation analyzer to the single-BPSK and two-BPSK cases for various data-block lengths (recall that the two bit rates are 0.1 and 0.1001):

Figure 14.
Figure 15.

There are multiple dots at the SSCA output locations because the spectral coherence can exceed threshold for multiple values of spectral frequency f for each cycle frequency \alpha.

We see that once the block length T is equal to 8192 samples, the two distinct cycle frequencies are detected, which is consistent with our approximate analysis since 0.1001 - 0.1 = 0.0001 \approx 1/8192 = 0.000122.

Resolution Properties for Second-Order Cyclostationarity: The Take-Away

If you use T seconds of data to estimate the spectral correlation function, and your processing is a blind search for cycle frequencies, you must use a cycle-frequency search grid with grid fineness of about 1/T Hz. Otherwise, you run the risk of missing significant parts of the spectral correlation function.

Same for the cyclic autocorrelation function.

If you know a cycle frequency \alpha in advance, and want to detect its presence, then if the potential error in that knowledge exceeds 1/T, you may very well declare the absence of the feature when it is actually present. This becomes increasingly important as you increase T to combat in-band noise and interference.
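A two-line deterministic sketch of that last point, using a complex tone (whose conjugate lag product x(t)^2 = e^{i 4 \pi f_0 t} is a pure cycle at \alpha = 2 f_0); the values of f_0 and T are mine:

```python
import numpy as np

# If the assumed cycle frequency is off by more than about 1/T, the measured
# feature nearly vanishes.  The conjugate lag product of the complex tone
# x(t) = exp(i 2 pi f0 t) is x(t)^2 = exp(i 4 pi f0 t): a pure cycle at 2*f0.
f0, T = 0.05, 4096
n = np.arange(T)
x = np.exp(2j * np.pi * f0 * n)

def feature(alpha):
    """Magnitude of the finite-time conjugate CAF estimate at tau = 0."""
    return abs(np.mean(x * x * np.exp(-2j * np.pi * alpha * n)))

assert feature(2 * f0) > 0.999               # exact knowledge: full strength
assert feature(2 * f0 + 2.0 / T) < 1e-6      # error of 2/T: feature missed
```

Doubling T here halves the tolerable cycle-frequency error, which is exactly the tension described above.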

Resolution Parameters in the Context of Higher-Order Cyclic Cumulants and Cyclic Polyspectra

This topic is advanced–it is much harder to analyze, understand, and illustrate the resolution properties of higher-order cyclic moments, cumulants, and polyspectra. So, we’ll leave it for a future post.

As always, I invite you to leave a comment if you have a question, correction, or relevant experience.

Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.

2 thoughts on “Resolution in Time, Frequency, and Cycle Frequency for CSP Estimators”

  1. Hi Chad, Thanks for your great blog. It’s been very helpful in developing my own set of cyclostationary analysis tools. One thing I’m confused about is how the parameter choices of the SSCA affect the resolution. For the SSCA, the resolution product is N/N_p, which must be >> 1. The cycle-frequency resolution is 1/N, and the spectral-frequency resolution is 1/N_p. Suppose I have a block of N_b=N+N_p input samples. Then how should N and N_p be chosen for blind cyclic frequency detection? I’m using notation from [1]. On the signals I’ve tried, it sometimes seems difficult to get the right combination, especially for OFDM.

    I’m also interested in how averaging the SSCA bifrequency plane output for multiple, say K, blocks of N_b samples affects the estimate. Can something similar to Bartlett’s method be said in terms of decreasing the estimator variance by K, but the spectral resolution is also decreased by K? [2] I’ve also observed, on real over-the-air data, that you have to do non-coherent sums in the average.

    [1] April, Eric. On the Implementation of the Strip Spectral Correlation Algorithm for Cyclic Spectrum Estimation. Defence Research Establishment Technical Note 94-2, 1994.
    [2] Barbe et al., “Welch Method Revisited: Nonparametric Power Spectrum Estimation Via Circular Overlap,” IEEE Transactions on Signal Processing, Feb. 2010.

    1. Thanks for the thoughtful comment Paul. I attempt responses below.

      Suppose I have a block of N_b=N+N_p input samples. Then how should N and N_p be chosen for blind cyclic frequency detection?

      It seems that you’ve already chosen N and N_p through the equation for N_b. In general, if you have T samples, you process those T samples, which provides the cycle-frequency resolution of 1/T. You choose the number of channels N_c (also called strips) in the SSCA such that T/N_c is as large as possible. This forces you to choose small N_c. I’ve found that reliable cycle-frequency estimation can be done in a large number of cases by using a nominal N_c of 32 or 64 and using as large a T as I can.

      On the signals I’ve tried, it sometimes seems difficult to get the right combination, especially for OFDM.

      Real-world OFDM has time-varying cyclostationarity so that the cycle frequencies that you estimate can strongly depend on the particular data block that is processed. For very long blocks, you’ll see similar results for different blocks, but for shorter ones, you might have quite different results block-to-block.

      I’m also interested in how averaging the SSCA bifrequency plane output for multiple, say K, blocks of N_b samples affects the estimate.

      You mean to average the magnitude of the obtained SCF point estimates? You will get a non-coherent averaging gain for sure. Averaging the complex-valued SCF point estimates is more tricky, because the SCF phase depends on how a signal is delayed or advanced. So the averaging could end up destructive.

      I’ve also observed, on real over-the-air data, that you have to do non-coherent sums in the average.

      I’m not sure what you mean here. What real data? I find good results in straightforward application of the SSCA, FSM, and TSM to various kinds of captured communication signals such as WCDMA, LTE, CDMA, EVDO, ATSC-DTV. However, if the signal is bursty, and the bursts have random start times, then using a long data block containing multiple bursts can result in a small SCF due to the same kind of destructive interference I mentioned above. You essentially have multiple versions of the signal within the data block, and each has a different start time relative to the time origin. So short-block incoherent averaging has to be done here, or somehow the bursts have to be individually detected and aligned prior to CSP (not attractive).
