SCF Estimate Quality: The Resolution Product

The two non-parametric spectral-correlation estimators we’ve looked at so far, the frequency-smoothing and time-smoothing methods, require the choice of key estimator parameters. These are the total duration of the processed data block, T, and the spectral resolution, F.

For the frequency-smoothing method (FSM), an FFT with length equal to the data-block length T is required, and the spectral resolution is equal to the width F of the smoothing function g(f). For the time-smoothing method (TSM), multiple FFTs with lengths T_{tsm} = T / K are required, and the spectral resolution is F = 1/T_{tsm} (in normalized frequency units).

The choice for the block length T is partially guided by practical concerns, such as computational cost and whether the signal is persistent or transient in nature, and partially by the desire to obtain a reliable (low-variance) spectral correlation estimate. The choice for the frequency (spectral) resolution is typically guided by the desire for a reliable estimate.
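
To make these parameter choices concrete, here is a short MATLAB sketch of the bookkeeping that links T, F, and the number of averaged segments; the particular values of T and F below are assumptions for illustration only, not recommendations:

% Rough parameter bookkeeping for the FSM and TSM; all numbers are assumed values.
T = 65536;                        % total data-block length in samples
F = 0.01;                         % target spectral resolution (normalized frequency)

% FSM: one length-T FFT; the smoothing window g(f) spans about F*T FFT bins.
fsm_fft_length  = T;
fsm_window_bins = round(F * T);

% TSM: K shorter FFTs of length T_tsm = T/K; the resolution is 1/T_tsm.
T_tsm = round(1 / F);             % segment length chosen to match the resolution F
K     = floor(T / T_tsm);         % number of cyclic periodograms averaged over time

% The resolution product T*F controls the variance of the estimate: a large product
% (many smoothed bins, or many averaged segments) means a more reliable estimate.
resolution_product = T * F;
fprintf('FSM: %d-point FFT, %d-bin smoothing window\n', fsm_fft_length, fsm_window_bins);
fprintf('TSM: %d segments of length %d; resolution product %.0f\n', K, T_tsm, resolution_product);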

Continue reading

The Spectral Coherence Function

In this post I introduce the spectral coherence function, or just coherence. It deserves its own post because the coherence is a useful detection statistic for blindly determining significant cycle frequencies of arbitrary data records.

Let’s start by reviewing the standard correlation coefficient \rho defined for two random variables X and Y as

\rho = \displaystyle \frac{E[(X - m_X)(Y - m_Y)]}{\sigma_X \sigma_Y}, \hfill (1)

where m_X and m_Y are the mean values of X and Y, and \sigma_X and \sigma_Y are the standard deviations of X and Y. That is,

m_X = E[X] \hfill (2)

m_Y = E[Y] \hfill (3)

\sigma_X^2 = E[(X-m_X)^2] \hfill (4)

\sigma_Y^2 = E[(Y-m_Y)^2] \hfill (5)

So the correlation coefficient is the covariance between X and Y divided by the geometric mean of the variances of X and Y.
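
For readers who like to see the definitions in code, here is a small MATLAB sketch that estimates \rho from samples of two jointly distributed random variables; the Gaussian model and the designed correlation value of 0.7 are illustrative assumptions:

% Sample-based sketch of the correlation coefficient in Eq. (1) using simulated data.
N = 10000;
X = randn(N, 1);
Y = 0.7 * X + sqrt(1 - 0.7^2) * randn(N, 1);    % constructed to have correlation 0.7 with X

m_X = mean(X);  m_Y = mean(Y);                  % sample means, cf. Eqs. (2)-(3)
sigma_X = sqrt(mean((X - m_X).^2));             % sample standard deviations, cf. Eqs. (4)-(5)
sigma_Y = sqrt(mean((Y - m_Y).^2));

rho_hat = mean((X - m_X) .* (Y - m_Y)) / (sigma_X * sigma_Y);   % Eq. (1)
fprintf('Estimated rho = %.3f (designed value 0.7)\n', rho_hat);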

Continue reading

CSP Estimators: The Time Smoothing Method

In a previous post, we introduced the frequency-smoothing method (FSM) of spectral correlation function (SCF) estimation. The FSM convolves a pulse-like smoothing window g(f) with the cyclic periodogram to form an estimate of the SCF. An advantage of the method is that it allows fine control over the spectral resolution of the SCF estimate through the choice of g(f), but the drawbacks are that it requires a Fourier transform as long as the data record being processed, and the convolution can be expensive. However, the expense of the convolution can be mitigated by using a rectangular g(f).

In this post, we introduce the time-smoothing method (TSM) of SCF estimation. Instead of averaging (smoothing) the cyclic periodogram over spectral frequency, multiple cyclic periodograms are averaged over time. When the non-conjugate cycle frequency of zero is used, this method produces an estimate of the power spectral density, and is essentially the Bartlett spectrum estimation method. The TSM can be found in My Papers [6] (Eq. (54)) and in other places in the literature.
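
To fix ideas, here is a minimal MATLAB sketch of the time-smoothing idea for a single non-conjugate cycle frequency; the rectangular-pulse BPSK test signal, the 256-sample segment length, and the other parameter values are assumptions chosen for illustration, not the estimator settings used elsewhere on the blog:

% TSM sketch: average cyclic periodograms of short segments over time.
nbits = 4096; sps = 10; N = nbits*sps; t = 0:N-1;
bits  = 2*round(rand(1, nbits)) - 1;                      % i.i.d. +/-1 symbols
x = kron(bits, ones(1, sps)) .* exp(1i*2*pi*0.05*t);      % rect-pulse BPSK, carrier offset 0.05
x = x + sqrt(0.5)*(randn(1, N) + 1i*randn(1, N));         % additive white Gaussian noise

alpha = 0.1;                                              % bit-rate (non-conjugate) cycle frequency
u = x .* exp(-1i*pi*alpha*t);                             % spectrum shifted down by alpha/2
v = x .* exp(+1i*pi*alpha*t);                             % spectrum shifted up by alpha/2
% Because the shifts use the global time index, the per-segment phase alignment
% needed by the TSM is built in automatically.

Tseg = 256; K = floor(N/Tseg); S = zeros(1, Tseg);
for k = 1:K
  idx = (k-1)*Tseg + (1:Tseg);
  U = fft(u(idx)); V = fft(v(idx));
  S = S + U .* conj(V) / Tseg;                            % cyclic periodogram of segment k
end
S = fftshift(S) / K;                                      % time-averaged estimate of S_x^alpha(f)
plot(linspace(-0.5, 0.5, Tseg), abs(S)); xlabel('f'); ylabel('|SCF estimate|');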

Continue reading

CSP Estimators: The Frequency-Smoothing Method

In this post I describe a basic estimator for the spectral correlation function (SCF): the frequency-smoothing method (FSM). The FSM is a way to estimate the SCF for a single value of cycle frequency. Recall from the basic theory of the cyclic autocorrelation and SCF that the SCF is obtained by infinite-time averaging of the cyclic periodogram or by infinitesimal-resolution frequency averaging of the cyclic periodogram. The FSM is merely a finite-time/finite-resolution approximation to the SCF definition.
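
As a rough illustration of that finite-time/finite-resolution approximation, here is a minimal MATLAB sketch of an FSM estimate at a single cycle frequency; the rectangular-pulse BPSK test signal, the resolution value, and the rectangular g(f) are assumptions for illustration, not the exact settings used in the full post:

% FSM sketch: smooth the full-block cyclic periodogram over spectral frequency.
nbits = 4096; sps = 10; N = nbits*sps; t = 0:N-1;
bits  = 2*round(rand(1, nbits)) - 1;
x = kron(bits, ones(1, sps)) .* exp(1i*2*pi*0.05*t);      % rect-pulse BPSK, carrier offset 0.05
x = x + sqrt(0.5)*(randn(1, N) + 1i*randn(1, N));

alpha = 0.1;                                              % bit-rate (non-conjugate) cycle frequency
U = fft(x .* exp(-1i*pi*alpha*t));                        % X(f + alpha/2), full-length FFT
V = fft(x .* exp(+1i*pi*alpha*t));                        % X(f - alpha/2)
I_alpha = fftshift(U .* conj(V)) / N;                     % cyclic periodogram of the whole block

F = 0.01;                                                 % desired spectral resolution
L = round(F * N);                                         % width of g(f) in FFT bins
g = ones(1, L) / L;                                       % rectangular smoothing window g(f)
S = conv(I_alpha, g, 'same');                             % frequency-smoothed SCF estimate
plot(linspace(-0.5, 0.5, N), abs(S)); xlabel('f'); ylabel('|SCF estimate|');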

One place the FSM can be found is in (My Papers [6]), where I introduce time-smoothed and frequency-smoothed higher-order cyclic periodograms as estimators of the cyclic polyspectrum. When the cyclic polyspectrum order is set to n = 2, the cyclic polyspectrum becomes the spectral correlation function, so the FSM discussed in this post is just a special case of the more general estimator in [6, Section VI.B].

Continue reading

Signal Selectivity

In this post I describe and illustrate the most important property of cyclostationary statistics: signal selectivity. The idea is that the cyclostationary parameters for a single signal can be estimated even when that signal is corrupted by strong noise and cochannel interferers. Cochannel means that the interferer occupies a frequency band that partially or completely overlaps the frequency band for the signal of interest.

A mixture of signals, whether cochannel or not, is modeled by the simple sum of the signals, as in

x(t) = s_1(t) + s_2(t) + \ldots + s_K(t) + w(t), \hfill (1)

where w(t) is additive noise. We can write this more compactly as

x(t) = \displaystyle \sum_{k=1}^K s_k(t) + w(t). \hfill (2)
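
Here is a tiny MATLAB sketch of such a mixture with K = 2 cochannel BPSK signals; the carrier offsets, symbol rates, and noise level are arbitrary assumptions chosen so that the two signal bands overlap:

% Cochannel mixture per Eq. (2): two overlapping BPSK signals plus noise.
N = 8192; t = 0:N-1;
s1 = kron(2*round(rand(1, N/8))  - 1, ones(1, 8))  .* exp(1i*2*pi*0.05*t);   % BPSK, 8 samples/bit
s2 = kron(2*round(rand(1, N/16)) - 1, ones(1, 16)) .* exp(1i*2*pi*0.02*t);   % BPSK, 16 samples/bit
w  = sqrt(0.5) * (randn(1, N) + 1i*randn(1, N));                             % additive white Gaussian noise
x  = s1 + s2 + w;                          % overlapping bands make this a cochannel mixture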

Continue reading

The Spectral Correlation Function for Rectangular-Pulse BPSK

In this post, I show the non-conjugate and conjugate spectral correlation functions (SCFs) for the rectangular-pulse BPSK signal we generated in a previous post. The theoretical SCF can be analytically determined for a rectangular-pulse BPSK signal with independent and identically distributed bits (see My Papers [6] for example or The Literature [R1]). The cycle frequencies are, of course, equal to those for the CAF for rectangular-pulse BPSK. In particular, for the non-conjugate SCF, we have cycle frequencies of \alpha = k f_{bit} for all integers k, and for the conjugate SCF we have \alpha = 2f_c \pm k f_{bit}.
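
For concreteness, here is a quick MATLAB check of these cycle-frequency formulas using the normalized parameters of the blog’s rectangular-pulse BPSK signal (f_{bit} = 0.1, f_c = 0.05); only a few harmonics are listed:

fbit = 0.1; fc = 0.05; k = -3:3;        % normalized bit rate and carrier offset, a few harmonic numbers
nonconjugate_cfs = k * fbit             % alpha = k*fbit
conjugate_cfs    = 2*fc + k * fbit      % alpha = 2*fc +/- k*fbit (k runs over both signs here)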

Continue reading

The Spectral Correlation Function

Spectral correlation is perhaps the most widely used characterization of the cyclostationarity property. The main reason is that the computational efficiency of the FFT can be harnessed to characterize the cyclostationarity of a given signal or data set in an efficient manner. And not just efficient, but with a reasonable total computational cost, so that one doesn’t have to wait too long for the result.

Just as the normal power spectrum is actually the power spectral density, or more accurately, the spectral density of time-averaged power (variance), the spectral correlation function is the spectral density of time-averaged correlation (covariance). What does this mean? Consider the following schematic showing two narrowband spectral components of an arbitrary signal:

[Figure: schematic of two narrowband spectral components of a signal, centered at f - A/2 and f + A/2, each with bandwidth B]

The sequence of shaded rectangles on the left is meant to imply a time-series corresponding to the output of a bandpass filter centered at f-A/2 with bandwidth B. Similarly, the sequence of shaded rectangles on the right implies a time-series corresponding to the output of a bandpass filter centered at f+A/2 with bandwidth B.
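
To make the schematic concrete, here is a minimal MATLAB sketch that extracts two narrowband components separated by A and time-averages their correlation, using a rectangular-pulse BPSK test signal; the band center, bandwidth, hand-rolled lowpass filter, and the two values of A are all assumptions for illustration:

% Correlate two narrowband spectral components of bandwidth B separated by A.
nbits = 8192; sps = 10; N = nbits*sps; t = 0:N-1;
x = kron(2*round(rand(1, nbits)) - 1, ones(1, sps)) .* exp(1i*2*pi*0.05*t);  % rect-pulse BPSK
x = x + sqrt(0.5)*(randn(1, N) + 1i*randn(1, N));

f = 0.05; B = 0.02;                             % band center and bandwidth
L = 257; n = -(L-1)/2:(L-1)/2;                  % windowed-sinc lowpass, two-sided bandwidth B
h = B * ones(1, L); nz = (n ~= 0);
h(nz) = sin(pi*B*n(nz)) ./ (pi*n(nz));
h = h .* (0.54 - 0.46*cos(2*pi*(0:L-1)/(L-1))); % Hamming window

for A = [0.1 0.07]                              % bit rate (a cycle frequency) vs. an arbitrary separation
  y1 = filter(h, 1, x .* exp(-1i*2*pi*(f + A/2)*t));   % component near f + A/2, shifted to baseband
  y2 = filter(h, 1, x .* exp(-1i*2*pi*(f - A/2)*t));   % component near f - A/2, shifted to baseband
  fprintf('A = %.2f: |time-averaged correlation| = %.4f\n', A, abs(mean(y1 .* conj(y2))));
end
% The average is clearly nonzero when A equals a cycle frequency and near zero otherwise.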

Continue reading

The Cyclic Autocorrelation for Rectangular-Pulse BPSK

The cyclic autocorrelation for rectangular-pulse BPSK can be derived as a relatively simple closed-form expression (see My Papers [6] for example or The Literature [R1]). It can be estimated in a variety of ways, which we will discuss in future posts. The non-conjugate cycle frequencies for the signal are harmonics of the bit rate, k f_{bit}, and the conjugate cycle frequencies are the non-conjugate cycle frequencies offset by the doubled carrier, or 2f_c + k f_{bit}.

Recall that the simulated rectangular-pulse BPSK signal has 10 samples per bit, or a bit rate of 0.1, and a carrier offset of 0.05, all in normalized units (meaning the sampling rate is unity). We’ve previously selected a sampling rate of 1.0 MHz to provide a little physical realism. This means the bit rate is 100 kHz and the carrier offset frequency is 50 kHz. From these numbers, we see that the non-conjugate cycle frequencies are k(100) kHz, and that the conjugate cycle frequencies are 2(50) + k(100) kHz, or (100 + 100k) kHz.
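
As a quick preview of such estimation, here is a minimal MATLAB sketch of a simple time-average estimate of the non-conjugate cyclic autocorrelation at one (cycle frequency, lag) pair; the data-record length, lag value, and noise level are assumptions for illustration:

% Time-average estimate of the non-conjugate cyclic autocorrelation at (alpha, tau).
nbits = 8192; sps = 10; N = nbits*sps; t = 0:N-1;
x = kron(2*round(rand(1, nbits)) - 1, ones(1, sps)) .* exp(1i*2*pi*0.05*t);  % fbit = 0.1, fc = 0.05
x = x + sqrt(0.5)*(randn(1, N) + 1i*randn(1, N));

alpha = 0.1; tau = 2;                                % bit-rate cycle frequency, lag of 2 samples
x_shift = circshift(x, [0 -tau]);                    % x(t + tau)
R_hat = mean(x_shift .* conj(x) .* exp(-1i*2*pi*alpha*t))
% Re-running with alpha = 0.11, which is not a cycle frequency, drives R_hat toward zero.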

Continue reading

The Cyclic Autocorrelation

In this post, I introduce the cyclic autocorrelation function (CAF). The easiest way to do this is to first review the conventional autocorrelation function. Suppose we have a complex-valued signal x(t) defined on a suitable probability space. Then the mean value of x(t) is given by

M_x(t, \tau) = E[x(t + \tau)]. \hfill (1)

For stationary signals, and many cyclostationary signals, this mean value is independent of the lag parameter \tau, so that

\displaystyle M_x(t, \tau_1) = M_x(t, \tau_2) = M_x(t, 0) = M_x(t). \hfill (2)

The autocorrelation function is the correlation between the random variables corresponding to two time instants of the random signal, or

\displaystyle R_x(t_1, t_2) = E[x(t_1)x^*(t_2)]. \hfill (3)
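
To connect the expectation in Eq. (3) to something computable, here is a small MATLAB sketch that approximates E[x(t_1) x^*(t_2)] by averaging over many independent realizations of a simple random signal; the BPSK-like signal model and the chosen time instants are assumptions for illustration:

% Ensemble-average sketch of Eq. (3): average x(t1) x*(t2) over many realizations.
M = 5000; N = 64; sps = 8;                % realizations, samples per realization, samples per bit
t1 = 20; t2 = 22;                         % two time instants within the same symbol interval
acc = 0;
for m = 1:M
  bits = 2*round(rand(1, N/sps)) - 1;     % fresh random bits for each realization
  x = kron(bits, ones(1, sps));           % rectangular-pulse baseband BPSK
  acc = acc + x(t1) * conj(x(t2));
end
R_hat = acc / M                           % equals 1 here; near 0 if t1, t2 fall in different symbols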

Continue reading

Creating a Simple CS Signal: Rectangular-Pulse BPSK

To test the correctness of various CSP estimators, we need a sampled signal with known cyclostationary parameters. Additionally, the signal should be easy to create and understand. A good candidate for this kind of signal is the binary phase-shift keyed (BPSK) signal with rectangular pulse function.

PSK signals with rectangular pulse functions have infinite bandwidth because the signal bandwidth is determined by the Fourier transform of the pulse, which is a sinc() function for the rectangular pulse. So the rectangular pulse is not terribly practical: infinite bandwidth is bad for other users of the spectrum. However, it is easy to generate, and its statistical properties are known.

So let’s jump in. The baseband BPSK signal is simply a sequence of binary (\pm 1) symbols convolved with the rectangular pulse. The MATLAB script make_rect_bpsk.m does this and produces the following plot:

[Figure: time-domain plot of the baseband rectangular-pulse BPSK signal]

The signal alternates between amplitudes of +1 and -1 randomly. After frequency shifting and adding white Gaussian noise, we obtain the power spectrum estimate:

[Figure: power spectrum estimate of the noisy, frequency-shifted rectangular-pulse BPSK signal]

The power spectrum plot shows why the rectangular-pulse BPSK signal is not popular in practice. The range of frequencies for which the signal possesses non-zero average power is infinite, so it will interfere with signals “nearby” in frequency. However, it is a good signal for us to use as a test input in all of our CSP algorithms and estimators.
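
For readers who want to experiment before downloading the script, here is a minimal MATLAB sketch of the generation steps described above (symbols, rectangular pulse, frequency shift, noise, and a crude averaged-periodogram PSD estimate); it is not the actual make_rect_bpsk.m script, and the parameter values are assumptions for illustration:

% Generate a rectangular-pulse BPSK signal and a rough PSD estimate.
nbits = 4096; sps = 10;                      % 10 samples per bit => bit rate 0.1
bits  = 2*round(rand(1, nbits)) - 1;         % i.i.d. +/-1 symbols
pulse = ones(1, sps);                        % rectangular pulse
bb    = kron(bits, pulse);                   % baseband BPSK: symbols convolved with the pulse

N = length(bb); t = 0:N-1;
x = bb .* exp(1i*2*pi*0.05*t);               % carrier offset of 0.05 (normalized)
x = x + sqrt(0.5)*(randn(1, N) + 1i*randn(1, N));   % additive white Gaussian noise

% Crude PSD estimate by averaging periodograms of 256-sample segments (Bartlett-style).
Tseg = 256; K = floor(N/Tseg); S = zeros(1, Tseg);
for k = 1:K
  seg = x((k-1)*Tseg + (1:Tseg));
  S = S + abs(fft(seg)).^2 / Tseg;
end
plot(linspace(-0.5, 0.5, Tseg), 10*log10(fftshift(S/K))); xlabel('f'); ylabel('PSD (dB)');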

The MATLAB script that creates the BPSK signal and the plots above is here. It is an m-file but I’ve stored it in a .doc file due to WordPress limitations I can’t yet get around.