Second-Order Estimator Verification Guide

Use this post to help check the accuracy of your second-order CSP estimators.

Update September 2022: New section on the non-conjugate and conjugate coherence function.

***

In this post I provide some tools for the do-it-yourself CSP practitioner. One of the goals of this blog is to help new CSP researchers and students write their own estimators and algorithms. This post contains spectral correlation function and cyclic autocorrelation function estimates, along with numerically evaluated formulas, that can be compared with the outputs of anybody's code.

The signal of interest is, of course, our rectangular-pulse BPSK signal with symbol rate 0.1 (normalized frequency units) and carrier offset 0.05. You can download a MATLAB script for creating such a signal here.

The formula for the SCF for a textbook BPSK signal is published in several places (The Literature [R47], My Papers [6]) and depends mainly on the Fourier transform of the pulse function used by the textbook signal.

We’ll compare the numerically evaluated spectral correlation formula with estimates produced by my version of the frequency-smoothing method (FSM). The FSM estimates and the theoretical functions are contained in a MATLAB .mat file here. (I had to change the extension from .mat to .doc to allow posting it to WordPress; change it back after downloading. As of 12/2/22 it is a zipped .mat file.) In all of the results shown here, and in the downloadable file, the processed data-block length is 65536 samples and the FSM smoothing width is 0.02 Hz. A rectangular smoothing window is used. For all cycle frequencies except zero (non-conjugate), a zero-padding factor of two is used in the FSM.

For the cyclic autocorrelation, we provide estimates using two methods: inverse Fourier transformation of the spectral correlation estimate and direct averaging of the second-order lag product in the time domain.
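
To make the comparison concrete, here is a minimal sketch of the direct time-domain cyclic-autocorrelation estimator (my own illustrative code, not the script used to produce the posted estimates); it assumes x is the complex rectangular-pulse BPSK signal described above and uses the asymmetric lag convention. The frequency-domain route is simply the inverse Fourier transform of an SCF estimate such as the FSM output.

% Direct time-domain averaging of the second-order lag product for one
% non-conjugate cycle frequency (illustrative sketch; the circular shift is a
% small approximation at the block edges).
N = length(x); t = 0:N-1;
alpha = 0.1;                          % bit-rate cycle frequency (normalized)
max_lag = 50;
caf = zeros(1, 2*max_lag + 1);
for tau = -max_lag:max_lag
    x_shift = circshift(x, [0 -tau]); % approximates x(t + tau)
    caf(tau + max_lag + 1) = mean(x_shift .* conj(x) .* exp(-1i*2*pi*alpha*t));
end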

Continue reading “Second-Order Estimator Verification Guide”

Cyclic Temporal Cumulants

Cyclic cumulants are the amplitudes of the Fourier-series components of the time-varying cumulant function for a cyclostationary signal. They degenerate to conventional cumulants when the signal is stationary.

In this post I continue the development of the theory of higher-order cyclostationarity (My Papers [5,6]) that I began here. It is largely taken from my doctoral work (download my dissertation here).

This is a long post. To make it worthwhile, I’ve placed some movies of cyclic-cumulant estimates at the end. Or just skip to the end now if you’re impatient!

In my work on cyclostationary signal processing (CSP), the most useful tools are those for estimating second-order statistics, such as the cyclic autocorrelation, spectral correlation function, and spectral coherence function. However, as we discussed in the post on Textbook Signals, there are some situations (perhaps only academic; see my question in the Textbook post) for which higher-order cyclostationarity is required. In particular, a probabilistic approach to blind modulation recognition for ideal (textbook) digital QAM, PSK, and CPM requires higher-order cyclostationarity because such signals have similar or identical spectral correlation functions and PSDs. (Other high-SNR non-probabilistic approaches can still work, such as blind constellation extraction.)

Recall that in the post introducing higher-order cyclostationarity, I mentioned that one encounters a bit of a puzzle when attempting to generalize experience with second-order cyclostationarity to higher orders. This is the puzzle of pure sine waves (My Papers [5]). Let’s look at pure and impure sine waves, and see how they lead to the probabilistic parameters widely known as cyclic cumulants.

Continue reading “Cyclic Temporal Cumulants”

A Gallery of Spectral Correlation

Pictures are worth N words, and M equations, where N and M are large integers.

In this post I provide plots of the spectral correlation for a variety of simulated textbook signals and several captured communication signals. The plots show the variety of cycle-frequency patterns that arise from the disparate approaches to digital communication signaling. The distinguishability of these patterns, combined with the inability to distinguish based on the power spectrum, leads to a powerful set of classification (modulation recognition) features (My Papers [16, 25, 26, 28]).

In all cases, the cycle frequencies are blindly estimated by the strip spectral correlation analyzer (The Literature [R3, R4]) and the estimates used by the FSM to compute the spectral correlation function. MATLAB is then used to plot the magnitude of the spectral correlation and conjugate spectral correlation, as specified by the determined non-conjugate and conjugate cycle frequencies.

There are three categories of signal types in this gallery: textbook signals, captured signals, and feature-rich signals. The latter comprises some captured signals (e.g., LTE) and some simulated radar signals. For the first two signal categories, the three-dimensional surface plots I’ve been using will suffice for illustrating the cycle-frequency patterns and the behavior of the spectral correlation function over frequency. But for the last category, the number of cycle frequencies is so large that the three-dimensional surface is difficult to interpret–it is a visual mess. For these signals, I’ll plot the maximum spectral correlation magnitude over spectral frequency f versus the detected cycle frequency \alpha (as in this post).

A complementary gallery of cyclic autocorrelation functions can be found here.

Continue reading “A Gallery of Spectral Correlation”

Textbook Signals

Yes, the CSP Blog uses the simplest idealized cyclostationary digital signal–rectangular-pulse BPSK–to connect all the different aspects of CSP. But don’t mistake these ‘textbook’ signals for the real world.

What good is having a blog if you can’t offer a rant every once in a while? In this post I talk about what I call textbook signals, which are mathematical models of communication signals that are used by many researchers in statistical signal processing for communications.

We’ve already encountered, and used frequently, the most common textbook signal of all: rectangular-pulse BPSK with independent and identically distributed (IID) bits. We’ve been using this signal to illustrate the cyclostationary signal processing concepts and estimators as they have been introduced. It’s a good choice from the point of view of consistency across all the posts, and it is easy to generate and understand. However, it is not a good choice from the perspective of realism: it is rare to encounter a textbook BPSK signal in the practice of signal processing for communications.

I use the term textbook because the textbook signals can be found in standard textbooks, such as Proakis (The Literature [R44]). Textbook signals stand in opposition to signals used in the world, such as OFDM in LTE, slotted GMSK in GSM, 8PAM VSB with synchronization bits in ATSC-DTV, etc.

Typical communication signals combine a textbook signal with an access mechanism to yield the final physical-layer signal–the signal that is actually transmitted (My Papers [11], [16]). What is important for us, here at the CSP Blog, is that this combination usually results in a signal with radically different cyclostationarity than the textbook component. So it is not enough to understand textbook signals’ cyclostationarity. We must also understand the cyclostationarity of the real-world signal, which may be sufficiently complex to render mathematical modeling and analysis impossible (at least for me). (See also some relevant examples of real-world signals here and here.)

Continue reading “Textbook Signals”

SCF Estimate Quality: The Resolution Product

What factors influence the quality of a spectral correlation function estimate?

The two non-parametric spectral-correlation estimators we’ve looked at so far–the frequency-smoothing and time-smoothing methods–require the choice of key estimator parameters. These are the total duration of the processed data block, T, and the spectral resolution, F.

For the frequency-smoothing method (FSM), an FFT with length equal to the data-block length T is required, and the spectral resolution is equal to the width F of the smoothing function g(f). For the time-smoothing method (TSM), multiple FFTs with lengths T_{tsm} = T / K are required, and the frequency resolution is 1/T_{tsm} (in normalized frequency units).

The choice for the block length T is partially guided by practical concerns, such as computational cost and whether the signal is persistent or transient in nature, and partially by the desire to obtain a reliable (low-variance) spectral correlation estimate. The choice for the frequency (spectral) resolution is typically guided by the desire for a reliable estimate.
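
As a rough rule of thumb, reliable estimates require a large time-frequency resolution product; a quick back-of-the-envelope check (with assumed, illustrative parameter values) looks like this:

% Resolution-product check for the two estimators (assumed parameter values).
T = 65536;            % processed block length (samples)
F_fsm = 0.02;         % FSM smoothing-window width (normalized Hz)
K = 32;               % TSM: number of averaged sub-blocks (illustrative)
F_tsm = K / T;        % TSM spectral resolution is 1/T_tsm = K/T
fprintf('FSM resolution product T*F = %g\n', T * F_fsm);   % 1310.72
fprintf('TSM resolution product T*F = %g\n', T * F_tsm);   % 32
% Reliable (low-variance) estimates generally require T*F >> 1.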

Continue reading “SCF Estimate Quality: The Resolution Product”

The Spectral Coherence Function

Cross correlation functions can be normalized to create correlation coefficients. The spectral correlation function is a cross correlation and its correlation coefficient is called the coherence.

In this post I introduce the spectral coherence function, or just coherence. It deserves its own post because the coherence is a useful detection statistic for blindly determining significant cycle frequencies of arbitrary data records. See the posts on the strip spectral correlation analyzer and the FFT accumulation method for examples.

Let’s start with reviewing the standard correlation coefficient \rho defined for two random variables X and Y as

\rho = \displaystyle \frac{E[(X - m_X)(Y - m_Y)]}{\sigma_X \sigma_Y}, \hfill (1)

where m_X and m_Y are the mean values of X and Y, and \sigma_X and \sigma_Y are the standard deviations of X and Y. That is,

m_X = E[X] \hfill (2)

m_Y = E[Y] \hfill (3)

\sigma_X^2 = E[(X-m_X)^2] \hfill (4)

\sigma_Y^2 = E[(Y-m_Y)^2] \hfill (5)

So the correlation coefficient is the covariance between X and Y divided by the geometric mean of the variances of X and Y.
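
As a quick numerical illustration of (1) (my own snippet, with made-up values), the following estimates \rho from samples and should return a value near 0.7:

N = 100000;
X = randn(1, N);
Y = 0.7*X + sqrt(1 - 0.7^2)*randn(1, N);   % Y built so that rho is about 0.7
rho_hat = mean((X - mean(X)).*(Y - mean(Y))) / (std(X)*std(Y));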

Continue reading “The Spectral Coherence Function”

CSP Estimators: The Time Smoothing Method

The non-blind spectral-correlation estimator called the TSM is favored when one wishes to avoid long FFTs.

In a previous post, we introduced the frequency-smoothing method (FSM) of spectral correlation function (SCF) estimation. The FSM convolves a pulse-like smoothing window g(f) with the cyclic periodogram to form an estimate of the SCF. An advantage of the method is that it allows fine control over the spectral resolution of the SCF estimate through the choice of g(f), but the drawbacks are that it requires a Fourier transform as long as the data record undergoing processing, and the convolution can be expensive. However, the expense of the convolution can be mitigated by using a rectangular g(f).

In this post, we introduce the time-smoothing method (TSM) of SCF estimation. Instead of smoothing a single full-length cyclic periodogram over spectral frequency, the TSM averages multiple shorter cyclic periodograms over time. When the non-conjugate cycle frequency of zero is used, this method produces an estimate of the power spectral density, and is essentially the Bartlett spectrum-estimation method. The TSM can be found in My Papers [6] (Eq. (54)), and other places in the literature.
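
Here is a minimal TSM sketch for a single non-conjugate cycle frequency (an illustrative sketch with assumed parameter values, not the implementation referenced in the post); the half-cycle-frequency shifts are applied in the time domain using absolute time, so the per-block phase factors are handled automatically:

% TSM sketch: average short-block cyclic periodograms over time.
% Assumes x is a complex row vector; alpha and T_tsm are illustrative choices.
alpha = 0.1; T_tsm = 1024;
N = floor(length(x)/T_tsm) * T_tsm;
x = x(1:N); t = 0:N-1;
u = x .* exp(-1i*pi*alpha*t);             % shift spectrum down by alpha/2
v = x .* exp(+1i*pi*alpha*t);             % shift spectrum up by alpha/2
U = fft(reshape(u, T_tsm, []));           % one FFT per column (per sub-block)
V = fft(reshape(v, T_tsm, []));
scf_tsm = fftshift(mean(U .* conj(V), 2)) / T_tsm;   % time-averaged cyclic periodogram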

Continue reading “CSP Estimators: The Time Smoothing Method”

CSP Estimators: The Frequency-Smoothing Method

The non-blind spectral-correlation estimator called the FSM is favored when one wishes to have fine control over frequency resolution and can tolerate long FFTs.

In this post I describe a basic estimator for the spectral correlation function (SCF): the frequency-smoothing method (FSM). The FSM is a way to estimate the SCF for a single value of cycle frequency. Recall from the basic theory of the cyclic autocorrelation and SCF that the SCF is obtained by infinite-time averaging of the cyclic periodogram or by infinitesimal-resolution frequency averaging of the cyclic periodogram. The FSM is merely a finite-time/finite-resolution approximation to the SCF definition.

One place the FSM can be found is in (My Papers [6]), where I introduce time-smoothed and frequency-smoothed higher-order cyclic periodograms as estimators of the cyclic polyspectrum. When the cyclic polyspectrum order is set to n = 2, the cyclic polyspectrum becomes the spectral correlation function, so the FSM discussed in this post is just a special case of the more general estimator in [6, Section VI.B].
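
For readers who want to try it immediately, here is a minimal FSM sketch for a single non-conjugate cycle frequency (illustrative code under the assumptions noted in the comments, not the exact code behind the blog's estimates):

% FSM sketch: smooth the full-length cyclic periodogram over frequency.
% Assumes x is a complex row vector and that alpha/2 falls close to an FFT bin;
% in practice the data can be zero-padded so that it does (see the estimator
% verification guide).
alpha = 0.1;
T = length(x);
X = fftshift(fft(x));                          % full-length FFT, centered at f = 0
shift = round(alpha*T/2);                      % alpha/2 expressed in FFT bins
cp = circshift(X, [0 -shift]) .* conj(circshift(X, [0 shift])) / T;  % cyclic periodogram
g = ones(1, round(0.02*T)); g = g / sum(g);    % rectangular smoothing window, width 0.02
scf_fsm = conv(cp, g, 'same');                 % frequency smoothing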

Continue reading “CSP Estimators: The Frequency-Smoothing Method”

Signal Selectivity

We can estimate the spectral correlation function of one signal in the presence of another with complete temporal and spectral overlap, provided the signal of interest has a unique cycle frequency.

In this post I describe and illustrate the most important property of cyclostationary statistics: signal selectivity. The idea is that the cyclostationary parameters for a single signal can be estimated for that signal even when it is corrupted by strong noise and cochannel interferers. ‘Cochannel’ means that the interferer occupies a frequency band that partially or completely overlaps the frequency band for the signal of interest.

A mixture of received RF signals, whether cochannel or not, is accurately modeled by the simple sum of the signals, as in

x(t) = s_1(t) + s_2(t) + \ldots + s_K(t) + w(t), \hfill (1)

where w(t) is additive noise. We can write this more compactly as

x(t) = \displaystyle \sum_{k=1}^K s_k(t) + w(t). \hfill (2)
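
For simulation purposes, the mixture in (2) is easy to construct; here is an illustrative fragment that assumes two cochannel signals s1 and s2 (for example, BPSK signals with different bit rates and carrier offsets) have already been generated:

N = length(s1);
w = sqrt(0.1/2) * (randn(1, N) + 1i*randn(1, N));   % complex AWGN with power 0.1
x = s1 + s2 + w;                                     % received cochannel mixture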

Continue reading “Signal Selectivity”

Introduction to Higher-Order Cyclostationarity

Why do we need or care about higher-order cyclostationarity? Because second-order cyclostationarity is insufficient for our signal-processing needs in some important cases.

We’ve seen how to define second-order cyclostationarity in the time and frequency domains, and we’ve looked at ideal and estimated spectral correlation functions for a synthetic rectangular-pulse BPSK signal. In future posts, we’ll look at how to create simple spectral correlation estimators, but in this post I want to introduce the topic of higher-order cyclostationarity (HOCS). This post is more conceptual in nature; for mathematical details about HOCS, see the posts on cyclic cumulants and cyclic polyspectra. Estimators of higher-order parameters, such as cyclic cumulants and cyclic moments, are discussed in this post.

To contrast with HOCS, we’ll refer to second-order parameters such as the cyclic autocorrelation and the spectral correlation function as parameters of second-order cyclostationarity (SOCS).

The first question we might ask is Why do we care about HOCS? And one answer is that SOCS does not provide all the statistical information about a signal that we might need to perform some signal-processing task. There are two main limitations of SOCS that drive us to HOCS.

Continue reading “Introduction to Higher-Order Cyclostationarity”

The Spectral Correlation Function for Rectangular-Pulse BPSK

Let’s make the spectral correlation function a little less abstract by showing it for a simple textbook BPSK signal.

In this post, I show the non-conjugate and conjugate spectral correlation functions (SCFs) for the rectangular-pulse BPSK signal we generated in a previous post. The theoretical SCF can be analytically determined for a rectangular-pulse BPSK signal with independent and identically distributed bits (see My Papers [6] for example or The Literature [R1]). The cycle frequencies are, of course, equal to those for the CAF for rectangular-pulse BPSK. In particular, for the non-conjugate SCF, we have cycle frequencies of \alpha = k f_{bit} for all integers k, and for the conjugate SCF we have \alpha = 2f_c \pm k f_{bit}.

Continue reading “The Spectral Correlation Function for Rectangular-Pulse BPSK”

The Spectral Correlation Function

Spectral correlation in CSP means that distinct narrowband spectral components of a signal are correlated: they contain either identical information or some degree of redundant information.

Spectral correlation is perhaps the most widely used characterization of the cyclostationarity property. The main reason is that the computational efficiency of the FFT can be harnessed to characterize the cyclostationarity of a given signal or data set in an efficient manner. And not just efficient, but with a reasonable total computational cost, so that one doesn’t have to wait too long for the result.

Just as the normal power spectrum is actually the power spectral density, or more accurately, the spectral density of time-averaged power (or simply the variance when the mean is zero), the spectral correlation function is the spectral density of time-averaged correlation (covariance). What does this mean? Consider the following schematic showing two narrowband spectral components of an arbitrary signal:

scf_schematic
Figure 1. Illustration of the concept of spectral correlation. The time series represented by the narrowband spectral components centered at f-A/2 and f+A/2 are downconverted to zero frequency and their correlation is measured. When A=0, the result is the power spectral density function; otherwise it is referred to as the spectral correlation function. It is non-zero only for a countable set of numbers \{A\}, which are equal to the frequencies of sine waves that can be generated by quadratically transforming the data.

Let’s define narrowband spectral component to mean the output of a bandpass filter applied to a signal, where the bandwidth of the filter is much smaller than the bandwidth of the signal.

The sequence of shaded rectangles on the left are meant to imply a time series corresponding to the output of a bandpass filter centered at f-A/2 with bandwidth B. Similarly, the sequence of shaded rectangles on the right imply a time series corresponding to the output of a bandpass filter centered at f+A/2 with bandwidth B.
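
The measurement implied by Figure 1 can be sketched in a few lines (illustrative code with assumed parameter values; fir1 requires the Signal Processing Toolbox):

% Extract two narrowband components of x centered at f0-A/2 and f0+A/2,
% downconvert each to zero frequency, and measure their time-averaged correlation.
f0 = 0.05; A = 0.1; B = 0.02;                 % center frequency, separation, bandwidth
N = length(x); t = 0:N-1;
h = fir1(256, B);                             % lowpass with two-sided bandwidth of about B
y_lo = filter(h, 1, x .* exp(-1i*2*pi*(f0 - A/2)*t));   % component at f0 - A/2
y_hi = filter(h, 1, x .* exp(-1i*2*pi*(f0 + A/2)*t));   % component at f0 + A/2
corr_est = mean(y_hi .* conj(y_lo));          % significant only when A is a cycle frequency of x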

Continue reading “The Spectral Correlation Function”

The Cyclic Autocorrelation for Rectangular-Pulse BPSK

Let’s look at a specific example of the cyclic autocorrelation function: the textbook rectangular-pulse BPSK signal with IID symbols.

The cyclic autocorrelation function (CAF) for rectangular-pulse BPSK can be derived as a relatively simple closed-form expression (see My Papers [6] for example or The Literature [R1]). It can be estimated in a variety of ways, which we will discuss in future posts. The non-conjugate cycle frequencies for the signal are harmonics of the bit rate, k f_{bit}, and the conjugate cycle frequencies are the non-conjugate cycle frequencies offset by the doubled carrier, or 2f_c + k f_{bit}.

Recall that our simulated rectangular-pulse BPSK signal has 10 samples per bit, or a bit rate of 0.1, and a carrier offset of 0.05, all in normalized units (meaning the sampling rate is unity). We’ve previously selected a sampling rate of 1.0 MHz to provide a little physical realism; let’s do that here too. This choice means the bit rate is 100 kHz and the carrier offset frequency is 50 kHz. From these numbers, we see that the non-conjugate cycle frequencies are k(100) kHz, and that the conjugate cycle frequencies are 2(50) + k(100) kHz, or (100 + 100k) kHz.
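
To make the bookkeeping concrete, here is an illustrative listing of a few of these cycle frequencies in kHz:

fbit = 100; fc = 50;             % bit rate and carrier offset in kHz
k = -3:3;                        % a few harmonic numbers
nonconj_cfs = k * fbit           % non-conjugate cycle frequencies: k(100) kHz
conj_cfs = 2*fc + k * fbit       % conjugate cycle frequencies: (100 + 100k) kHz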

Continue reading “The Cyclic Autocorrelation for Rectangular-Pulse BPSK”

The Cyclic Autocorrelation Function

The cyclic autocorrelation function is the amplitude of a Fourier-series component of the time-varying autocorrelation for a cyclostationary signal.

In this post, I introduce the cyclic autocorrelation function (CAF). The easiest way to do this is to first review the conventional autocorrelation function. Suppose we have a complex-valued signal x(t) defined on a suitable probability space. Then the mean value of x(t) is given by

M_x(t, \tau) = E[x(t + \tau)]. \hfill (1)

For stationary signals, and many cyclostationary signals, this mean value is independent of the lag parameter \tau, so that

\displaystyle M_x(t, \tau_1) = M_x(t, \tau_2) = M_x(t, 0) = M_x. \hfill (2)

The autocorrelation function is the correlation between the random variables corresponding to two time instants of the random signal, or

\displaystyle R_x(t_1, t_2) = E[x(t_1)x^*(t_2)]. \hfill (3)
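
For stationary (or suitably ergodic) signals, the expectation in (3) can be approximated by a simple time average; here is a one-line illustration (x is assumed to be a row vector of samples):

tau = 5;                                             % lag in samples
R_hat = mean(x(1+tau:end) .* conj(x(1:end-tau)));    % estimate of E[x(t+tau) x*(t)]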

Continue reading “The Cyclic Autocorrelation Function”

Creating a Simple CS Signal: Rectangular-Pulse BPSK

We’ll use this simple textbook signal throughout the CSP Blog to illustrate and tie together all the different aspects of CSP.

To test the correctness of various CSP estimators, we need a sampled signal with known cyclostationary parameters. Additionally, the signal should be easy to create and understand. A good candidate for this kind of signal is the binary phase-shift keyed (BPSK) signal with rectangular pulse function.

PSK signals with rectangular pulse functions have infinite bandwidth because the signal bandwidth is determined by the Fourier transform of the pulse, which is a sinc() function for the rectangular pulse. So the rectangular pulse is not terribly practical–infinite bandwidth is bad for other users of the spectrum. However, it is easy to generate, and its statistical properties are known.

So let’s jump in. The baseband BPSK signal is simply a sequence of binary (\pm 1) symbols convolved with the rectangular pulse. The MATLAB script make_rect_bpsk.m does this and produces the following plot:

rect_bpsk_time_domain
Figure 1. Time-domain plot of a baseband (not yet modulated by a carrier) rectangular-pulse BPSK signal with bit rate 1/10.

The signal alternates between amplitudes of +1 and -1 randomly. After frequency shifting and adding white Gaussian noise, we obtain the power spectrum estimate:

rect_bpsk_psd
Figure 2. Power spectrum estimate for a simulated rectangular-pulse BPSK signal in noise. The signal power is unity, or 0 dB, and the noise power is 1/10, or -10 dB. The bit rate is 1/10 and the carrier offset frequency is 0.05. Note that the nulls (minima) of the signal spectrum are at 0.05 \pm k/10, or harmonics of the bit rate offset by the carrier.

The power spectrum plot shows why the rectangular-pulse BPSK signal is not popular in practice. The range of frequencies for which the signal possesses non-zero average power is infinite, so it will interfere with signals “nearby” in frequency. However, it is a good signal for us to use as a test input in all of our CSP algorithms and estimators.

The MATLAB script that creates the BPSK signal and the plots above is here. It is an m-file but I’ve stored it in a .doc file due to WordPress limitations I can’t yet get around.
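
If the download gives you trouble, here is a minimal sketch of how such a signal can be created (my own illustrative code; the parameter names are not necessarily those used in make_rect_bpsk.m):

T0 = 10;                                    % samples per bit (bit rate 0.1)
fc = 0.05;                                  % carrier offset (normalized)
num_bits = 4096;
bits = rand(1, num_bits) > 0.5;             % IID bits
symbols = 2*bits - 1;                       % map {0,1} to {-1,+1}
s_bb = kron(symbols, ones(1, T0));          % rectangular pulse shaping (baseband)
n = 0:length(s_bb)-1;
s = s_bb .* exp(1i*2*pi*fc*n);              % apply the carrier offset
x = s + sqrt(0.1/2)*(randn(size(s)) + 1i*randn(size(s)));   % add noise (power -10 dB)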

Welcome to the CSP Blog!

image000003_crop

Thank you for visiting the CSP blog.

The purpose of this blog is to talk about cyclostationary signals and cyclostationary signal processing (CSP). I’ve been working in the area for nearly thirty years, and over that time I’ve received a lot of requests for help with CSP code and algorithms. I thought it was time to put some of the basics out on the web so everybody could benefit. And I’m hoping to learn from you too.


In future posts, I’ll be showing how to create simple cyclostationary signals, write code for basic CSP estimators and detectors, and discuss papers in the literature.

What is cyclostationarity? It is a property of a class of mathematical models for a large number of signals in the world, most notably man-made modulated radio-frequency signals, like those used by cell phones, broadcast AM/FM/TV, satellites, WiFi modems, and many more systems. The mathematical models can be quite accurate, so we also say that cyclostationarity is a property of the real-world signals themselves.

The key aspect of the model is that cyclostationary signals have probabilistic parameters that vary periodically with time. Traditionally, signals are treated as stationary, which means their parameters do not vary with time. What are these ‘probabilistic parameters?’ Quantities like the mean value, the variance, and higher-order moments. These quantities are defined for both the time-domain signal and for its frequency-domain representation. So we have ‘temporal moments‘ and ‘spectral moments.’ The second-order spectral moment is also called the spectral correlation function (SCF). The SCF is central to many CSP algorithms; a display of the SCF is shown above for a simple bandlimited binary phase-shift keyed (BPSK) signal.

The well-known noise- and interference-tolerance properties of CSP algorithms follow from the periodically time-varying nature of the signals’ parameters.

The most common difficulty I’ve encountered is when a researcher is developing a CSP estimator and is having trouble applying it to their data set. The researchers almost always skip the step of first applying the estimator to a signal with perfectly known cyclostationary parameters. So, in the next post I’ll describe how to make the simplest digital CS signal, which has known temporal and spectral moments of all orders, so that we can test CSP estimators by comparing their output to the known correct result.

I encourage readers to point out my errors in the comments of my posts and to suggest topics they would like to see covered in future posts. Also, let me know about your application and interests so I can continue to learn too.

I hope you enjoy your time here at the CSP Blog!

Support the CSP Blog and Keep it Ad-Free

Please consider donating to the CSP Blog to keep it ad-free and to support the addition of major new features. The small box below is used to specify the number of $5 donations.
