More on Pure and Impure Sine Waves

Gaussian and binary signals are in some sense at opposite ends of the pure-impure sine-wave spectrum.

Remember when we derived the cumulant as the solution to the pure nth-order sine-wave problem? It sounded good at the time, I hope. But here I describe a curious special case where the interpretation of the cumulant as the pure component of a nonlinearly generated sine wave seems to break down.

Continue reading “More on Pure and Impure Sine Waves”

Cyclostationarity of Direct-Sequence Spread-Spectrum Signals

Spread-spectrum signals are used to enable shared-bandwidth communication systems (CDMA), precision position estimation (GPS), and secure wireless data transmission.

In this post we look at direct-sequence spread-spectrum (DSSS) signals, which can be usefully modeled as a kind of PSK signal. DSSS signals are used in a variety of real-world situations, including the familiar CDMA and WCDMA signals, covert signaling, and GPS. My colleague Antonio Napolitano has done some work on a large class of DSSS signals (The Literature [R11, R17, R95]), resulting in formulas for their spectral correlation functions, and I’ve made some remarks about their cyclostationary properties myself here and there (My Papers [16]).

From the point of view of modulation recognition, a good thing about DSSS signals is that they are easily distinguished from other PSK and QAM signals by their spectral correlation functions. Whereas most PSK/QAM signals have only a single non-conjugate cycle frequency, and no conjugate cycle frequencies, DSSS signals have many non-conjugate cycle frequencies and in some cases also have many conjugate cycle frequencies.
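To make the "DSSS as a kind of PSK signal" modeling concrete, here is a minimal baseband sketch (Python/NumPy; the spreading factor, code, and bit count are arbitrary illustrative choices of mine, not values taken from the cited papers). Each bit is multiplied by a short repeating chip sequence, so the chip waveform plays the role of the PSK pulse function; for this short-code model, the code-repetition (bit) rate and its harmonics supply the many non-conjugate cycle frequencies mentioned above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative short-code DSSS-BPSK baseband generator: each data bit is
# spread by the same pseudorandom +/-1 chip sequence, so the signal is a
# PSK/PAM-type signal whose 'pulse' is the chip waveform.
N_bits = 1000
spreading_factor = 31                                  # chips per bit
code = rng.choice([-1.0, 1.0], size=spreading_factor)  # spreading code
bits = rng.choice([-1.0, 1.0], size=N_bits)            # data symbols

# Spread: one row per bit (bit value times the code), then flatten in time
s = (bits[:, None] * code[None, :]).ravel()
```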

Continue reading “Cyclostationarity of Direct-Sequence Spread-Spectrum Signals”

Machine Learning and Modulation Recognition: Comments on “Convolutional Radio Modulation Recognition Networks” by T. O’Shea, J. Corgan, and T. Clancy

Update October 2020:

Since I wrote the paper review in this post, I’ve analyzed three of O’Shea’s data sets (O’Shea is with the company DeepSig, so I’ve been referring to the data sets as DeepSig’s in other posts): All BPSK Signals, More on DeepSig’s Data Sets, and DeepSig’s 2018 Data Set. The data set relating to this paper is analyzed in All BPSK Signals. Preview: It is heavily flawed.

Continue reading “Machine Learning and Modulation Recognition: Comments on “Convolutional Radio Modulation Recognition Networks” by T. O’Shea, J. Corgan, and T. Clancy”

Modulation Recognition Using Cyclic Cumulants, Part I: Problem Description and Variants

Modulation recognition is the process of assigning one or more modulation-class labels to a provided time-series data sequence.

In this post, we start a discussion of what I consider the ultimate application of the theory of cyclostationary signals: Automatic Modulation Recognition. My relevant papers are My Papers [16,17,25,26,28,30,32,33,38,43,44]. See also my machine-learning modulation-recognition critiques by clicking on Machine Learning in the CSP Blog Categories on the right side of any post or page.

Continue reading “Modulation Recognition Using Cyclic Cumulants, Part I: Problem Description and Variants”

Cyclic Polyspectra

Higher-order statistics in the frequency domain for cyclostationary signals. As complicated as it gets at the CSP Blog.

In this post we take a first look at the spectral parameters of higher-order cyclostationarity (HOCS). In previous posts, I have introduced the topic of HOCS and have looked at the temporal parameters, such as cyclic cumulants and cyclic moments. Those temporal parameters have proven useful in modulation classification and parameter estimation settings, and will likely be an important part of my ultimate radio-frequency scene analyzer.

The spectral parameters of HOCS have not proven to be as useful as the temporal parameters unless you include the trivial case where the moment/cumulant order is equal to two. In that case, the spectral parameters reduce to the spectral correlation function, which is extremely useful in CSP (see the TDOA and signal-detection posts for examples).

Continue reading “Cyclic Polyspectra”

Comments on “Cyclostationary Correntropy: Definition and Application” by Fontes et al

Update: See also some other reviews/take-downs of cyclic correntropy on the CSP Blog here and here.


I recently came across a published paper with the title Cyclostationary Correntropy: Definition and Application, by Aluisio Fontes et al. It is published in a journal called Expert Systems with Applications (Elsevier). Actually, it wasn’t the first time I’d seen this work by these authors. I had reviewed a similar paper in 2015 for a different journal.

I was surprised to see the paper published because I had a lot of criticisms of the original paper, and the other reviewers evidently agreed, since that paper was rejected. So I did my job, as did the other reviewers, and we tried to keep a flawed paper from entering the literature, where it would stay forever, causing problems for readers.

The editor(s) of the journal Expert Systems with Applications did not ask me to review the paper, so I couldn't give them the benefit of the work I had already put into the manuscript, and apparently the editor(s) did not themselves see sufficient flaws in the paper to merit rejection.

It stings, of course, when you submit a paper that you think is good, and it is rejected. But it also stings when a paper you’ve carefully reviewed, and rejected, is published anyway.

Fortunately I have the CSP Blog, so I’m going on another rant. After all, I already did this the conventional rant-free way.

Continue reading “Comments on “Cyclostationary Correntropy: Definition and Application” by Fontes et al”

100-MHz Amplitude Modulation? Comments on “Sub-Nyquist Cyclostationary Detection for Cognitive Radio” by Cohen and Eldar

I came across a paper by Cohen and Eldar, researchers at the Technion in Israel. You can get the paper on the Arxiv site here. The title is “Sub-Nyquist Cyclostationary Detection for Cognitive Radio,” and the setting is spectrum sensing for cognitive radio. I have a question about the paper that I’ll ask below.

Continue reading “100-MHz Amplitude Modulation? Comments on “Sub-Nyquist Cyclostationary Detection for Cognitive Radio” by Cohen and Eldar”

Cyclostationarity of Digital QAM and PSK

PSK and QAM signals form the building blocks for a large number of practical real-world signals. Understanding their probability structure is crucial to understanding those more complicated signals.

Let’s look into the statistical properties of a class of textbook signals that encompasses digital quadrature amplitude modulation (QAM), phase-shift keying (PSK), and pulse-amplitude modulation (PAM). I’ll call the class simply digital QAM (DQAM), and all of its members have an analytical-signal mathematical representation of the form

\displaystyle s(t) = \sum_{k=-\infty}^\infty a_k p(t - kT_0 - t_0) e^{i2\pi f_0 t + i \phi_0}. \hfill  (1)

In this model, k is the symbol index, 1/T_0 = f_{sym} is the symbol rate, f_0 is the carrier frequency (sometimes called the carrier frequency offset), t_0 is the symbol-clock phase, and \phi_0 is the carrier phase. The finite-energy function p(t) is the pulse function (sometimes called the pulse-shaping function). Finally, the random variable a_k is called the symbol, and has a discrete distribution that is called the constellation.

Model (1) is a textbook signal when the sequence of symbols is independent and identically distributed (IID). This condition rules out real-world communication aids such as periodically transmitted bursts of known symbols, adaptive modulation (where the constellation may change in response to the vagaries of the propagation channel), some forms of coding, etc. Also, when the pulse function p(t) is a rectangle (with width T_0), the signal is even less realistic, and therefore more textbooky.
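As a concrete illustration of model (1), here is a minimal Python/NumPy sketch that generates a rectangular-pulse QPSK instance of s(t) with IID symbols. All parameter values (T_0, f_0, and so on) are arbitrary illustrative choices, and the variable names are mine, not anything prescribed by the post.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters, in normalized (samples and cycles/sample) units
N_sym = 1000      # number of symbols
T0    = 10        # samples per symbol, so 1/T0 is the symbol rate
f0    = 0.05      # carrier frequency offset
phi0  = 0.0       # carrier phase
t0    = 0         # symbol-clock phase, in samples

# IID QPSK symbols a_k drawn from a four-point constellation (textbook condition)
constellation = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))
a = rng.choice(constellation, size=N_sym)

# Rectangular pulse p(t) of width T0 (the 'more textbooky' choice)
p = np.ones(T0)

# Pulse train: sum over k of a_k p(t - k T0 - t0)
s = np.zeros(N_sym * T0, dtype=complex)
for k in range(N_sym):
    s[k * T0 + t0 : k * T0 + t0 + T0] += a[k] * p

# Carrier: multiply by e^{i 2 pi f0 t + i phi0}
t = np.arange(len(s))
s *= np.exp(1j * (2 * np.pi * f0 * t + phi0))
```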

We will look at the moments and cumulants of this general model in this post. Although the model is textbook, we could use it as a building block to form more realistic, less textbooky, signal models. Then we could find the cyclostationarity of those models by applying signal-processing transformation rules that define how the cumulants of the output of a signal processor relate to those for the input.

Continue reading “Cyclostationarity of Digital QAM and PSK”

Signal Processing Operations and CSP

How does the cyclostationarity of a signal change when it is subjected to common signal-processing operations like addition, multiplication, and convolution?

It is often useful to know how a signal processing operation affects the probabilistic parameters of a random signal. For example, if I know the power spectral density (PSD) of some signal x(t), and I filter it using a linear time-invariant transformation with impulse response function h(t), producing the output y(t), then what is the PSD of y(t)? This input-output relationship is well known and quite useful. The relationship is

\displaystyle S_y^0(f) = \left| H(f) \right|^2 S_x^0(f). \hfill (1)

In (1), the function H(f) is the transfer function of the filter, which is the Fourier transform of the impulse-response function h(t).

Because the mathematical models of real-world communication signals can be constructed by subjecting idealized textbook signals to various signal-processing operations, such as filtering, it is of interest to us here at the CSP Blog to know how the spectral correlation function of the output of a signal processor is related to the spectral correlation function for the input. Similarly, we’d like to know such input-output relationships for the cyclic cumulants and the cyclic polyspectra.

Another benefit of knowing these CSP input-output relationships is that they tend to build insight into the meaning of the probabilistic parameters. For example, in the PSD input-output relationship (1), we already know that the transfer function at f = f_0 scales the input frequency component at f_0 by the complex number H(f_0). So it makes sense that the PSD at f_0 is scaled by the squared magnitude of H(f_0). If the filter transfer function is zero at f_0, then the density of averaged power at f_0 should vanish too.
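Relation (1) is easy to check numerically. The sketch below (Python/NumPy; the FIR filter and block length are arbitrary choices of mine) filters white noise, forms crude averaged-periodogram PSD estimates, and compares the output PSD with |H(f)|^2 times the input PSD.

```python
import numpy as np

rng = np.random.default_rng(0)

# White Gaussian input x(t) and a simple FIR filter h(t) (illustrative choices)
N = 2**18
x = rng.standard_normal(N)
h = np.array([1.0, 0.5, 0.25, 0.125])   # impulse response h(t)
y = np.convolve(x, h, mode='same')      # filter output y(t)

def psd(z, nfft=1024):
    """Crude averaged-periodogram PSD estimate (no windowing, for brevity)."""
    blocks = z[: len(z) // nfft * nfft].reshape(-1, nfft)
    Z = np.fft.fft(blocks, axis=1)
    return np.mean(np.abs(Z) ** 2, axis=0) / nfft

Sx, Sy = psd(x), psd(y)
H = np.fft.fft(h, 1024)                 # transfer function on the same frequency grid

# Relation (1): S_y(f) should match |H(f)|^2 S_x(f) up to estimation error
print(np.max(np.abs(Sy - np.abs(H) ** 2 * Sx) / Sy))   # small relative deviation
```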

So, let’s look at this kind of relationship for CSP parameters. All of these results can be found, usually with more mathematical detail, in My Papers [6, 13].

Continue reading “Signal Processing Operations and CSP”

The Cycle Detectors

CSP shines when the problem involves strong noise or cochannel interference. Here we look at CSP-based signal-presence detection as a function of SNR and SIR.

Let’s take a look at a class of signal-presence detectors that exploit cyclostationarity and in doing so illustrate the good things that can happen with CSP whenever cochannel interference is present, or noise models deviate from simple additive white Gaussian noise (AWGN). I’m referring to the cycle detectors, the first CSP algorithms I ever studied (My Papers [1,4]).

Continue reading “The Cycle Detectors”

Radio-Frequency Scene Analysis

Modulation recognition is one thing, holistic radio-frequency scene analysis is quite another.

Update October 2023: RFSA is a Wicked Problem.

So why do I obsess over cyclostationary signals and cyclostationary signal processing? What’s the big deal, in the end? In this post I discuss my view of the ultimate use of cyclostationary signal processing (CSP): Radio-Frequency Scene Analysis (RFSA). Eventually, I hope to create a kind of Star Trek Tricorder for RFSA.

Continue reading “Radio-Frequency Scene Analysis”

CSP-Based Time-Difference-of-Arrival Estimation

Time-delay estimation can be used to determine the angle of arrival of a signal impinging on two spatially separated sensors. This estimation problem gets hard when there is cochannel interference present.

Let’s discuss an application of cyclostationary signal processing (CSP): time-delay estimation. The idea is that sampled data is available from two antennas (sensors), and there is a common signal component in each data set. The signal component in one data set is the time-delayed or time-advanced version of the component in the other set. This can happen when a plane-wave radio frequency (RF) signal propagates and impinges on the two antennas. In such a case, the RF signal arrives at the sensors with a time difference proportional to the distance between the sensors along the direction of propagation, and so the time-delay estimation is also commonly referred to as time-difference-of-arrival (TDOA) estimation.

Figure 1. Illustration of the geometric relationship between a transmitter and two receivers in the context of time-delay estimation (or time-difference-of-arrival estimation).

Consider the diagram shown in Figure 1. A distant transmitter emits a signal that is well-modeled as a plane wave once it reaches our two receivers. An individual wavefront of the signal arrives at the two sensors at different times.

The line segment AB is perpendicular to the direction of propagation for the RF signal. The angle \theta is called the angle of arrival (AOA). If we can estimate the AOA, we can determine the direction from which the signal arrives, which could be useful in a variety of settings. Since the triangle ABC is a right triangle, we have

\displaystyle \cos (\theta) = \frac{x}{d}. \hfill (1)

When \theta = 0, the wavefronts first strike receiver 2, then must propagate over x=d meters before striking receiver 1. On the other hand, when \theta = 90^\circ, each wavefront strikes the two receivers simultaneously. In the former case, the TDOA is maximum, and in the latter it is zero. The TDOA can also be negative, so that AOAs spanning 180^\circ of azimuth can be determined by estimating the TDOA.

In general, the wavefront must traverse x meters between striking receiver 2 and striking receiver 1, where

\displaystyle x = d \cos(\theta). \hfill (2)

Assuming the speed of propagation is c meters/sec, the TDOA is given by

\displaystyle D = \frac{x}{c} = \frac{d\cos(\theta)}{c} \mbox{\rm \ \ seconds}. \hfill (3)
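Equation (3) is simple enough to encode directly. The small sketch below (Python; the separation d = 100 m and the test values are arbitrary choices of mine) maps AOA to TDOA and inverts the mapping over the 0-to-180-degree range in which the cosine is one-to-one.

```python
import numpy as np

c = 3e8      # propagation speed in m/s
d = 100.0    # sensor separation in meters (illustrative value)

def tdoa_from_aoa(theta_deg):
    """Eq. (3): delay D, in seconds, at receiver 1 relative to receiver 2."""
    return d * np.cos(np.radians(theta_deg)) / c

def aoa_from_tdoa(D):
    """Invert Eq. (3); unambiguous over 0 to 180 degrees of azimuth."""
    return np.degrees(np.arccos(np.clip(c * D / d, -1.0, 1.0)))

print(tdoa_from_aoa(0.0))       # maximum delay: d/c, about 333 ns here
print(tdoa_from_aoa(90.0))      # zero delay: broadside arrival
print(aoa_from_tdoa(1.667e-7))  # about 60 degrees for this d
```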

In this post I’ll review several methods of TDOA estimation, some of which employ CSP and some of which do not. We’ll see some of the advantages and disadvantages of the various classes of methods through inspection, simulation, and application to captured data. Consider this post as a starting point to a study or development effort rather than as a definitive performance characterization.

Continue reading “CSP-Based Time-Difference-of-Arrival Estimation”

Conjugation Configurations

Using complex-valued signal representations is convenient but also has complications: You have to consider all possible choices for conjugating different factors in a moment.

When we considered complex-valued signals and second-order statistics, we ended up with two kinds of parameters: non-conjugate and conjugate. So we have the non-conjugate autocorrelation, which is the expected value of the second-order lag product in which only one of the factors is conjugated (consistent with the usual definition of variance for complex-valued random variables),

\displaystyle R_x(t, \boldsymbol{\tau}) = E \left[ x(t+\tau_1)x^*(t+\tau_2) \right] \hfill (1)

and the conjugate autocorrelation, which is the expected value of the second-order lag product in which neither factor is conjugated

\displaystyle R_{x^*}(t, \boldsymbol{\tau}) = E \left[ x(t+\tau_1)x(t+\tau_2) \right]. \hfill (2)

The complex-valued Fourier-series amplitudes of these functions of time t are the non-conjugate and conjugate cyclic autocorrelation functions, respectively.

The Fourier transforms of the non-conjugate and conjugate cyclic autocorrelation functions are the non-conjugate and conjugate spectral correlation functions, respectively.

I never explained the fundamental reason why both the non-conjugate and conjugate functions are needed. In this post, I rectify that omission. The reason for the many different choices of conjugated factors in higher-order cyclic moments and cumulants is also provided. These choices of conjugation configurations, or conjugation patterns, also appear in the more conventional theory of higher-order statistics as applied to stationary signals.
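A quick way to see that the non-conjugate and conjugate functions carry different information is to estimate both for signals that differ only in their constellations. In the sketch below (Python/NumPy; a simple time average stands in for the expectation, and every parameter value is an illustrative choice of mine), rectangular-pulse BPSK shows a strong conjugate cyclic autocorrelation at twice the carrier frequency, while QPSK does not.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_signal(constellation, N_sym=4096, T0=8, f0=0.1):
    """Rectangular-pulse digital signal on a complex carrier (textbook model)."""
    a = rng.choice(constellation, size=N_sym)     # IID symbols
    s = np.repeat(a, T0)                          # rectangular pulses
    t = np.arange(len(s))
    return s * np.exp(2j * np.pi * f0 * t)

def conj_cyclic_autocorr(x, alpha, tau=0):
    """Time-average estimate of <x(t+tau) x(t) exp(-i 2 pi alpha t)>."""
    t = np.arange(len(x) - tau)
    return np.mean(x[t + tau] * x[t] * np.exp(-2j * np.pi * alpha * t))

f0 = 0.1
bpsk = make_signal(np.array([1.0 + 0j, -1.0 + 0j]), f0=f0)
qpsk = make_signal(np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7])), f0=f0)

# BPSK possesses the conjugate cycle frequency 2*f0; QPSK does not.
print(abs(conj_cyclic_autocorr(bpsk, alpha=2 * f0)))   # near 1
print(abs(conj_cyclic_autocorr(qpsk, alpha=2 * f0)))   # near 0
```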

Continue reading “Conjugation Configurations”

Cyclic Temporal Cumulants

Cyclic cumulants are the amplitudes of the Fourier-series components of the time-varying cumulant function for a cyclostationary signal. They degenerate to conventional cumulants when the signal is stationary.

In this post I continue the development of the theory of higher-order cyclostationarity (My Papers [5,6]) that I began here. It is largely taken from my doctoral work (download my dissertation here).

This is a long post. To make it worthwhile, I’ve placed some movies of cyclic-cumulant estimates at the end. Or just skip to the end now if you’re impatient!

In my work on cyclostationary signal processing (CSP), the most useful tools are those for estimating second-order statistics, such as the cyclic autocorrelation, spectral correlation function, and spectral coherence function. However, as we discussed in the post on Textbook Signals, there are some situations (perhaps only academic; see my question in the Textbook post) for which higher-order cyclostationarity is required. In particular, a probabilistic approach to blind modulation recognition for ideal (textbook) digital QAM, PSK, and CPM requires higher-order cyclostationarity because such signals have similar or identical spectral correlation functions and PSDs. (Other high-SNR non-probabilistic approaches can still work, such as blind constellation extraction.)

Recall that in the post introducing higher-order cyclostationarity, I mentioned that one encounters a bit of a puzzle when attempting to generalize experience with second-order cyclostationarity to higher orders. This is the puzzle of pure sine waves (My Papers [5]). Let’s look at pure and impure sine waves, and see how they lead to the probabilistic parameters widely known as cyclic cumulants.

Continue reading “Cyclic Temporal Cumulants”

SCF Estimate Quality: The Resolution Product

What factors influence the quality of a spectral correlation function estimate?

The two non-parametric spectral-correlation estimators we've looked at so far, the frequency-smoothing and time-smoothing methods, require the choice of key estimator parameters. These are the total duration of the processed data block, T, and the spectral resolution, F.

For the frequency-smoothing method (FSM), an FFT with length equal to the data-block length T is required, and the spectral resolution is equal to the width F of the smoothing function g(f). For the time-smoothing method (TSM), multiple FFTs with lengths T_{tsm} = T / K are required, and the frequency resolution is 1/T_{tsm} (in normalized frequency units).

The choice for the block length T is partially guided by practical concerns, such as computational cost and whether the signal is persistent or transient in nature, and partially by the desire to obtain a reliable (low-variance) spectral correlation estimate. The choice for the frequency (spectral) resolution is typically guided by the desire for a reliable estimate.
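As a rough illustration of how T and F enter an estimator, here is a bare-bones frequency-smoothing sketch (Python/NumPy; the rectangular smoothing window, BPSK test signal, and all numeric values are illustrative choices of mine, not code from the FSM post). The product of the block length and the smoothing width, roughly TF in normalized units, is the resolution product that governs estimate reliability.

```python
import numpy as np

def fsm_scf(x, alpha, F):
    """Bare-bones frequency-smoothing estimate of the non-conjugate spectral
    correlation function at cycle frequency alpha (normalized units): one
    length-T FFT, then rectangular smoothing of width F applied to the
    cyclic periodogram X(f + alpha/2) X^*(f - alpha/2) / T."""
    T = len(x)
    X = np.fft.fftshift(np.fft.fft(x))
    freqs = np.fft.fftshift(np.fft.fftfreq(T))
    shift = int(round(alpha * T / 2))        # alpha/2 expressed in FFT bins
    cyclic_pgram = np.roll(X, -shift) * np.conj(np.roll(X, shift)) / T
    width = max(int(round(F * T)), 1)        # smoothing width in bins
    g = np.ones(width) / width               # rectangular smoothing function g(f)
    return freqs, np.convolve(cyclic_pgram, g, mode='same')

# Example: rectangular-pulse baseband BPSK with symbol rate 1/8, so alpha = 1/8
# is a non-conjugate cycle frequency; here T = 32768 and T*F is about 328.
rng = np.random.default_rng(7)
x = np.repeat(rng.choice([-1.0, 1.0], size=4096), 8)
freqs, scf = fsm_scf(x, alpha=1/8, F=0.01)
print(np.max(np.abs(scf)))   # appreciably nonzero, peaking near f = 0
```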

Continue reading “SCF Estimate Quality: The Resolution Product”

The Spectral Coherence Function

Cross-correlation functions can be normalized to create correlation coefficients. The spectral correlation function is a cross-correlation, and its correlation coefficient is called the coherence.

In this post I introduce the spectral coherence function, or just coherence. It deserves its own post because the coherence is a useful detection statistic for blindly determining significant cycle frequencies of arbitrary data records. See the posts on the strip spectral correlation analyzer and the FFT accumulation method for examples.

Let’s start with reviewing the standard correlation coefficient \rho defined for two random variables X and Y as

\rho = \displaystyle \frac{E[(X - m_X)(Y - m_Y)]}{\sigma_X \sigma_Y}, \hfill (1)

where m_X and m_Y are the mean values of X and Y, and \sigma_X and \sigma_Y are the standard deviations of X and Y. That is,

m_X = E[X] \hfill (2)

m_Y = E[Y] \hfill (3)

\sigma_X^2 = E[(X-m_X)^2] \hfill (4)

\sigma_Y^2 = E[(Y-m_Y)^2] \hfill (5)

So the correlation coefficient is the covariance between X and Y divided by the geometric mean of the variances of X and Y.
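For concreteness, here is a tiny numerical check of (1)-(5) (Python/NumPy; the linear model tying Y to X is just an arbitrary example of mine): the sample correlation coefficient of Y = 0.8X + noise comes out near its theoretical value of 0.8.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two correlated random variables: Y = 0.8*X + independent noise (illustrative)
X = rng.standard_normal(100_000)
Y = 0.8 * X + 0.6 * rng.standard_normal(100_000)

# Sample versions of Eqs. (2)-(5), then rho from Eq. (1)
m_X, m_Y = X.mean(), Y.mean()
sigma_X, sigma_Y = X.std(), Y.std()
rho = np.mean((X - m_X) * (Y - m_Y)) / (sigma_X * sigma_Y)

print(rho)                      # near 0.8 (theory: 0.8 / sqrt(0.64 + 0.36))
print(np.corrcoef(X, Y)[0, 1])  # matches NumPy's built-in estimate
```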

Continue reading “The Spectral Coherence Function”

Introduction to Higher-Order Cyclostationarity

Why do we need or care about higher-order cyclostationarity? Because second-order cyclostationarity is insufficient for our signal-processing needs in some important cases.

We’ve seen how to define second-order cyclostationarity in the time- and frequency-domains, and we’ve looked at ideal and estimated spectral correlation functions for a synthetic rectangular-pulse BPSK signal. In future posts, we’ll look at how to create simple spectral correlation estimators, but in this post I want to introduce the topic of higher-order cyclostationarity (HOCS).  This post is more conceptual in nature; for mathematical details about HOCS, see the posts on cyclic cumulants and cyclic polyspectra. Estimators of higher-order parameters, such as cyclic cumulants and cyclic moments, are discussed in this post.

To contrast with HOCS, we’ll refer to second-order parameters such as the cyclic autocorrelation and the spectral correlation function as parameters of second-order cyclostationarity (SOCS).

The first question we might ask is Why do we care about HOCS? And one answer is that SOCS does not provide all the statistical information about a signal that we might need to perform some signal-processing task. There are two main limitations of SOCS that drive us to HOCS.

Continue reading “Introduction to Higher-Order Cyclostationarity”