Professor Jang Again Tortures CSP Mathematics Until it Breaks

In which my life is made a little harder.

We first met Professor Jang in a “Comments on the Literature” type of post from 2016. In that post, I pointed out fundamental mathematical errors contained in a paper the Professor published in the IEEE Communications Letters in 2014 (The Literature [R71]).

I have just noticed a new paper by Professor Jang, published in IEEE Access, which, like the Communications Letters, is a peer-reviewed journal. This new paper is titled “Simultaneous Power Harvesting and Cyclostationary Spectrum Sensing in Cognitive Radios” (The Literature [R144]). Many of the same errors are present in this paper. In fact, the beginning of the paper and its exposition of cyclostationary signal processing are nearly the same as in The Literature [R71].

Let’s take a look.

Continue reading “Professor Jang Again Tortures CSP Mathematics Until it Breaks”

On Impulsive Noise, CSP, and Correntropy

And I still don’t understand how a random variable with infinite variance can be a good model for anything physical. So there.

I’ve seen several published and pre-published (arXiv.org) technical papers over the past couple of years on the topic of cyclic correntropy (The Literature [R123-R127]). I first criticized such a paper ([R123]) here, but the substance of that review was about my problems with the presented mathematics, not with impulsive noise and its effects on CSP. Since the papers apparently keep coming, I’m going to put down some thoughts on impulsive noise and some evidence regarding simple means of mitigation in the context of CSP. Preview: I don’t think we need to go to the trouble of investigating cyclic correntropy as a means of salvaging CSP from the evil clutches of impulsive noise.

Continue reading “On Impulsive Noise, CSP, and Correntropy”

For the Beginner at CSP

Here is a list of links to CSP Blog posts that I think are suitable for a beginner: read them in the order given.

How to Obtain Help from the CSP Blog

Introduction to CSP

How to Create a Simple Cyclostationary Signal: Rectangular-Pulse BPSK

The Cyclic Autocorrelation Function

The Spectral Correlation Function

The Cyclic Autocorrelation for BPSK

Continue reading “For the Beginner at CSP”

MATLAB’s SSCA: commP25ssca.m

In this short post, I describe some errors that are produced by MATLAB’s strip spectral correlation analyzer function commP25ssca.m. I don’t recommend that you use it; far better to create your own function.

Continue reading “MATLAB’s SSCA: commP25ssca.m”

Comments on “Detection of Almost-Cyclostationarity: An Approach Based on a Multiple Hypothesis Test” by S. Horstmann et al

The statistics-oriented wing of electrical engineering is perpetually dazzled by [insert Revered Person]’s Theorem at the expense of, well, actual engineering.

I recently came across the conference paper in the post title (The Literature [R101]). Let’s take a look.

The paper is concerned with “detect[ing] the presence of ACS signals with unknown cycle period.” In other words, blind cyclostationary-signal detection and cycle-frequency estimation. Of particular importance to the authors is the case in which the “period of cyclostationarity” is not equal to an integer number of samples. They seem to think this is a new and difficult problem. By my lights, it isn’t. But maybe I’m missing something. Let me know in the Comments.

Continue reading “Comments on “Detection of Almost-Cyclostationarity: An Approach Based on a Multiple Hypothesis Test” by S. Horstmann et al”

CSP Estimators: The FFT Accumulation Method

An alternative to the strip spectral correlation analyzer.

Let’s look at another spectral correlation function estimator: the FFT Accumulation Method (FAM). This estimator is in the time-smoothing category, is exhaustive in that it is designed to compute estimates of the spectral correlation function over its entire principal domain, and is efficient, so that it is a competitor to the Strip Spectral Correlation Analyzer (SSCA) method. I implemented my version of the FAM by following the paper by Roberts et al. (The Literature [R4]). If you follow the equations closely, you can successfully implement the estimator from that paper. The tricky part, as with the SSCA, is correctly associating the outputs of the coded equations with their proper (f, \alpha) values.

Continue reading “CSP Estimators: The FFT Accumulation Method”

Computational Costs for Spectral Correlation Estimators

The costs strongly depend on whether you have prior cycle-frequency information or not.

Let’s look at the computational costs for spectral-correlation analysis using the three main estimators I’ve previously described on the CSP Blog: the frequency-smoothing method (FSM), the time-smoothing method (TSM), and the strip spectral correlation analyzer (SSCA).

We’ll see that the FSM and TSM are the low-cost options when estimating the spectral correlation function for a few cycle frequencies and that the SSCA is the low-cost option when estimating the spectral correlation function for many cycle frequencies. That is, the TSM and FSM are good options for directed analysis using prior information (values of cycle frequencies) and the SSCA is a good option for exhaustive blind analysis, for which there is no prior information available.

Continue reading “Computational Costs for Spectral Correlation Estimators”

Resolution in Time, Frequency, and Cycle Frequency for CSP Estimators

Unlike conventional spectrum analysis for stationary signals, which involves only two kinds of resolution, CSP involves three, and all three must be considered in every CSP application.

In this post, we look at the ability of various CSP estimators to distinguish cycle frequencies, temporal changes in cyclostationarity, and spectral features. These abilities are quantified by the resolution properties of CSP estimators.

Resolution Parameters in CSP: Preview

Consider performing some CSP estimation task, such as using the frequency-smoothing method, time-smoothing method, or strip spectral correlation analyzer method of estimating the spectral correlation function. The estimate employs T seconds of data.

Then the temporal resolution \Delta t of the estimate is approximately T, the cycle-frequency resolution \Delta \alpha is about 1/T, and the spectral resolution \Delta f depends strongly on the particular estimator and its parameters. The resolution product \Delta f \Delta t was discussed in this post. The fundamental result for the resolution product is that it must be very much larger than unity in order to obtain an SCF estimate with low variance.
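
As a concrete illustration, here is a minimal MATLAB sketch of the resolution bookkeeping; the parameter values are arbitrary examples for this sketch, not recommendations:

% Illustrative resolution bookkeeping for a spectral correlation estimate.
% All parameter values below are example choices, not recommendations.
N       = 65536;        % number of processed samples (normalized rate of 1)
T       = N;            % data-block length in samples
delta_t = T;            % temporal resolution: approximately the block length
delta_a = 1 / T;        % cycle-frequency resolution: approximately 1/T
delta_f = 0.02;         % spectral resolution: set by the estimator and its parameters
res_product = delta_f * delta_t;   % must be >> 1 for a low-variance SCF estimate
fprintf('delta_t = %g, delta_alpha = %g, delta_f = %g, product = %g\n', ...
    delta_t, delta_a, delta_f, res_product);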

Continue reading “Resolution in Time, Frequency, and Cycle Frequency for CSP Estimators”

Automatic Spectral Segmentation

Radio-frequency scene analysis is much more complex than modulation recognition. A good first step is to blindly identify the frequency intervals for which significant non-noise energy exists.

In this post, I discuss a signal-processing algorithm that has almost nothing to do with cyclostationary signal processing (CSP). Almost. The topic is automatic spectral segmentation, which I also call band-of-interest (BOI) detection. When attempting to perform automatic radio-frequency scene analysis (RFSA), we may be confronted with a data block that contains multiple signals in a number of distinct frequency subbands. Moreover, these signals may be turning on and off within the data block. To apply our cyclostationary signal processing tools effectively, we would like to isolate these signals in time and frequency to the greatest extent possible using linear time-invariant filtering (for separating in the frequency dimension) and time-gating (for separating in the time dimension). Then the isolated signal components can be processed serially using CSP.

It is very important to remember that even perfect spectral and temporal segmentation will not solve the cochannel-signal problem. It is perfectly possible that an isolated subband will contain more than one cochannel signal.

The basics of my BOI-detection approach are published in a 2007 conference paper (My Papers [32]). I’ll describe this basic approach, illustrate it with examples relevant to RFSA, and also provide a few extensions of interest, including one that relates to cyclostationary signal processing.

Continue reading “Automatic Spectral Segmentation”

Comments on “Blind Cyclostationary Spectrum Sensing in Cognitive Radios” by W. M. Jang

We are all susceptible to using bad mathematics to get us where we want to go. Here is an example.

I recently came across the 2014 paper in the title of this post. I mentioned it briefly in the post on the periodogram. But I’m going to talk about it a bit more here, because this is the kind of thing that makes learning about cyclostationarity harder, and that eventually creates the need for a corrective like the CSP Blog.

The idea behind the paper is that it would be nice to avoid the need for prior knowledge of cycle frequencies when using cycle detectors or the like. If you could just compute the entire spectral correlation function, then collapse it by integrating (summing) over frequency f, then you’d have a one-dimensional function of cycle frequency \alpha and you could then process that function inexpensively to perform detection and classification tasks.
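
The collapsing operation itself is easy to write down. Here is a minimal MATLAB sketch, in which scf and alphas are hypothetical placeholders for an SCF-estimate array and its cycle-frequency axis:

% Hypothetical collapse of a spectral correlation surface onto the cycle-
% frequency axis. 'scf' stands in for an (Nf x Nalpha) array of SCF estimates
% indexed by spectral frequency (rows) and cycle frequency (columns).
scf    = randn(256, 101) + 1i*randn(256, 101);   % placeholder SCF estimates
alphas = linspace(-0.5, 0.5, 101);               % placeholder cycle frequencies
alpha_profile = sum(abs(scf), 1);                % integrate (sum) magnitude over f
plot(alphas, alpha_profile);
xlabel('Cycle frequency \alpha'); ylabel('Collapsed SCF magnitude');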

Continue reading “Comments on “Blind Cyclostationary Spectrum Sensing in Cognitive Radios” by W. M. Jang”

The Periodogram

The periodogram is the scaled magnitude-squared finite-time Fourier transform of something. It is as random as its input–it never converges to the power spectrum.

I’ve been reviewing a lot of technical papers lately and I’m noticing that it is becoming common to assert that the limiting form of the periodogram is the power spectral density or that the limiting form of the cyclic periodogram is the spectral correlation function. This isn’t true. These functions do not become, in general, less random (erratic) as the amount of data that is processed increases without limit. On the contrary, they always have large variance. Some form of averaging (temporal or spectral) is needed to permit the periodogram to converge to the power spectrum or the cyclic periodogram to converge to the spectral correlation function (SCF).

In particular, I’ve been seeing things like this:

\displaystyle S_x^\alpha(f) = \lim_{T\rightarrow\infty} \frac{1}{T} X_T(f+\alpha/2) X_T^*(f-\alpha/2), \hfill (1)

where X_T(f+\alpha/2) is the Fourier transform of x(t) on t \in [-T/2, T/2]. In other words, the usual cyclic periodogram we talk about here on the CSP Blog. See, for example, The Literature [R71], Equation (3).
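
To see the non-convergence concretely, here is a small MATLAB illustration (a sketch, not a proof): the relative fluctuation of a single white-noise periodogram does not shrink as the record grows, whereas averaging periodograms over subblocks does shrink it.

% Illustration: the periodogram of white noise stays erratic no matter how
% long the record, but averaging periodograms over subblocks (Bartlett-style
% time smoothing) drives the estimate toward the flat power spectrum.
N  = 2^16;                                    % total number of samples
x  = (randn(N,1) + 1i*randn(N,1))/sqrt(2);    % unit-power complex white noise

P_single = abs(fft(x)).^2 / N;                % single periodogram: its variance
                                              % does not shrink as N grows
K  = 64;                                      % number of subblocks to average
Nb = N / K;
xb = reshape(x, Nb, K);
P_avg = mean(abs(fft(xb)).^2 / Nb, 2);        % averaged: variance shrinks ~ 1/K

fprintf('std/mean, single periodogram: %.2f\n', std(P_single)/mean(P_single));
fprintf('std/mean, %d-fold average   : %.2f\n', K, std(P_avg)/mean(P_avg));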

Continue reading “The Periodogram”

100-MHz Amplitude Modulation? Comments on “Sub-Nyquist Cyclostationary Detection for Cognitive Radio” by Cohen and Eldar

I came across a paper by Cohen and Eldar, researchers at the Technion in Israel. You can get the paper on the arXiv site here. The title is “Sub-Nyquist Cyclostationary Detection for Cognitive Radio,” and the setting is spectrum sensing for cognitive radio. I have a question about the paper that I’ll ask below.

Continue reading “100-MHz Amplitude Modulation? Comments on “Sub-Nyquist Cyclostationary Detection for Cognitive Radio” by Cohen and Eldar”

Radio-Frequency Scene Analysis

Modulation recognition is one thing, holistic radio-frequency scene analysis is quite another.

So why do I obsess over cyclostationary signals and cyclostationary signal processing? What’s the big deal, in the end? In this post I discuss my view of the ultimate use of cyclostationary signal processing (CSP): Radio-Frequency Scene Analysis (RFSA). Eventually, I hope to create a kind of Star Trek Tricorder for RFSA.

Continue reading “Radio-Frequency Scene Analysis”

Square-Root Raised-Cosine PSK/QAM

SRRC PSK and QAM signals form important components of more complicated real-world communication signals. Let’s look at their second-order cyclostationarity here.

Let’s look at a somewhat more realistic textbook signal: the PSK/QAM signal with independent and identically distributed (IID) symbols and a square-root raised-cosine (SRRC) pulse function. The SRRC pulse is used in many practical systems and in many theoretical and simulation studies. In this post, we’ll look at how the free parameter of the pulse function, called the roll-off parameter or excess bandwidth parameter, affects the power spectrum and the spectral correlation function.
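
For readers who want to reproduce such a signal, here is a minimal MATLAB sketch; the parameter values are illustrative, and rcosdesign, upsample, and pwelch are assumed to be available from installed toolboxes (substitute your own SRRC pulse routine if they are not):

% Minimal sketch of an SRRC QPSK signal; all parameter values are examples.
sps  = 10;      % samples per symbol (symbol rate 0.1 in normalized units)
beta = 0.35;    % roll-off (excess bandwidth) parameter
span = 12;      % pulse length in symbols
Nsym = 8192;    % number of IID symbols

p = rcosdesign(beta, span, sps, 'sqrt');                 % SRRC pulse
a = (2*randi([0 1],Nsym,1)-1 + 1i*(2*randi([0 1],Nsym,1)-1))/sqrt(2);  % IID QPSK
x = conv(upsample(a, sps), p);                           % pulse-shaped signal

% Quick look at the power spectrum to see how beta controls the bandwidth.
pwelch(x, hann(1024), 512, 1024, 1.0, 'centered');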

Continue reading “Square-Root Raised-Cosine PSK/QAM”

CSP Estimators: The Strip Spectral Correlation Analyzer

The SSCA is a good tool for blind (no prior information) exhaustive (all cycle frequencies) spectral correlation analysis. An alternative is the FFT accumulation method.

In this post I present a very useful blind cycle-frequency estimator known in the literature as the strip spectral correlation analyzer (SSCA) (The Literature [R3-R5]). We’ve covered the basics of the frequency-smoothing method (FSM) and the time-smoothing method (TSM) of estimating the spectral correlation function (SCF) in previous posts. The TSM and FSM are efficient estimators of the SCF when it is desired to estimate it for one or a few cycle frequencies (CFs). The SSCA, on the other hand, is efficient when we want to estimate the SCF for all CFs.

See also an alternate method of efficient exhaustive SCF estimation: The FFT Accumulation Method.

Continue reading “CSP Estimators: The Strip Spectral Correlation Analyzer”

Second-Order Estimator Verification Guide

Use this post to help check the accuracy of your second-order CSP estimators.

In this post I provide some tools for the do-it-yourself CSP practitioner. One of the goals of this blog is to help new CSP researchers and students to write their own estimators and algorithms. This post contains some spectral correlation function and cyclic autocorrelation function estimates and numerically evaluated formulas that can be compared to those produced by anybody’s code.

The signal of interest is, of course, our rectangular-pulse BPSK signal with symbol rate 0.1 (normalized frequency units) and carrier offset 0.05. You can download a MATLAB script for creating such a signal here.
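
If you prefer not to download the script, here is a minimal sketch of the signal-generation idea (an illustration only; the linked script remains the reference version):

% Minimal sketch of a rectangular-pulse BPSK signal with symbol rate 0.1 and
% carrier offset 0.05 (normalized units); noise can be added as desired.
T0   = 10;                                  % samples per symbol (rate 1/T0 = 0.1)
fc   = 0.05;                                % carrier frequency offset
Nsym = 6554;                                % roughly 65536 samples in total

bits = 2*randi([0 1], Nsym, 1) - 1;         % IID +/-1 symbols
s    = kron(bits, ones(T0, 1));             % rectangular pulse shaping
n    = (0:length(s)-1).';
x    = s .* exp(1i*2*pi*fc*n);              % apply the carrier offset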

The formula for the SCF for a textbook BPSK signal is published in several places (The Literature [R47], My Papers [6]) and depends mainly on the Fourier transform of the pulse function used by the textbook signal.

We’ll compare the numerically evaluated spectral correlation formula with estimates produced by my version of the frequency-smoothing method (FSM). The FSM estimates and the theoretical functions are contained in a MATLAB mat file here. (I had to change the extension of the mat file from .mat to .doc to allow posting it to WordPress–change it back after downloading.) In all the results shown here and that you can download, the processed data-block length is 65536 samples and the FSM smoothing width is 0.02 Hz. A rectangular smoothing window is used. For all cycle frequencies except zero (non-conjugate), a zero-padding factor of two is used in the FSM.

For the cyclic autocorrelation, we provide estimates using two methods: inverse Fourier transformation of the spectral correlation estimate and direct averaging of the second-order lag product in the time domain.

Continue reading “Second-Order Estimator Verification Guide”

A Gallery of Spectral Correlation

Pictures are worth N words, and M equations, where N and M are large integers.

In this post I provide plots of the spectral correlation for a variety of simulated textbook signals and several captured communication signals. The plots show the variety of cycle-frequency patterns that arise from the disparate approaches to digital communication signaling. The distinguishability of these patterns, combined with the inability to distinguish based on the power spectrum, leads to a powerful set of classification (modulation recognition) features (My Papers [16, 25, 26, 28]).

In all cases, the cycle frequencies are blindly estimated by the strip spectral correlation analyzer (The Literature [R3, R4]), and the estimates are then used by the FSM to compute the spectral correlation function. MATLAB is used to plot the magnitude of the spectral correlation and conjugate spectral correlation, as specified by the determined non-conjugate and conjugate cycle frequencies.

There are three categories of signal types in this gallery: textbook signals, captured signals, and feature-rich signals. The latter comprises some captured signals (e.g., LTE) and some simulated radar signals. For the first two signal categories, the three-dimensional surface plots I’ve been using will suffice for illustrating the cycle-frequency patterns and the behavior of the spectral correlation function over frequency. But for the last category, the number of cycle frequencies is so large that the three-dimensional surface is difficult to interpret–it is a visual mess. For these signals, I’ll plot the maximum spectral correlation magnitude over spectral frequency f versus the detected cycle frequency \alpha (as in this post).

A complementary gallery of cyclic autocorrelation functions can be found here.

Continue reading “A Gallery of Spectral Correlation”

SCF Estimate Quality: The Resolution Product

What factors influence the quality of a spectral correlation function estimate?

The two non-parametric spectral-correlation estimators we’ve looked at so far–the frequency-smoothing and time-smoothing methods–require the choice of key estimator parameters. These are the total duration of the processed data block, T, and the spectral resolution F.

For the frequency-smoothing method (FSM), an FFT with length equal to the data-block length T is required, and the spectral resolution is equal to the width F of the smoothing function g(f). For the time-smoothing method (TSM), multiple FFTs with lengths T_{tsm} = T / K are required, and the frequency resolution is 1/T_{tsm} (in normalized frequency units).

The choice for the block length T is partially guided by practical concerns, such as computational cost and whether the signal is persistent or transient in nature, and partially by the desire to obtain a reliable (low-variance) spectral correlation estimate. The choice for the frequency (spectral) resolution is typically guided by the desire for a reliable estimate.
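
For a rough numerical illustration, using the block length and FSM smoothing width quoted in the estimator-verification post (65536 samples and 0.02 Hz), the resolution product is

\Delta t \Delta f = (65536)(0.02) \approx 1311 \gg 1,

which is comfortably in the low-variance regime.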

Continue reading “SCF Estimate Quality: The Resolution Product”

The Spectral Coherence Function

Cross-correlation functions can be normalized to create correlation coefficients. The spectral correlation function is a cross-correlation, and its correlation coefficient is called the coherence.

In this post I introduce the spectral coherence function, or just coherence. It deserves its own post because the coherence is a useful detection statistic for blindly determining significant cycle frequencies of arbitrary data records. See the posts on the strip spectral correlation analyzer and the FFT accumulation method for examples.

Let’s start with reviewing the standard correlation coefficient \rho defined for two random variables X and Y as

\rho = \displaystyle \frac{E[(X - m_X)(Y - m_Y)]}{\sigma_X \sigma_Y}, \hfill (1)

where m_X and m_Y are the mean values of X and Y, and \sigma_X and \sigma_Y are the standard deviations of X and Y. That is,

m_X = E[X] \hfill (2)

m_Y = E[Y] \hfill (3)

\sigma_X^2 = E[(X-m_X)^2] \hfill (4)

\sigma_Y^2 = E[(Y-m_Y)^2] \hfill (5)

So the correlation coefficient is the covariance between X and Y divided by the geometric mean of the variances of X and Y.
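
The coherence applies this same normalization to the spectral correlation function, with the power spectrum values at the two frequencies involved playing the role of the variances. In the notation used on the CSP Blog, the non-conjugate coherence takes the form

C_x^\alpha(f) = \displaystyle \frac{S_x^\alpha(f)}{\left[ S_x^0(f+\alpha/2) S_x^0(f-\alpha/2) \right]^{1/2}},

with the estimator details and the conjugate version covered in the full post.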

Continue reading “The Spectral Coherence Function”

CSP Estimators: The Time Smoothing Method

The non-blind spectral-correlation estimator called the TSM is favored when one wishes to avoid long FFTs.

In a previous post, we introduced the frequency-smoothing method (FSM) of spectral correlation function (SCF) estimation. The FSM convolves a pulse-like smoothing window g(f) with the cyclic periodogram to form an estimate of the SCF. An advantage of the method is that it allows fine control over the spectral resolution of the SCF estimate through the choice of g(f), but the drawbacks are that it requires a Fourier transform as long as the data record undergoing processing, and that the convolution can be expensive. However, the expense of the convolution can be mitigated by using a rectangular g(f).

In this post, we introduce the time-smoothing method (TSM) of SCF estimation. Instead of averaging (smoothing) the cyclic periodogram over spectral frequency, multiple cyclic periodograms are averaged over time. When the non-conjugate cycle frequency of zero is used, this method produces an estimate of the power spectral density and is essentially the Bartlett method of spectrum estimation. The TSM can be found in My Papers [6] (Eq. (54)) and in other places in the literature.
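
To make the averaging structure concrete, here is a minimal MATLAB sketch for a single cycle frequency; it is only an illustration (no tapering, no overlap, simplified frequency-axis handling), not the estimator as developed in the rest of the post:

% Minimal time-smoothing sketch for one cycle frequency 'alpha' (normalized
% units, Nb even). Cyclic periodograms from K subblocks are averaged in time.
function [Sx, f] = tsm_sketch(x, alpha, Nb)
    x  = x(:);
    K  = floor(length(x)/Nb);                 % number of subblocks to average
    n  = (0:Nb-1).';
    Sx = zeros(Nb, 1);
    for k = 1:K
        xk = x((k-1)*Nb + (1:Nb));
        Xu = fft(xk .* exp(-1i*pi*alpha*n));  % X_k(f + alpha/2), shifted in time domain
        Xd = fft(xk .* exp(+1i*pi*alpha*n));  % X_k(f - alpha/2)
        Sx = Sx + Xu .* conj(Xd) / Nb;        % accumulate cyclic periodograms
    end
    Sx = fftshift(Sx) / K;                    % time-averaged SCF estimate
    f  = (-Nb/2:Nb/2-1).' / Nb;               % centered frequency axis
end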

Continue reading “CSP Estimators: The Time Smoothing Method”