All BPSK Signals

This is another post about machine learning (ML) and modulation recognition (MR). Previously we looked at the basic idea of MR and why it is a difficult signal-processing problem to solve. We also looked at several papers in the engineering literature that apply neural-network-based ML processing to the MR problem. Finally, I posted a large simulated communication-signal dataset to the CSP Blog as a challenge to the Machine Learners, together with a corresponding set of processing results I obtained by applying non-machine-learning CSP-based MR and parameter-estimation algorithms to the posted data set.

In this post, I want to point out the kinds of data sets that are used in various modulation-recognition ML papers and ask questions about their fidelity, appropriateness, and utility. I’ve been especially puzzled to read the common refrain about how ML algorithms produce performance better than “conventional methods.” Rarely are these conventional methods described in any detail, and when details are provided they are garbled or insufficient.  What is most curious, though, is that the training and testing data sets used in the ML MR papers are narrow in scope, yet conventional methods of MR are often not narrow in scope. If the ML MR algorithm is trained and tested on a set of modulated signals all having symbol rate 40 kHz (this happens; keep reading), what is the corresponding conventional MR method with which to compare? Is it a generic method that is provided prior information on the known rate? Or is it some highly specialized conventional method that is derived or built from the ground up using that prior information about the symbol rate? What are they talking about when they claim superiority to the conventional method in such cases?

So I think there is a sort of gap between what the Machine Learners think of as modulation recognition and what the “conventional” researchers and practitioners mean. When I say I can recognize a BPSK signal, I mean I can recognize all BPSK signals.

Continue reading “All BPSK Signals”

Professor Jang Again Tortures CSP Mathematics Until it Breaks

We first met Professor Jang in a “Comments on the Literature” type of post from 2016. In that post, I pointed out fundamental mathematical errors contained in a paper the Professor published in the IEEE Communications Letters in 2014 (The Literature [R71]).

I have just noticed a new paper by Professor Jang, published in IEEE Access, which, like the Communications Letters, is a peer-reviewed journal. This new paper is titled “Simultaneous Power Harvesting and Cyclostationary Spectrum Sensing in Cognitive Radios” (The Literature [R144]). Many of the same errors are present in this paper. In fact, the beginning of the paper, including the exposition on cyclostationary signal processing, is nearly the same as in The Literature [R71].

Let’s take a look.

Continue reading “Professor Jang Again Tortures CSP Mathematics Until it Breaks”

Symmetries of Second-Order Probabilistic Parameters in CSP

As you progress through the various stages of learning CSP (intimidation, frustration, elucidation, puzzlement, and finally smooth operation), the symmetries of the various functions come up over and over again. Exploiting symmetries can result in lower computational costs, quicker debugging, and easier mathematical development.

What exactly do we mean by ‘symmetries of parameters?’ I’m talking primarily about the evenness or oddness of the time-domain functions in the delay \tau and cycle frequency \alpha variables and of the frequency-domain functions in the spectral frequency f and cycle frequency \alpha variables. Or a generalized version of evenness/oddness, such as f(-x) = g(x), where f(x) and g(x) are closely related functions. We have to consider the non-conjugate and conjugate functions separately, and we’ll also consider both the auto and cross versions of the parameters. We’ll look at higher-order cyclic moments and cumulants in a future post.

You can use this post as a resource for mathematical development because I present the symmetry equations. But each symmetry result is also illustrated with estimated parameters produced by the frequency-smoothing method (FSM) of spectral correlation function estimation. The time-domain parameters are obtained from the inverse transforms of the FSM parameters. So you can also use this post as an extension of the second-order verification guide to ensure that your estimator works for a wide variety of input parameters.
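As a tiny example of the kind of symmetry I mean, consider the non-conjugate cyclic autocorrelation with the symmetric lag convention, for which R_x^\alpha(\tau) = \left[ R_x^{-\alpha}(-\tau) \right]^*. Here is a quick numerical check (my own sketch, not one of the FSM-based illustrations in the post):

```matlab
% quick check of R_x^{alpha}(tau) = conj(R_x^{-alpha}(-tau)) for a noisy BPSK signal
N = 65536; sps = 8; n = 0:N-1;
x = kron(2*randi([0 1], 1, N/sps) - 1, ones(1, sps)) .* exp(1i*2*pi*0.05*n);
x = x + 0.1*(randn(1, N) + 1i*randn(1, N))/sqrt(2);

alpha = 1/sps; tau = 4; h = tau/2;      % an even lag so the symmetric definition lands on samples
t = (1+h):(N-h);                        % valid center times
R_pos = mean( x(t+h) .* conj(x(t-h)) .* exp(-1i*2*pi*alpha*(t-1)) );     % R_x^{+alpha}(+tau)
R_neg = mean( x(t-h) .* conj(x(t+h)) .* exp(-1i*2*pi*(-alpha)*(t-1)) );  % R_x^{-alpha}(-tau)
[R_pos  conj(R_neg)]                    % equal to machine precision: the symmetry is inherited by the estimate
```

This is part of the payoff: a symmetry that holds for the limit functions also holds for a properly constructed estimate, so you get debugging checks and computational savings for free.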

Continue reading “Symmetries of Second-Order Probabilistic Parameters in CSP”

On Impulsive Noise, CSP, and Correntropy

I’ve seen several published and pre-published (arXiv.org) technical papers over the past couple of years on the topic of cyclic correntropy (The Literature [R123-R127]). I first criticized such a paper ([R123]) here, but the substance of that review was about my problems with the presented mathematics, not impulsive noise and its effects on CSP. Since the papers apparently keep coming, I’m going to put down some thoughts on impulsive noise and some evidence regarding simple means of mitigating it in the context of CSP. Preview: I don’t think we need to go to the trouble of investigating cyclic correntropy as a means of salvaging CSP from the clutches of impulsive noise.
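To be concrete about what I mean by “simple means of mitigation,” here is one candidate, sketched with assumed parameters: clip or blank samples whose magnitudes greatly exceed a robust amplitude threshold before doing any CSP. This is my own toy illustration, not necessarily the evidence presented in the post.

```matlab
% BPSK plus sparse strong impulses; clip or blank the outliers, then measure a cyclic feature
N = 32768; sps = 8; n = 0:N-1;
x = kron(2*randi([0 1], 1, N/sps) - 1, ones(1, sps)) .* exp(1i*2*pi*0.1*n);  % BPSK on a carrier
x = x + 0.1*(randn(1, N) + 1i*randn(1, N))/sqrt(2);                          % mild Gaussian noise
spikes = (rand(1, N) < 0.02) .* (50*(randn(1, N) + 1i*randn(1, N)));         % sparse, strong impulses
x_imp = x + spikes;

thresh  = 5 * median(abs(x_imp));                  % crude robust amplitude threshold (assumed factor)
x_clip  = x_imp .* min(1, thresh ./ abs(x_imp));   % soft-limit the outliers...
x_blank = x_imp .* (abs(x_imp) <= thresh);         % ...or just zero them out

% strength of the non-conjugate symbol-rate feature at a half-symbol lag
tau = sps/2;
stat = @(z) abs(mean(z(1+tau:end) .* conj(z(1:end-tau)) .* exp(-1i*2*pi*(0:numel(z)-tau-1)/sps)));
[stat(x) stat(x_imp) stat(x_clip) stat(x_blank)]   % clipped/blanked values track the clean one;
                                                   % the unmitigated value is erratic (rerun to see)
```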

Continue reading “On Impulsive Noise, CSP, and Correntropy”

For the Beginner at CSP

Here is a list of links to CSP Blog posts that I think are suitable for a beginner: read them in the order given.

How to Obtain Help from the CSP Blog

Introduction to CSP

How to Create a Simple Cyclostationary Signal: Rectangular-Pulse BPSK

The Cyclic Autocorrelation Function

The Spectral Correlation Function

The Cyclic Autocorrelation for BPSK

Continue reading “For the Beginner at CSP”

Simple Synchronization Using CSP

In this post I discuss the use of cyclostationary signal processing applied to communication-signal synchronization problems. First, just what are synchronization problems? Synchronize and synchronization have multiple meanings, but the meaning of synchronize that is relevant here is something like:

syn·chro·nize: To cause to occur or operate with exact coincidence in time or rate

If we have an analog amplitude-modulated (AM) signal (such as voice AM used in the AM broadcast bands) at a receiver we want to remove the effects of the carrier sine wave, resulting in an output that is only the original voice or music message. If we have a digital signal such as binary phase-shift keying (BPSK), we want to remove the effects of the carrier but also sample the message signal at the correct instants to optimally recover the transmitted bit sequence. 
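As a taste of how cyclostationarity enters, here is a minimal sketch for rectangular-pulse BPSK with assumed parameters: squaring wipes out the binary modulation and leaves a sine wave at twice the carrier frequency, while a half-symbol-lag delay product contains a sine wave at the symbol rate. This is a toy illustration of the flavor of the problem, not necessarily the methods discussed in the post.

```matlab
% crude carrier- and symbol-rate estimation for rectangular-pulse BPSK via nonlinearities
N = 16384; sps = 8; fc = 0.12; n = 0:N-1;
bits = 2*randi([0 1], 1, N/sps) - 1;
x = kron(bits, ones(1, sps)) .* exp(1i*2*pi*fc*n);     % rect-pulse BPSK on a carrier
x = x + 0.1*(randn(1, N) + 1i*randn(1, N))/sqrt(2);

% carrier: squaring removes the +/-1 modulation, leaving a tone at twice the carrier
[~, k] = max(abs(fft(x.^2)));
fc_hat = (k - 1)/(2*N)                                 % close to 0.12 (aliasing ambiguities ignored)

% symbol rate: a half-symbol-lag delay product contains a tone at the symbol rate
tau = sps/2;
y = x(1+tau:end) .* conj(x(1:end-tau));
Y = abs(fft(y - mean(y)));
[~, m] = max(Y(2:floor(end/2)));
rate_hat = m/length(y)                                 % close to 1/8 = 0.125
```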

Continue reading “Simple Synchronization Using CSP”

MATLAB’s SSCA: commP25ssca.m

In this short post, I describe some errors that are produced by MATLAB’s strip spectral correlation analyzer function commP25ssca.m. I don’t recommend that you use it; far better to create your own function.

Continue reading “MATLAB’s SSCA: commP25ssca.m”

Comments on “Detection of Almost-Cyclostationarity: An Approach Based on a Multiple Hypothesis Test” by S. Horstmann et al

I recently came across the conference paper in the post title (The Literature [R101]). Let’s take a look.

The paper is concerned with “detect[ing] the presence of ACS signals with unknown cycle period.” In other words, blind cyclostationary-signal detection and cycle-frequency estimation. Of particular importance to the authors is the case in which the “period of cyclostationarity” is not equal to an integer number of samples. They seem to think this is a new and difficult problem. By my lights, it isn’t. But maybe I’m missing something. Let me know in the Comments.
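To see why I’m skeptical that a non-integer-sample cycle period is a special difficulty, here is a toy blind estimate with assumed parameters (my own sketch, not the paper’s method, and not necessarily how I argue it in the full post): the cycle frequency of a lag product can be located off the FFT grid simply by zero padding or interpolating around the peak.

```matlab
% rectangular-pulse BPSK whose symbol period is NOT an integer number of samples
T0 = 7.37;                                    % symbol period in samples
N = 65536;
bits = 2*randi([0 1], 1, ceil(N/T0) + 1) - 1;
x = bits(floor((0:N-1)/T0) + 1) + 0.1*randn(1, N);

tau = 3;
y = x(1+tau:end) .* x(1:end-tau);             % lag product (real signal, so no conjugate needed)
Y = abs(fft(y - mean(y), 4*numel(y)));        % zero-pad for a finer cycle-frequency grid
[~, k] = max(Y(2:floor(end/2)));
alpha_hat = k/(4*numel(y))                    % close to 1/T0 = 0.1357..., even though 1/T0 is off the natural FFT grid
```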

Continue reading “Comments on “Detection of Almost-Cyclostationarity: An Approach Based on a Multiple Hypothesis Test” by S. Horstmann et al”

CSP Estimators: The FFT Accumulation Method

Let’s look at another spectral correlation function estimator: the FFT Accumulation Method (FAM). This estimator is in the time-smoothing category, is exhaustive in that it is designed to compute estimates of the spectral correlation function over its entire principal domain, and is efficient, so that it is a competitor to the Strip Spectral Correlation Analyzer (SSCA) method. I implemented my version of the FAM by using the paper by Roberts et al (The Literature [R4]). If you follow the equations closely, you can successfully implement the estimator from that paper. The tricky part, as with the SSCA, is correctly associating the outputs of the coded equations to their proper \displaystyle (f, \alpha) values.
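To give a feel for the structure (channelize, multiply channel pairs, then take a second FFT across block time for fine cycle-frequency resolution), here is a bare-bones sketch for a single channel pair, with assumed parameters. It is not a drop-in implementation of the estimator in [R4], and the (f, \alpha) bookkeeping that causes the trouble is only indicated in the comments.

```matlab
% skeletal FAM flow for one channel pair (hedged sketch, not a complete implementation)
N = 65536; sps = 8;
bits = 2*randi([0 1], 1, N/sps) - 1;
x = kron(bits, ones(1, sps)) + 0.1*randn(1, N);     % rectangular-pulse BPSK plus noise

Np = 64; L = Np/4;                                  % channelizer length and hop (decimation)
P  = floor((N - Np)/L) + 1;                         % number of channelizer output samples
win = 0.54 - 0.46*cos(2*pi*(0:Np-1)/(Np-1));        % Hamming window
XT = zeros(P, Np);
for p = 0:P-1
    seg = x(p*L + (1:Np)) .* win;
    XT(p+1, :) = fftshift(fft(seg));                % Np channelizer outputs at hop p
end
fk = (-Np/2:Np/2-1)/Np;                             % channel center frequencies
XT = XT .* exp(-1i*2*pi*((0:P-1)'*L)*fk);           % refer each channel to absolute time (complex demodulates)

% one channel pair: difference frequency 8/64 = 1/8, the BPSK symbol rate
k1 = Np/2 + 1 + 4;  k2 = Np/2 + 1 - 4;
strip = fftshift(fft(XT(:, k1) .* conj(XT(:, k2)))) / P;
% Roughly, bin offset q of 'strip' corresponds to alpha = (fk(k1) - fk(k2)) + q/(P*L)
% at spectral frequency f = (fk(k1) + fk(k2))/2; a peak near the center (q = 0) bin
% reflects the symbol-rate feature at f = 0. See [R4] for the exact mapping.
```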

Continue reading “CSP Estimators: The FFT Accumulation Method”

Resolution in Time, Frequency, and Cycle Frequency for CSP Estimators

In this post, we look at the ability of various CSP estimators to distinguish cycle frequencies, temporal changes in cyclostationarity, and spectral features. These abilities are quantified by the resolution properties of CSP estimators.

Resolution Parameters in CSP: Preview

Consider performing some CSP estimation task, such as using the frequency-smoothing method, time-smoothing method, or strip spectral correlation analyzer method of estimating the spectral correlation function. The estimate employs T seconds of data.

Then the temporal resolution \Delta t of the estimate is approximately T, the cycle-frequency resolution \Delta \alpha is about 1/T, and the spectral resolution \Delta f depends strongly on the particular estimator and its parameters. The resolution product \Delta f \Delta t was discussed in this post. The fundamental result for the resolution product is that it must be very much larger than unity in order to obtain an SCF estimate with low variance.
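As a concrete numerical example with assumed numbers (not taken from the post), suppose we process one second of data sampled at 1 MHz with the FSM and smooth over 10 kHz:

```matlab
% resolution parameters for an assumed FSM setup
fs = 1e6;  T = 1.0;  N = fs*T;   % 1e6 samples processed
dt     = T;                      % temporal resolution: about the block length (1 s)
dalpha = 1/T;                    % cycle-frequency resolution: about 1 Hz
df     = 10e3;                   % spectral resolution: the chosen FSM smoothing-window width
df*dt                            % resolution product = 1e4 >> 1, so the SCF estimate has low variance
```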

Continue reading “Resolution in Time, Frequency, and Cycle Frequency for CSP Estimators”

Automatic Spectral Segmentation

In this post, I discuss a signal-processing algorithm that has almost nothing to do with cyclostationary signal processing. Almost. The topic is automated spectral segmentation, which I also call band-of-interest (BOI) detection. When attempting to perform automatic radio-frequency scene analysis (RFSA), we may be confronted with a data block that contains multiple signals in a large number of distinct frequency subbands. Moreover, these signals may be turning on and off within the data block. To apply our cyclostationary signal processing tools effectively, we would like to isolate these signals in time and frequency to the greatest extent possible using linear time-invariant filtering (for separating in the frequency dimension) and time-gating (for separating in the time dimension). Then the isolated signal components can be processed serially.

It is very important to remember that even perfect spectral and temporal segmentation will not solve the cochannel-signal problem. It is perfectly possible that an isolated subband will contain more than one cochannel signal.

The basics of my BOI-detection approach are published in a 2007 conference paper (My Papers [32]). I’ll describe this basic approach, illustrate it with examples relevant to RFSA, and also provide a few extensions of interest, including one that relates to cyclostationary signal processing.
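For flavor, here is a crude threshold-based BOI detector. It is not the algorithm of My Papers [32]; it is a baseline sketch with assumed parameters (averaged periodograms, a median-based noise floor, and a fixed threshold, with no windowing or overlap).

```matlab
% crude band-of-interest detection: two signals at different carriers plus noise
N = 262144; n = 0:N-1;
s1 = kron(2*randi([0 1], 1, N/8)  - 1, ones(1, 8))  .* exp(1i*2*pi*0.10*n);
s2 = kron(2*randi([0 1], 1, N/16) - 1, ones(1, 16)) .* exp(1i*2*pi*(-0.22)*n);
x = s1 + 0.5*s2 + 0.2*(randn(1, N) + 1i*randn(1, N))/sqrt(2);

Nfft = 1024; K = floor(length(x)/Nfft);
P = zeros(1, Nfft);
for k = 1:K
    seg = x((k-1)*Nfft + (1:Nfft));
    P = P + abs(fftshift(fft(seg))).^2 / Nfft;       % averaged periodograms (crude PSD estimate)
end
P = P / K;
floor_est = median(P);                               % crude robust noise-floor estimate
mask = P > 4*floor_est;                              % ~6 dB above the floor (assumed threshold)
d = diff([0 mask 0]);
band_starts = find(d == 1);  band_stops = find(d == -1) - 1;   % contiguous super-threshold runs
freqs = (-Nfft/2:Nfft/2-1)/Nfft;                     % band edges: freqs(band_starts), freqs(band_stops)
```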

Continue reading “Automatic Spectral Segmentation”

Cyclostationarity of Direct-Sequence Spread-Spectrum Signals

In this post we look at direct-sequence spread-spectrum (DSSS) signals, which can be usefully modeled as a kind of PSK signal. DSSS signals are used in a variety of real-world situations, including the familiar CDMA and WCDMA signals, covert signaling, and GPS. My colleague Antonio Napolitano has done some work on a large class of DSSS signals (The Literature [R11, R17, R95]), resulting in formulas for their spectral correlation functions, and I’ve made some remarks about their cyclostationary properties myself here and there (My Papers [16]).

From the point of view of modulation recognition, a good thing about DSSS signals is that they are easily distinguished from other PSK and QAM signals by their spectral correlation functions. Whereas most PSK/QAM signals have only a single non-conjugate cycle frequency, and no conjugate cycle frequencies, DSSS signals have many non-conjugate cycle frequencies and in some cases also have many conjugate cycle frequencies.
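Here is a quick numerical way to see the many-cycle-frequency property. This is my own toy model (a short spreading code repeated every symbol, rectangular chips), an assumption for illustration rather than the general class treated in [R11, R17, R95].

```matlab
% toy DSSS: short code repeated each symbol; its lag product shows many spectral lines
Nc = 31; spc = 4;                             % chips per symbol, samples per chip
Nsym = 500;
code = 2*randi([0 1], 1, Nc) - 1;             % one fixed spreading sequence
bits = 2*randi([0 1], 1, Nsym) - 1;
x = kron(kron(bits, code), ones(1, spc));     % spread bits with rectangular chip pulses
x = x + 0.1*randn(size(x));

tau = spc/2;                                  % half-chip lag avoids the constant-envelope issue at lag zero
y = x(1+tau:end) .* x(1:end-tau);
Y = abs(fft(y - mean(y)));
% plot(Y(1:20000)) shows many lines at multiples of the code-repetition (symbol) rate
% 1/(Nc*spc) = 1/124, in contrast to an unspread BPSK signal of the same chip rate,
% which shows a single dominant symbol-rate line.
```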

Continue reading “Cyclostationarity of Direct-Sequence Spread-Spectrum Signals”

Comments on “Blind Cyclostationary Spectrum Sensing in Cognitive Radios” by W. M. Jang

I recently came across the 2014 paper in the title of this post. I mentioned it briefly in the post on the periodogram. But I’m going to talk about it a bit more here because this is the kind of paper that makes learning about cyclostationarity harder than it needs to be, which eventually leads to the need for something like the CSP Blog.

The idea behind the paper is that it would be nice to avoid the need for prior knowledge of cycle frequencies when using cycle detectors or the like. If you could just compute the entire spectral correlation function, then collapse it by integrating (summing) over frequency f, then you’d have a one-dimensional function of cycle frequency \alpha and you could then process that function inexpensively to perform detection and classification tasks.
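To make that collapsing idea concrete, here is a toy, runnable version of it (my own illustration with assumed parameters, not the paper’s algorithm): estimate the spectral correlation magnitude on a grid of candidate cycle frequencies with a frequency-smoothed cyclic periodogram, then sum over spectral frequency.

```matlab
% collapse a crude SCF estimate over f to get a one-dimensional cycle-frequency profile
N = 16384; sps = 8;
bits = 2*randi([0 1], 1, N/sps) - 1;
x = kron(bits, ones(1, sps)) + 0.1*randn(1, N);      % rectangular-pulse BPSK plus noise
X = fft(x);
g = ones(1, 129)/129;                                % frequency-smoothing window
kk = 0:16:4096;                                      % candidate cycle-frequency bins
profile = zeros(size(kk));
for m = 1:length(kk)
    cp  = X .* conj(circshift(X, [0 kk(m)])) / N;    % cyclic periodogram ~ X(f) X*(f - alpha)
    scf = conv(cp, g, 'same');                       % frequency smoothing
    profile(m) = sum(abs(scf));                      % collapse over spectral frequency f
end
alphas = kk / N;
% 'profile' shows distinct peaks at alpha = 0 and at the symbol rate alpha = 1/8
% (with a weaker one at 2/8), standing above a nonzero floor at the other alphas.
```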

Continue reading “Comments on “Blind Cyclostationary Spectrum Sensing in Cognitive Radios” by W. M. Jang”

The Periodogram

I’ve been reviewing a lot of technical papers lately and I’m noticing that it is becoming common to assert that the limiting form of the periodogram is the power spectral density or that the limiting form of the cyclic periodogram is the spectral correlation function. This isn’t true. These functions do not become, in general, less random (erratic) as the amount of data that is processed increases without limit. On the contrary, they always have large variance. Some form of averaging (temporal or spectral) is needed to permit the periodogram to converge to the power spectrum or the cyclic periodogram to converge to the spectral correlation function (SCF).

In particular, I’ve been seeing things like this:

\displaystyle S_x^\alpha(f) = \lim_{T\rightarrow\infty} \frac{1}{T} X_T(f+\alpha/2) X_T^*(f-\alpha/2), \hfill (1)

where X_T(f+\alpha/2) is the Fourier transform of x(t) on t \in [-T/2, T/2]. In other words, the usual cyclic periodogram we talk about here on the CSP Blog. See, for example, The Literature [R71], Equation (3).
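A quick numerical check of the point (my own sketch, not taken from the post): for white Gaussian noise, the raw periodogram stays erratic no matter how much data is used, while a frequency-smoothed version with a fixed smoothing bandwidth settles down as the data length grows.

```matlab
% the raw periodogram does not converge; a fixed-bandwidth smoothed version does
rng(1);
for N = [1024 16384 262144]
    x = randn(1, N);
    P = abs(fft(x)).^2 / N;                    % periodogram (true PSD is flat, equal to 1)
    W = round(0.01*N);                         % fixed spectral resolution of 0.01 cycles/sample
    Psm = conv(P, ones(1, W)/W, 'same');       % frequency-smoothed estimate
    fprintf('N = %6d: std(raw) = %.2f, std(smoothed) = %.2f\n', ...
            N, std(P(W:end-W)), std(Psm(W:end-W)));
end
% the raw standard deviation stays near 1 (the size of the PSD itself);
% the smoothed standard deviation shrinks roughly like 1/sqrt(W) as N grows
```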

Continue reading “The Periodogram”

Signal Processing Operations and CSP

It is often useful to know how a signal processing operation affects the probabilistic parameters of a random signal. For example, if I know the power spectral density (PSD) of some signal x(t), and I filter it using a linear time-invariant transformation with impulse response function h(t), producing the output y(t), then what is the PSD of y(t)? This input-output relationship is well known and quite useful. The relationship is

\displaystyle S_y^0(f) = \left| H(f) \right|^2 S_x^0(f). \hfill (1)

In (1), the function H(f) is the transfer function of the filter, which is the Fourier transform of the impulse-response function h(t).

Because the mathematical models of real-world communication signals can be constructed by subjecting idealized textbook signals to various signal-processing operations, such as filtering, it is of interest to us here at the CSP Blog to know how the spectral correlation function of the output of a signal processor is related to the spectral correlation function for the input. Similarly, we’d like to know such input-output relationships for the cyclic cumulants and the cyclic polyspectra.

Another benefit of knowing these CSP input-output relationships is that they tend to build insight into the meaning of the probabilistic parameters. For example, in the PSD input-output relationship (1), we already know that the transfer function at f = f_0 scales the input frequency component at f_0 by the complex number H(f_0). So it makes sense that the PSD at f_0 is scaled by the squared magnitude of H(f_0). If the filter transfer function is zero at f_0, then the density of averaged power at f_0 should vanish too.

So, let’s look at this kind of relationship for CSP parameters. All of these results can be found, usually with more mathematical detail, in My Papers [6, 13].
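As a preview of the kind of relationship involved, the well-known non-conjugate spectral correlation analog of (1) for linear time-invariant filtering is

\displaystyle S_y^\alpha(f) = H(f + \alpha/2) H^*(f - \alpha/2) S_x^\alpha(f), \hfill (2)

which reduces to (1) when \alpha = 0. The conjugate and higher-order versions require more care.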

Continue reading “Signal Processing Operations and CSP”

The Cycle Detectors

Let’s take a look at a class of signal-presence detectors that exploit cyclostationarity and in doing so illustrate the good things that can happen with CSP whenever cochannel interference is present, or noise models deviate from simple additive white Gaussian noise (AWGN). I’m referring to the cycle detectors, the first CSP algorithms I ever studied.
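To preview the flavor of the comparison, here is a toy statistic of the cycle-detector kind set against a radiometer (energy detector) in cochannel interference. This is my own sketch with assumed parameters, not one of the detectors analyzed in the post.

```matlab
% cycle-feature statistic vs. simple energy measurement with a cochannel interferer present
N = 32768; sps = 8; n = 0:N-1;
s = kron(2*randi([0 1], 1, N/sps) - 1, ones(1, sps));        % signal of interest: rect-pulse BPSK
w = randn(1, N);                                             % Gaussian noise
intf = kron(2*randi([0 1], 1, ceil(N/13)) - 1, ones(1, 13)); % cochannel interferer, different symbol rate
intf = 2 * intf(1:N) .* exp(1i*2*pi*0.05*n);
x_on = s + w + intf;   x_off = w + intf;                     % signal present / signal absent

alpha0 = 1/sps; tau = sps/2;                                 % known cycle frequency and a convenient lag
stat = @(z) abs(mean(z(1+tau:end) .* conj(z(1:end-tau)) .* exp(-1i*2*pi*alpha0*(0:numel(z)-tau-1))));
[stat(x_on) stat(x_off)]                                     % cycle-feature statistic: large vs. small
[mean(abs(x_on).^2) mean(abs(x_off).^2)]                     % energies are close, so a radiometer needs an
                                                             % accurate noise-plus-interference power model
```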

Continue reading “The Cycle Detectors”

Radio-Frequency Scene Analysis

So why do I obsess over cyclostationary signals and cyclostationary signal processing? What’s the big deal, in the end? In this post I discuss my view of the ultimate use of cyclostationary signal processing (CSP): Radio-Frequency Scene Analysis (RFSA). Eventually, I hope to create a kind of Star Trek Tricorder for RFSA.

Continue reading “Radio-Frequency Scene Analysis”

Square-Root Raised-Cosine PSK/QAM

Let’s look at a somewhat more realistic textbook signal: The PSK/QAM signal with independent and identically distributed (IID) symbols and a square-root raised-cosine (SRRC) pulse function. The SRRC pulse is used in many practical systems and in many theoretical and simulation studies. In this post, we’ll look at how the free parameter of the pulse function, called the roll-off parameter or excess bandwidth parameter, affects the power spectrum and the spectral correlation function.
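Here is a small experiment along those lines (my own sketch with assumed parameters; rcosdesign requires the Signal Processing Toolbox): the non-conjugate symbol-rate feature, measured at lag zero, strengthens as the roll-off increases, and it vanishes as the roll-off approaches zero.

```matlab
% symbol-rate cyclic-feature strength versus SRRC roll-off for IID QPSK symbols
sps = 8; Nsym = 8192; N = Nsym*sps; n = 0:N-1;
for rolloff = [0.1 0.35 1.0]
    p = rcosdesign(rolloff, 12, sps, 'sqrt');               % SRRC pulse, 12-symbol span (toolbox function)
    a = (2*randi([0 1], 1, Nsym) - 1 + 1i*(2*randi([0 1], 1, Nsym) - 1))/sqrt(2);   % IID QPSK symbols
    up = zeros(1, N);  up(1:sps:end) = a;
    x = conv(up, p, 'same');
    feat = abs(mean(abs(x).^2 .* exp(-1i*2*pi*n/sps)));     % |R_x^{1/T0}(0)| estimate
    fprintf('roll-off %.2f: symbol-rate feature %.4f\n', rolloff, feat);
end
```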

Continue reading “Square-Root Raised-Cosine PSK/QAM”

CSP Estimators: The Strip Spectral Correlation Analyzer

In this post I present a very useful blind cycle-frequency estimator known in the literature as the strip spectral correlation analyzer (SSCA) (The Literature [R3-R5]). We’ve covered the basics of the frequency-smoothing method (FSM) and the time-smoothing method (TSM) of estimating the spectral correlation function (SCF) in previous posts. The TSM and FSM are efficient estimators of the SCF when it is desired to estimate it for one or a few cycle frequencies (CFs). The SSCA, on the other hand, is efficient when we want to estimate the SCF for all CFs.

See also an alternate method of exhaustive SCF estimation: The FFT Accumulation Method.

Continue reading “CSP Estimators: The Strip Spectral Correlation Analyzer”

A Gallery of Spectral Correlation

In this post I provide plots of the spectral correlation for a variety of simulated textbook signals and several collected communication signals. The plots show the variety of cycle-frequency patterns that arise from the disparate approaches to digital communication signaling. The distinguishability of these patterns, combined with the inability to distinguish based on the power spectrum, leads to a powerful set of classification (modulation recognition) features (My Papers [16, 25, 26, 28]).

In all cases, the cycle frequencies are blindly estimated by the strip spectral correlation analyzer (The Literature [R3, R4]), and the estimates are then used by the FSM to compute the spectral correlation function. MATLAB is used to plot the magnitude of the spectral correlation and conjugate spectral correlation, as specified by the determined non-conjugate and conjugate cycle frequencies.

There are three categories of signal types in this gallery: textbook signals, collected signals, and feature-rich signals. The latter comprises some collected signals (e.g., LTE) and some simulated radar signals. For the first two signal categories, the three-dimensional surface plots I’ve been using will suffice for illustrating the cycle-frequency patterns and the behavior of the spectral correlation function over frequency. But for the last category, the number of cycle frequencies is so large that the three-dimensional surface is difficult to interpret; it is a visual mess. For these signals, I’ll plot the maximum spectral correlation magnitude over spectral frequency f versus the detected cycle frequency \alpha (as in this post).

A complementary gallery of cyclic autocorrelation functions can be found here.

Continue reading “A Gallery of Spectral Correlation”