ChatGPT and CSP

Am I out of a job?

Update January 31, 2023: I’ve added numbers in square brackets next to the worst of the wrong things. I’ll document the errors at the bottom of the post.

Of course I have to see what ChatGPT has to say about CSP. Including definitions, which I don’t expect it to get too wrong, and code for estimators, which I expect it to get very wrong.

Let’s take a look.

Continue reading “ChatGPT and CSP”

SPTK: Echo Detection and the Prisoner’s Dilemma

Let’s apply some of our Signal Processing ToolKit tools to a problem in forensic signal processing!

Previous SPTK Post: The Sampling Theorem    Next SPTK Post: TBD

No, not that prisoner’s dilemma. The dilemma of a prisoner who claims, steadfastly, innocence. Even in the face of strong evidence and a fair jury trial.

In this Signal Processing ToolKit cul-de-sac of a post, we’ll look into a signal-processing adventure involving a digital sting recording and a claim of evidence tampering. We’ll be able to use some of our SPTK tools to investigate a real-world data record that might, just might, have been tampered with. (But most probably not!)

Continue reading “SPTK: Echo Detection and the Prisoner’s Dilemma”

Correcting the Record: Comments On “Wireless Signal Representation Techniques for Automatic Modulation Classification,” by X. Liu et al

It’s too close to home, and it’s too near the bone …

Park the car at the side of the road
You should know
Time’s tide will smother you…
And I will too

“That Joke Isn’t Funny Anymore” by The Smiths

I applaud the intent behind the paper in this post’s title, which is The Literature [R183], apparently accepted in 2022 for publication in IEEE Access, a peer-reviewed journal. That intent is to catalog all the ways researchers have found to preprocess radio-frequency data (complex sampled data) prior to applying some sort of modulation classification (recognition) algorithm or system.

The problem is that this attempt at gathering up all of the ‘representations’ gets a lot of the math wrong, and so has a high potential to confuse rather than illuminate.

There’s only one thing to do: correct the record.

Continue reading “Correcting the Record: Comments On “Wireless Signal Representation Techniques for Automatic Modulation Classification,” by X. Liu et al”

Neural Networks for Modulation Recognition: IQ-Input Networks Do Not Generalize, but Cyclic-Cumulant-Input Networks Generalize Very Well

Neural networks with CSP-feature inputs DO generalize in the modulation-recognition problem setting.

In some recently published papers (My Papers [50,51]), my ODU colleagues and I showed that convolutional neural networks and capsule networks do not generalize well when their inputs are complex-valued data samples, commonly referred to as simply IQ samples, or as raw IQ samples by machine learners. (It is unclear why the adjective ‘raw’ is so often used, since it adds nothing to the meaning. If I just say Hey, pass me those IQ samples, would ya?, do you think maybe I mean the processed ones? How about raw-I-mean–seriously-man–I-did-not-touch-those-numbers-OK? IQ samples? All-natural vegan unprocessed no-GMO organic IQ samples? Uncooked IQ samples?) Moreover, the capsule networks typically outperform the convolutional networks.

In a new paper (MILCOM 2022: My Papers [52]; arxiv.org version), my colleagues and I continue this line of research by including cyclic cumulants as the inputs to convolutional and capsule networks. We find that capsule networks outperform convolutional networks and that convolutional networks trained on cyclic cumulants outperform convolutional networks trained on IQ samples. We also find that both convolutional and capsule networks trained on cyclic cumulants generalize perfectly well between datasets that have different (disjoint) probability density functions governing their carrier frequency offset parameters.

That is, convolutional networks do better recognition with cyclic cumulants and generalize very well with cyclic cumulants.

So why don’t neural networks ever ‘learn’ cyclic cumulants with IQ data at the input?

The majority of the software and analysis work was performed by the first author, John Snoap, with an assist on capsule networks by James Latshaw. I created the datasets we used (available here on the CSP Blog [see below]) and helped with the blind parameter estimation. Professor Popescu guided us all and contributed substantially to the writing.

Continue reading “Neural Networks for Modulation Recognition: IQ-Input Networks Do Not Generalize, but Cyclic-Cumulant-Input Networks Generalize Very Well”

Epistemic Bubbles: Comments on “Modulation Recognition Using Signal Enhancement and Multi-Stage Attention Mechanism” by Lin, Zeng, and Gong.

Another brick in the wall, another drop in the bucket, another windmill on the horizon …

Let’s talk more about The Cult. No, I don’t mean She Sells Sanctuary, for which I do have considerable nostalgic fondness. I mean the Cult(ure) of Machine Learning in RF communications and signal processing. Or perhaps it is more of an epistemic bubble, where every paper contains The Things That Must Be Said and The Unmentionables, and where a strictly followed style of research, sadly, produces mostly error and promotes mostly hype. So we have shibboleths, taboos, and norms to deal with inside the bubble.

Time to get on my high horse. She’s a good horse named Ravager and she needs some exercise. So I’m going to strap on my claymore, mount Ravager, and go for a ride. Or am I merely tilting at windmills?

Let’s take a close look at another paper on machine learning for modulation recognition. It uses, uncritically, the DeepSig RML 2016 datasets. And the world, and the world, the world drags me down…

Continue reading “Epistemic Bubbles: Comments on “Modulation Recognition Using Signal Enhancement and Multi-Stage Attention Mechanism” by Lin, Zeng, and Gong.”

‘Comment of the Month’ on the CSP Blog

Introducing swag for the best CSP-Blog commenters.

Update January 2023: You can find the list of winners on this page.

The comments that CSP Blog readers have made over the past six years are arguably the most helpful part of the Blog for do-it-yourself CSP practitioners. In those comments, my many errors have been revealed, which has permitted me to correct the posts. Many unclear aspects of a post have been clarified after I pondered a reader’s comment. At least one comment has been elevated to a post of its own.

The readership of the CSP Blog has been steadily growing since its inception in 2015, but the ratio of page views to comments remains huge–the vast majority of readers do not comment. This is understandable and perfectly acceptable. I rarely comment on any of the science and engineering blogs that I frequent. Nevertheless, I would like to encourage more commenting and also reward it.

Continue reading “‘Comment of the Month’ on the CSP Blog”

What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]

Starts as a personal gripe, but ends with weird stuff from the literature.

During my poking around on arxiv.org the other day (Grrrrr…), I came across some postings by O’Shea et al I’d not seen before, including The Literature [R176]: “Wideband Signal Localization and Spectral Segmentation.”

Huh, I thought, they are probably trying to train a neural network to do automatic spectral segmentation that is superior to my published algorithm (My Papers [32]). Yeah, no. I mean yes to a machine, no to nods to me. Let’s take a look.

Continue reading “What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]”

Blog Notes and Preview

May 2022 saw 6026 page views at the CSP Blog, a new monthly record!

Thanks so much to all my readers, new and old, signal processors and machine learners, commenters and lurkers.

My next non-ranty post is on frequency-shift (FRESH) filtering. I will go over cyclic Wiener filtering (The Literature [R6]), which is optimal FRESH filtering, and then describe some interesting puzzles and problems with CW filtering, which may form the seeds of some solid signal-processing research projects of the academic sort.

Elegy for a Dying Field: Comments on “Detection of Direct Sequence Spread Spectrum Signals Based on Deep Learning,” by F. Wei et al

Black-box thinking is degrading our ability to connect effects to causes.

I’m learning, slowly because I’m stubborn and (I know it is hard to believe) optimistic, that there is no bottom. Signal processing and communications theory and practice are being steadily degraded in the world’s best (and worst of course) peer-reviewed journals.

I saw the accepted paper in the post title (The Literature [R177]) and thought this could be better than most of the machine-learning modulation-recognition papers I’ve reviewed. It takes a little more effort to properly understand and generate direct-sequence spread-spectrum (DSSS) signals, and the authors would likely focus on the practical case where the inband SNR is low. Plus there are lots of connections to CSP. But no. Let’s take a look.

Continue reading “Elegy for a Dying Field: Comments on “Detection of Direct Sequence Spread Spectrum Signals Based on Deep Learning,” by F. Wei et al”

Some Concrete Results on Generalization in Modulation Recognition using Machine Learning

Neural networks with I/Q data as input do not generalize in the modulation-recognition problem setting.

Update May 20, 2022: Here is the arxiv.org link.

Back in 2018 I posted a dataset consisting of 112,000 I/Q data files, each 32,768 samples in length, as part of a challenge to machine learners who had been making strong claims of superiority over signal processing in the area of automatic modulation recognition. One part of the challenge was modulation recognition involving eight digital modulation types, and the other was estimating the carrier frequency offset. That dataset is described here, and I refer to it as CSPB.ML.2018.

Then in 2022 I posted a companion dataset to CSPB.ML.2018 called CSPB.ML.2022. This new dataset uses the same eight modulation types and similar ranges of SNR, pulse type, and symbol rate, but the random variable that governs the carrier frequency offset differs from the one used in CSPB.ML.2018. The purpose of the CSPB.ML.2022 dataset is to facilitate studies of the dataset-shift, or generalization, problem in machine learning.
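To make the notion of differing carrier-frequency-offset distributions concrete, here is a minimal sketch. The specific offset ranges below are hypothetical placeholders, not the actual CSPB parameters (those are documented in the dataset posts); the point is only that when the two densities share no support, a network trained on one dataset never sees test-like offsets during training.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical, illustrative carrier-frequency-offset (CFO) densities;
# the true CSPB.ML.2018/CSPB.ML.2022 ranges are given in the dataset posts.
cfo_train = rng.uniform(-0.001, 0.001, size=10_000)  # normalized offsets, dataset A
cfo_test = rng.uniform(0.01, 0.1, size=10_000)       # disjoint support, dataset B

# Disjoint supports: no training example has a test-like offset,
# which is what makes these datasets useful for generalization studies
assert cfo_train.max() < cfo_test.min()
```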

Throughout the past couple of years I’ve been working with some graduate students and a professor at Old Dominion University on merging machine learning and signal processing for problems involving RF signal analysis, such as modulation recognition. We are starting to publish a sequence of papers that describe our efforts. I briefly describe the results of one such paper, My Papers [51], in this post.

Continue reading “Some Concrete Results on Generalization in Modulation Recognition using Machine Learning”

A Great American Science Writer: Lee Smolin

While reading a book on string theory for lay readers, I did a double take…

I don’t know why I haven’t read any of Lee Smolin’s physics books prior to this year, but I haven’t. Maybe blame my obsession with Sean Carroll. In any case, I’ve been reading The Trouble with Physics (The Literature [R175]), which is about string theory and string theorists. Smolin finds it troubling that the string theorist subculture in physics shows some signs of groupthink and authoritarianism. Perhaps elder worship too.

I came across this list of attributes, conceived by Smolin, of the ‘sociology’ of the string-theorist contingent:

Continue reading “A Great American Science Writer: Lee Smolin”

The Domain Expertise Trap

The softwarization of engineering continues apace…

I keep seeing people write things like “a major disadvantage of the technique for X is that it requires substantial domain expertise.” Let’s look at a recent good paper that makes many such remarks and try to understand what it could mean, and if having or getting domain expertise is actually a bad thing. Spoiler: It isn’t.

The paper under the spotlight is The Literature [R174], “Interference Suppression Using Deep Learning: Current Approaches and Open Challenges,” published for the nonce on arxiv.org. I’m not calling this post a “Comments On …” post, because once I extract the (many) quotes about domain expertise, I’m leaving the paper alone. The paper is a good paper and I expect it to be especially useful for current graduate students looking to make a contribution in the technical area where machine learning and RF signal processing overlap. I especially like Figure 1 and the various Tables.

Continue reading “The Domain Expertise Trap”

Wow, Elsevier, Just … Wow. Comments On “Cyclic Correntropy: Properties and the Application in Symbol Rate Estimation Under Alpha-Stable Distributed Noise,” by S. Luan et al.

Can we fix peer review in engineering by some form of payment to reviewers?

Let’s talk about another paper about cyclostationarity and correntropy. I’ve critically reviewed two previously, which you can find here and here. When you apply the correntropy to a cyclostationary signal, you get something called cyclic correntropy, which is not particularly useful unless you don’t understand regular cyclostationarity and some aspects of garden-variety signal processing. Then it looks great.

But this isn’t a post that primarily takes the authors of a paper to task, although it does do that. I want to tell the tale to get us thinking about what ‘peer’ could mean, these days, in ‘peer-reviewed paper.’ How do we get the best peers to review our papers?

Let’s take a look at The Literature [R173].

Continue reading “Wow, Elsevier, Just … Wow. Comments On “Cyclic Correntropy: Properties and the Application in Symbol Rate Estimation Under Alpha-Stable Distributed Noise,” by S. Luan et al.”

SPTK: Sampling and The Sampling Theorem

The basics of how to convert a continuous-time signal into a discrete-time signal without losing information in the process. Plus, how the choice of sampling rate influences CSP.

Previous SPTK Post: Random Processes    Next SPTK Post: Echo Detection

In this Signal Processing ToolKit post we take a close look at the basic sampling theorem used daily by signal-processing engineers. Application of the sampling theorem is a way to choose a sampling rate for converting an analog continuous-time signal to a digital discrete-time signal. The former is ubiquitous in the physical world–for example all the radio-frequency signals whizzing around in the air and through your body right now. The latter is ubiquitous in the computing-device world–for example all those digital-audio files on your Discman iTunes iPod DVD Smartphone Cloud Neuralink Singularity.

So how are those physical real-world analog signals converted to convenient lists of finite-precision numbers that we can apply arithmetic to? For that’s all [digital or cyclostationary] signal processing is at bottom: arithmetic. You might know the basic rule-of-thumb for choosing a sampling rate: Make sure it is at least twice as big as the largest frequency component in the analog signal undergoing the sampling. But why, exactly, and what does ‘largest frequency component’ mean?
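As a quick illustration of that rule of thumb, here is a minimal sketch (mine, not from the post): sample a 3 Hz sinusoid once above and once below twice its frequency, and watch the undersampled version alias to the wrong apparent frequency.

```python
import numpy as np

f_signal = 3.0           # Hz: the highest-frequency component of the toy signal
duration = 8.0           # seconds of data

for fs in (8.0, 4.0):    # 8 Hz > 2*f_signal (OK); 4 Hz < 2*f_signal (aliasing)
    n = int(duration * fs)
    t = np.arange(n) / fs
    x = np.cos(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    f_apparent = np.fft.rfftfreq(n, d=1/fs)[np.argmax(spectrum)]
    print(f"fs = {fs} Hz -> apparent frequency {f_apparent:.2f} Hz")

# Expected output (the 3 Hz tone aliases to |3 - 4| = 1 Hz when fs = 4 Hz):
# fs = 8.0 Hz -> apparent frequency 3.00 Hz
# fs = 4.0 Hz -> apparent frequency 1.00 Hz
```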

Continue reading “SPTK: Sampling and The Sampling Theorem”

Update on J. Antoni’s Fast Spectral Correlation Estimator

Let’s take a look at an even faster spectral correlation function estimator. How useful is it for CSP applications in communications?

Reader Gideon pointed out that Antoni had published a paper a year after the paper that I considered in my first Antoni post. This newer paper, The Literature [R172], promises a faster fast spectral correlation estimator, and it delivers on that according to the analysis in the paper. However, I think the faster fast spectral correlation estimator is just as limited as the slower fast spectral correlation estimator when considered in the context of communication-signal processing.

And, to be fair, Antoni doesn’t often consider the context of communication-signal processing. His favored application is fault detection in mechanical systems with rotating parts. But I still don’t think the way he compares his fast and faster estimators to conventional estimators is fair. The reason is that his estimators are both severely limited in the maximum cycle frequency that can be processed, relative to the maximum cycle frequency that is possible.

Let’s take a look.

Continue reading “Update on J. Antoni’s Fast Spectral Correlation Estimator”

One Last Time …

We take a quick look at a fourth DeepSig dataset called 2016.04C.multisnr.tar.bz2 in the context of the data-shift problem in machine learning.

And if we get this right,

We’re gonna teach ’em how to say

Goodbye …

You and I.

Lin-Manuel Miranda, “One Last Time,” Hamilton

I didn’t expect to have to do this, but I am going to analyze yet another DeepSig dataset. One last time. This one is called 2016.04C.multisnr.tar.bz2, and is described thusly on the DeepSig website:

Figure 1. Description of various DeepSig data sets found on the DeepSig website as of November 2021.

I’ve analyzed the 2018 dataset here, the RML2016.10b.tar.bz2 dataset here, and the RML2016.10a.tar.bz2 dataset here.

Now I’ve come across a manuscript-in-review in which both the RML2016.10a and RML2016.04c datasets are used. The idea is that these two datasets are sufficiently distinct to be good candidates for use in a data-shift study involving trained neural-network modulation-recognition systems.

The data-shift problem is described by one researcher as follows:

Data shift or data drift, concept shift, changing environments, data fractures are all similar terms that describe the same phenomenon: the different distribution of data between train and test sets

Georgios Sarantitis

But … are they really all that different?

Continue reading “One Last Time …”

Comments on “Proper Definition and Handling of Dirac Delta Functions” by C. Candan.

An interesting paper on the true nature of the impulse function we use so much in signal processing.

The impulse function, also called the Dirac delta function, is commonly used in statistical signal processing, and on the CSP Blog (examples: representations and transforms). I think we’re a bit casual about this usage, and perhaps none of us understand impulses as well as we might.
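For reference, the usage in question is almost always some variant of the sifting property, which is textbook material and not specific to Candan’s treatment:

```latex
% Sifting property: integrating against the impulse extracts the value of x at t_0
\int_{-\infty}^{\infty} x(t)\, \delta(t - t_0)\, dt = x(t_0)
```

The subtlety, roughly, is that no ordinary function behaves this way; the delta is a distribution, defined only through its action inside integrals like this one.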

Enter C. Candan and The Literature [R155].

Continue reading “Comments on “Proper Definition and Handling of Dirac Delta Functions” by C. Candan.”

The Principal Domain for the Spectral Correlation Function

What are the ranges of spectral frequency and cycle frequency that we need to consider in a discrete-time/discrete-frequency setting for CSP?

Let’s talk about that diamond-shaped region in the (f, \alpha) plane we so often see associated with CSP. I’m talking about the principal domain for the discrete-time/discrete-frequency spectral correlation function. Where does it come from? Why do we care? When does it come up?
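Without spoiling the post, here is a minimal numerical sketch (mine) of where that diamond comes from, assuming the common normalized-frequency convention in which the spectral correlation function involves the product X(f + \alpha/2) X^*(f - \alpha/2) and each frequency argument must lie in the principal band [-1/2, 1/2]:

```python
import numpy as np

# Grids of normalized spectral frequency f and cycle frequency alpha
f = np.linspace(-0.5, 0.5, 201)
alpha = np.linspace(-1.0, 1.0, 401)
F, A = np.meshgrid(f, alpha)

# The SCF involves X(f + alpha/2) X*(f - alpha/2); requiring both
# frequency arguments to lie in the principal band [-1/2, 1/2]
# yields the familiar diamond-shaped principal domain.
in_diamond = (np.abs(F + A / 2) <= 0.5) & (np.abs(F - A / 2) <= 0.5)

print(f"fraction of the (f, alpha) rectangle inside the diamond: "
      f"{in_diamond.mean():.2f}")   # about 0.50
```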

Continue reading “The Principal Domain for the Spectral Correlation Function”

SPTK: The Analytic Signal and Complex Envelope

In signal processing, and in CSP, we often have to convert real-valued data into complex-valued data and vice versa. Real-valued data is what we encounter in the physical world, but complex-valued data is easier to process because it permits a substantially lower sampling rate.

Previous SPTK Post: The Moving-Average Filter    Next SPTK Post: Random Variables

In this Signal-Processing Toolkit post, we review the signal-processing steps needed to convert a real-valued sampled-data bandpass signal to a complex-valued sampled-data lowpass signal. The former can arise from sampling a signal that has been downconverted from its radio-frequency spectral band to a much lower intermediate-frequency spectral band. So we want to convert such data to complex samples at zero frequency (‘complex baseband’) so we can decimate them and thereby match the sample rate to the signal’s baseband bandwidth. Subsequent signal-processing algorithms (including CSP of course) can then operate on the relatively low-rate complex-envelope data, which is beneficial because the same number of seconds of data can be processed using fewer samples, and computational cost is determined by the number of samples, not the number of seconds.
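Here is a minimal sketch of that processing chain (my illustration with made-up rates and a toy AM bandpass signal; the post works through the real details):

```python
import numpy as np
from scipy.signal import hilbert, decimate

fs = 1.0e6     # real-valued sampling rate in Hz (illustrative)
f_if = 200e3   # intermediate (center) frequency of the bandpass signal in Hz
n = 2**16
t = np.arange(n) / fs

# Toy bandpass signal: a 200 kHz carrier with slow amplitude modulation
x_real = (1 + 0.5 * np.cos(2 * np.pi * 1e3 * t)) * np.cos(2 * np.pi * f_if * t)

# Step 1: form the analytic signal (one-sided spectrum) via the Hilbert transform
x_analytic = hilbert(x_real)

# Step 2: shift the occupied band down to zero frequency (complex baseband)
x_baseband = x_analytic * np.exp(-2j * np.pi * f_if * t)

# Step 3: decimate, since the complex envelope is narrowband; decimating
# in stages is recommended for large overall factors
x_envelope = decimate(decimate(x_baseband, 4), 4)  # overall rate fs/16

print(x_envelope.shape, x_envelope.dtype)  # 16x fewer samples, complex dtype
```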

Continue reading “SPTK: The Analytic Signal and Complex Envelope”

SPTK: The Moving-Average Filter

A simple and useful example of a linear time-invariant system. Good for smoothing and discovering trends by averaging away noise.

Previous SPTK Post: Ideal Filters             Next SPTK Post: The Complex Envelope

We continue our basic signal-processing posts with one on the moving-average, or smoothing, filter. The moving-average filter is a linear time-invariant operation that is widely used to mitigate the effects of additive noise and other random disturbances from a presumably well-behaved signal. For example, a physical phenomenon may be producing a signal that increases monotonically over time, but our measurement of that signal is corrupted by noise, interference, or flaws in the measurement process. The moving-average filter can reveal the sought-after trend by suppressing the effects of the unwanted disturbances.
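A moving average is a one-liner in most environments; here is a minimal sketch of the noisy-trend example just described (my illustration, not code from the post):

```python
import numpy as np

def moving_average(x, window_len):
    """Length-window_len moving average via convolution (same-size output)."""
    kernel = np.ones(window_len) / window_len
    return np.convolve(x, kernel, mode='same')

# A monotonically increasing trend corrupted by additive noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
noisy = t + 0.2 * rng.standard_normal(t.size)

# The filter suppresses the disturbance and reveals the underlying trend
smoothed = moving_average(noisy, 25)
```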

Continue reading “SPTK: The Moving-Average Filter”