Watch Out!

“Hear my words and bear witness to my vow:

Night gathers, and now my watch begins. It shall not end until my death. I am the fire that burns against the cold, the light that brings the dawn, the horn that wakes the sleepers, the shield that guards the realms of men.”

Night’s Watch Oath, Game of Thrones, by G. R. R. Martin

Due to my pretension to academic worthiness, I have Google Scholar alert me to all my new citations. That is, citations to My Papers. I got an anodyne alert the other day and, as usual, gave it a quick once-over. Anything new or interesting? Any new signal-processing twist or machine-learning breakthrough, finally smashing the last vestiges of the old order? I’m referring here to The Literature [R205].

Well … no. But some modern AI-related weirdness is there, and it is a concerning variety of weirdness for researchers who attempt to learn from published technical work, and especially for those who attempt to use the references in a published technical paper to dig a little deeper toward foundational material. Let’s take a look.

Continue reading “Watch Out!”

Tide is Turning?

I see the first CSP Blog upturn since ChatGPT exploded in late 2022. But is it real?

The CSP Blog enjoyed year-over-year page-view and viewer-total increases from its beginning in 2015 to 2022. In 2023, page views and viewer totals fell from their highs in 2022. They fell further in 2024. But the trend has reversed itself here in 2025:

Great!

Continue reading “Tide is Turning?”

The CSP Blog Turns 10

Raise a glass!

I launched the site way back in September 2015. As with most things in my life, the CSP Blog was not the result of some carefully crafted plan, such as to corner the online market on signal-processing instruction, create a side-hustle, or manage my brand, whatever that might mean. It was a lark. I wanted my wife to start a blog or website as a place to share her writing with the world. “Look, dear, it really is super easy to create your own website,” I said to her, after writing a post or two. And it really is easy.

But then the CSP Blog took on a life of its own. Or, better said, it took over my life. Certainly it took a lot of my time, and still does.

Continue reading “The CSP Blog Turns 10”

The Big Time

“The place where I come from is a small town
They think so small, they use small words
But not me, I’m smarter than that
I worked it out
I’ve been stretching my mouth
To let those big words come right out”

‘Big Time’ by Peter Gabriel

The CSP Blog is now linked-to at the top of cyclostationarity.com, Professor Gardner’s online repository of all things cyclostationary! (See also The Literature [R1].)

Continue reading “The Big Time”

CSP Reduction to Sine-Wave Generation

“Five different voices behind him bellowed, “REDUCTO!” Five curses flew in five different directions and the shelves opposite them exploded as they hit; the towering structure swayed as a hundred glass spheres burst apart, pearly-white figures unfurled into the air and floated there, their voices echoing from who knew what long-dead past amid the torrent of crashing glass and splintered wood now raining down upon the floor…”

J. K. Rowling, Harry Potter and the Order of the Phoenix

We know that if we subject a cyclostationary signal to a squaring or delay-and-multiply operation we will obtain finite-strength additive sine-wave components at the output of the operation, where at least one of the sine waves has a non-zero frequency.

But I want to make a conjecture: All of CSP can be reduced to interpretations involving sine-wave generation by nonlinear operations. Let’s see if we can show this conjecture is true. After I make my attempt, I’ll also show what ChatGPT comes up with. Any guesses about how well it does?
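As a quick numerical warm-up for the premise (not the full conjecture), here is a minimal sketch of sine-wave generation by a squaring nonlinearity. It assumes a rectangular-pulse BPSK signal with normalized sampling rate; the variable names and parameter values are mine, chosen just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sps = 8                                   # samples per symbol
fc = 0.1                                  # carrier frequency, cycles/sample
bits = 2 * rng.integers(0, 2, 2048) - 1   # +/-1 BPSK symbols
a = np.repeat(bits, sps).astype(float)    # rectangular-pulse baseband signal
n = np.arange(a.size)
x = a * np.cos(2 * np.pi * fc * n)        # real BPSK signal at carrier fc

y = x**2                                  # the squaring nonlinearity
Y = np.abs(np.fft.rfft(y - y.mean()))     # spectrum of the squarer output
freqs = np.fft.rfftfreq(y.size)
peak = freqs[np.argmax(Y)]                # strongest nonzero-frequency line
# With rectangular pulses, a(n)^2 = 1, so x^2 = (1 + cos(4*pi*fc*n))/2 and
# the squarer emits a finite-strength sine wave at twice the carrier: 2*fc
```

The squarer input has no finite-strength sine-wave components at all, yet its output contains a pure tone at twice the carrier frequency, which is the essence of the second-order part of the story.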

Continue reading “CSP Reduction to Sine-Wave Generation”

CSPB.ML.2018R2.NF

A noise-free version of the 2018 CSP Blog dataset CSPB.ML.2018R2 is posted here. This allows researchers to correctly apply propagation-channel effects to the generated signals, and to easily add their own noise at whatever level they wish.

The format of the files is the same as CSPB.ML.2018R2, and the truth parameters for each file are the same as the truth parameters for the corresponding file in CSPB.ML.2018R2, except for SNR, which is infinite.

Continue reading “CSPB.ML.2018R2.NF”

End-of-Year Blog Notes 2024

Hey … I … Oh-oh … I’m still alive… (Alive by Pearl Jam)

Hey everybody! I’m still here.

I want to wish all of you a Happy New Year! I hope all your signal-processing projects succeed and your math skills grow steadily in 2025.

“May your noise be additive and white, and may all your SCFs be right.”

Continue reading “End-of-Year Blog Notes 2024”

Interference Mitigation Course at GTRI

Update December 2024: The likely date for this course at GTRI is February 4-5, 2025.

Update September 2024: This course is postponed until Spring 2025. I’ll post further updates here as they become available.


I’ll be part of a team of researchers and practicing engineers, led by the estimable Dr. Ryan Westafer, that will be teaching a class on radio-frequency interference mitigation in September. The class is hosted by the Georgia Tech Research Institute (GTRI) and will be held on the Georgia Tech campus on September 10-11, 2024.

Continue reading “Interference Mitigation Course at GTRI”

Final Snoap Doctoral-Work Journal Paper: My Papers [56] on Novel Network Layers for Modulation Recognition that Generalizes

Dr. Snoap’s final journal paper related to his recently completed doctoral work has been published in IEEE Transactions on Broadcasting (My Papers [56]).

Continue reading “Final Snoap Doctoral-Work Journal Paper: My Papers [56] on Novel Network Layers for Modulation Recognition that Generalizes”

Desultory CSP: What’s That Under the TV?

“Alive in the Superunknown
First it steals your Mind, and then it steals your … Soul”

–Soundgarden

An advantage of using and understanding the statistics of communication signals ™, the basics of signal processing, and the rich details of cyclostationary signal processing is that a practitioner can deal with, to some useful degree, unknown unknowns. The unknown unknowns I’m talking about here on the CSP Blog are, of course, signals. We know about the by-now-familiar known-type detection, multi-class modulation-recognition, and RF scene-analysis problems, in which it is often assumed that we know the signals we are looking for, but we don’t know their times of arrival, some of their parameters, or how they might overlap in time, frequency, and space. Then there are the less-familiar problems involving unknown unknowns.

Sometimes we just don’t know the signals we are looking for. We still want to do as good a job on RF scene analysis as we can, but there might be signals in the scene that do not conform to the body of knowledge we have, to date, of manmade RF signals. Or, in modern parlance, we didn’t even know we left such signals out of our neural-network training dataset; we’re a couple steps back from even worrying about generalization, because we don’t even know we can’t generalize since we are ignorant about what to generalize to.

In this post I look at the broadcast TV band, seen in downtown Monterey, California, sometime in the recent past. I expect to see ATSC DTV signals (of the older 8VSB/16VSB or the newer OFDM types), and I do. But what else is there? Spoiler: Unknown unknowns.

Let’s take a look.

Continue reading “Desultory CSP: What’s That Under the TV?”

CSPB.ML.2023G1

Another dataset aimed at the continuing problem of generalization in machine-learning-based modulation recognition. This one is a companion to CSPB.ML.2023, which features cochannel situations.

Quality datasets containing digital signals with varied parameters and lengths sufficient to permit many kinds of validation checks by signal-processing experts remain in short supply. In this post, we continue our efforts to provide such datasets by offering a companion unlabeled dataset to CSPB.ML.2023.

Continue reading “CSPB.ML.2023G1”

Introducing Dr. John A. Snoap

An expert signal processor. An expert machine learner. All in one person!

I am very pleased to announce that my signal-processing, machine-learning, and modulation-recognition collaborator and friend John Snoap has successfully defended his doctoral dissertation and is now Dr. Snoap!

I started working with John after we met in the Comments section of the CSP Blog way back in 2019. John was building his own set of CSP software tools and ran into a small bump in the road and asked for some advice. Just the kind of reader I hope for: independent-minded, gets to the bottom of things, and embraces signal processing.

As we interacted over email and Zoom it became clear that John was thinking of making a contribution in the area of modulation recognition, and was also interested in learning more about machine learning using neural networks. Since I had been recently engaged in hand-to-hand combat with machine learners who were, in my opinion of course, injecting more confusion than elucidation into the field, I figured this might be a friendly way for me to understand machine learning better, and maybe there would be a way or two to marry signal processing with supervised learning. So off we went.

Fast forward four years and we’ve published five papers, with a sixth in review, that I believe are trailblazing. John is that rare person who has mastered two very different technical areas: cyclostationary signal processing and deep learning. Because I believe that neural networks do not actually learn the things that we hope they will, but need not-so-gentle nudges toward learning the truly valuable things, a researcher with one foot firmly in the signal-processing world and the other firmly in the machine-learning world has a very bright future indeed.

The title of John’s dissertation is Deep-Learning-Based Classification of Digitally Modulated Signals, which he wrote as a student in the Department of Electrical and Computer Engineering at Old Dominion University under the direction of his advisor Professor Dimitrie Popescu.

Congratulations Dr. Snoap! And thank you for everything.

CSPB.ML.2022R2: Correcting an RNG Flaw in CSPB.ML.2022

For completeness, I also correct the CSPB.ML.2022 dataset, which is aimed at facilitating neural-network generalization studies.

The same random-number-generator (RNG) error that plagued CSPB.ML.2018 corrupts CSPB.ML.2022, so that some of the files in the dataset correspond to identical signal parameters. This makes the CSPB.ML.2022 dataset potentially problematic for training a neural network using supervised learning.

In a recent post, I remedied the error and provided an updated CSPB.ML.2018 dataset and called it CSPB.ML.2018R2. Both are still available on the CSP Blog.

In this post, I provide an update to CSPB.ML.2022, called CSPB.ML.2022R2.

Continue reading “CSPB.ML.2022R2: Correcting an RNG Flaw in CSPB.ML.2022”

CSPB.ML.2018R2: Correcting an RNG Flaw in CSPB.ML.2018

KIRK: Everything that is in error must be sterilised.
NOMAD: There are no exceptions.
KIRK: Nomad, I made an error in creating you.
NOMAD: The creation of perfection is no error.
KIRK: I did not create perfection. I created error.

I’ve had to update the original Challenge for the Machine Learners post, and the associated dataset post, a couple times due to flaws in my metadata (truth) files. Those were fairly minor, so I just updated the original posts.

But a new flaw in CSPB.ML.2018 and CSPB.ML.2022 has come to light due to the work of the estimable research engineers at Expedition Technology. The problem is not with labeling or the fundamental correctness of the modulation types, pulse functions, etc., but with the way a random-number generator was applied in my multi-threaded dataset-generation technique.

I’ll explain after the fold, and this post will provide links to an updated version of the dataset, CSPB.ML.2018R2. I’ll keep the original up for continuity and also place a link to this post there. Moreover, the descriptions of the truth files over at CSPB.ML.2018 are still valid; the truth file posted here has the same format as the truth files available on the CSPB.ML.2018 and CSPB.ML.2022 posts.
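To illustrate the general class of bug (the details of my actual dataset-generation code are not shown here, so treat this as a hypothetical reconstruction), here is a sketch of how re-seeding a generator with the same constant in each worker yields duplicate "random" signal parameters, along with one standard NumPy fix:

```python
import numpy as np

def draw_params(rng):
    # Hypothetical per-file signal parameters: carrier offset, symbol rate, power
    return rng.uniform([0.01, 0.05, 0.5], [0.05, 0.25, 2.0])

# Flawed pattern: every worker constructs its generator from the same constant
# seed, so nominally independent workers draw identical parameter sets.
flawed = [draw_params(np.random.default_rng(12345)) for _ in range(4)]

# One fix: spawn independent child streams from a single SeedSequence, so each
# worker gets its own statistically independent generator.
children = np.random.SeedSequence(12345).spawn(4)
fixed = [draw_params(np.random.default_rng(s)) for s in children]
```

In the flawed version all four workers produce the same parameter triple; in the fixed version each worker's draws are independent of the others'.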

Continue reading “CSPB.ML.2018R2: Correcting an RNG Flaw in CSPB.ML.2018”

The Next Logical Step in CSP+ML for Modulation Recognition: Snoap’s MILCOM ’23 Paper [Preview]

We are attempting to force a neural network to learn the features that we have already shown deliver simultaneous good performance and good generalization.

ODU doctoral student John Snoap and I have a new paper on the convergence of cyclostationary signal processing, machine learning using trained neural networks, and RF modulation classification: My Papers [55] (arxiv.org link here).

Previously in My Papers [50-52, 54] we have shown that the (multitudinous!) neural networks in the literature that use I/Q data as input and perform modulation recognition (output a modulation-class label) are highly brittle. That is, they minimize the classification error, they converge, but they don’t generalize. A trained neural network generalizes well if it maintains high classification performance even when some of the probability density functions governing the data’s random variables differ between the training inputs (in the lab) and the application inputs (in the field). The problem is also called the dataset-shift problem or the domain-adaptation problem. Generalization is my preferred term because it is simpler and has a strong connection to the human equivalent: we can quite easily generalize our observations and conclusions from one dataset to another without massive retraining of our neural noggins. We can find the cat in the image even if it is upside-down and colored like a giraffe.

Continue reading “The Next Logical Step in CSP+ML for Modulation Recognition: Snoap’s MILCOM ’23 Paper [Preview]”

A Gallery of Cyclic Cumulants

The third in a series of posts on visualizing the multidimensional functions characterizing the fundamental statistics of communication signals.

Let’s continue our progression of galleries showing plots of the statistics of communication signals. So far we have provided a gallery of spectral correlation surfaces and a gallery of cyclic autocorrelation surfaces. Here we introduce a gallery of cyclic-cumulant matrices.

When we look at the spectral correlation or cyclic autocorrelation surfaces for a variety of communication signal types, we learn that the cycle-frequency patterns exhibited by modulated signals are many and varied, and we get a feeling for how those variations look (see also the Desultory CSP posts). Nevertheless, there are large equivalence classes in terms of spectral correlation. That simply means that a large number of distinct modulation types map to the exact same second-order statistics, and therefore to the exact same spectral correlation and cyclic autocorrelation surfaces. The gallery of cyclic cumulants will reveal, in an easy-to-view way, that many of these equivalence classes are removed once we consider, jointly, both second- and higher-order statistics.
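As a pocket-sized illustration of the equivalence-class idea (using zero-lag statistics of the symbol streams rather than full cyclic cumulants, and with unit-power constellation normalizations that are my assumption), QPSK and 16QAM are indistinguishable at second order but split apart at fourth order:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Unit-power QPSK and 16QAM symbol streams
qpsk = np.exp(1j * np.pi / 4) * 1j**rng.integers(0, 4, n)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = (rng.choice(levels, n) + 1j * rng.choice(levels, n)) / np.sqrt(10)

def stats(x):
    # Second-order moments and the zero-lag fourth-order cumulant c40
    m20 = np.mean(x**2)            # non-conjugate second-order moment
    m21 = np.mean(np.abs(x)**2)    # conjugate second-order moment (power)
    c40 = np.mean(x**4) - 3 * m20**2
    return m20, m21, c40

# Both streams: m20 ~ 0 and m21 ~ 1, the same second-order equivalence class.
# Fourth order separates them: c40 ~ -1 for QPSK versus c40 ~ -0.68 for 16QAM.
```

The same effect, extended across all delays and cycle frequencies, is what the cyclic-cumulant gallery displays: modulation types that share a spectral correlation surface can still have distinct higher-order signatures.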

Continue reading “A Gallery of Cyclic Cumulants”

CSP Blog Interview: Why We Still Need Human Signal Processors with Engineers E. Akamai and D. Peritum

What do practicing engineers think of using large-language models like ChatGPT in their research, development, and writing tasks? And is there a future for humans in signal processing?

Let’s switch things up a bit here at the CSP Blog by presenting an interview on a technical topic. I interview two characters you might recall from the post on the Domain Expertise Trap: Engineers Dan Peritum and Eunice Akamai.

With the splashy entrance of large-language models like ChatGPT into everyday life and into virtually all aspects of science, engineering, and education, we all want to know how our jobs and careers could be affected by widespread use of artificial intelligence constructs like ChatGPT, Dall-E, and Midjourney. In this interview with a couple of my favorite engineers, I get a feel for how non-AI researchers and developers think about the coming changes, and of course how they view the hype, distortions, and fabrications surrounding predictions of those changes. You can find photos of the interviewees and brief biographies at the end of the post.

The interview transcript is ~~carefully contrived~~ lightly edited for ~~believability~~ clarity.

Continue reading “CSP Blog Interview: Why We Still Need Human Signal Processors with Engineers E. Akamai and D. Peritum”

Simply Avert Your Eyes

Everything is just fine.

The IEEE sent me their annual report for 2022. I was wondering how they were responding to the poor quality of many of their published papers, including faked papers and various paper retractions. Let’s take a quick look.

Continue reading “Simply Avert Your Eyes”

Latest Paper on CSP and Deep-Learning for Modulation Recognition: An Extended Version of My Papers [52]

Another step forward in the merging of CSP and ML for modulation recognition, and another step away from the misstep of always relying on convolutional neural networks from image processing for RF-domain problem-solving.

My Old Dominion colleagues and I have published an extended version of the 2022 MILCOM paper My Papers [52] in the journal MDPI Sensors. The first author is John Snoap, one of those rare people who are expert in both signal processing and machine learning. Bright future there! Dimitrie Popescu, James Latshaw, and I provided analysis, programming, writing, and research-direction support.

Continue reading “Latest Paper on CSP and Deep-Learning for Modulation Recognition: An Extended Version of My Papers [52]”