CSPB.ML.2018R2.NF

A noise-free version of the 2018 CSP Blog dataset CSPB.ML.2018R2 is posted here. This allows researchers to correctly apply propagation-channel effects to the generated signals, and to easily add their own noise at whatever level they wish.

The format of the files is the same as CSPB.ML.2018R2, and the truth parameters for each file are the same as the truth parameters for the corresponding file in CSPB.ML.2018R2, except for SNR, which is infinite.
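Since the files are noise-free, hitting any desired SNR is just a matter of measuring the signal power and scaling complex white Gaussian noise to match. Below is a minimal Python sketch of that calculation; the file name, the interleaved-I/Q read, and the 10-dB target are illustrative assumptions, not part of the dataset specification.

```python
# Minimal sketch: add circular complex white Gaussian noise to a noise-free
# signal at a desired SNR. The file name and the interleaved-I/Q layout are
# illustrative assumptions, not part of the dataset specification.
import numpy as np

def add_noise(x: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Return x plus complex AWGN scaled to achieve the requested SNR."""
    rng = np.random.default_rng() if rng is None else rng
    sig_power = np.mean(np.abs(x) ** 2)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    noise = np.sqrt(noise_power / 2.0) * (
        rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape)
    )
    return x + noise

# Hypothetical usage: load one file and create a 10-dB-SNR version of it.
iq = np.fromfile("signal_00001.tim", dtype=np.float32)  # illustrative name
x = iq[0::2] + 1j * iq[1::2]  # assumes interleaved I/Q; check the file docs
y = add_noise(x, snr_db=10.0)
```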

Continue reading “CSPB.ML.2018R2.NF”

Final Snoap Doctoral-Work Journal Paper: My Papers [56] on Novel Network Layers for Modulation Recognition that Generalizes

Dr. Snoap’s final journal paper related to his recently completed doctoral work has been published in IEEE Transactions on Broadcasting (My Papers [56]).

Continue reading “Final Snoap Doctoral-Work Journal Paper: My Papers [56] on Novel Network Layers for Modulation Recognition that Generalizes”

CSPB.ML.2023G1

Another dataset aimed at the continuing problem of generalization in machine-learning-based modulation recognition. This one is a companion to CSPB.ML.2023, which features cochannel situations.

Quality datasets containing digital signals with varied parameters and lengths sufficient to permit many kinds of validation checks by signal-processing experts remain in short supply. In this post, we continue our efforts to provide such datasets by offering a companion unlabeled dataset to CSPB.ML.2023.

Continue reading “CSPB.ML.2023G1”

Stupid Laws Getting In My Way

A kvetch.

As the generative-AI crowd continues to feast on copyrighted material of all kinds, they are getting pushback in the form of lawsuits from artists, writers, and journalists. I discussed this recently with Dan and Eunice on the CSP Blog.

OpenAI in particular seems to believe they have some kind of divine right to pursue whatever business they want, whether it is legal or not. Because reasons … including national security … and “meeting the needs of today’s citizens.” But probably just greed and hubris.

In a statement to the UK’s House of Lords, OpenAI said this, and I assume they did so with a straight face, which would have been admirably difficult:

Continue reading “Stupid Laws Getting In My Way”

CSPB.ML.2022R2: Correcting an RNG Flaw in CSPB.ML.2022

For completeness, I also correct the CSPB.ML.2022 dataset, which is aimed at facilitating neural-network generalization studies.

The same random-number-generator (RNG) error that plagued CSPB.ML.2018 corrupts CSPB.ML.2022, so that some of the files in the dataset correspond to identical signal parameters. This makes the CSPB.ML.2022 dataset potentially problematic for training a neural network using supervised learning.

In a recent post, I remedied the error and provided an updated CSPB.ML.2018 dataset, which I called CSPB.ML.2018R2. Both are still available on the CSP Blog.

In this post, I provide an update to CSPB.ML.2022, called CSPB.ML.2022R2.

Continue reading “CSPB.ML.2022R2: Correcting an RNG Flaw in CSPB.ML.2022”

CSPB.ML.2018R2: Correcting an RNG Flaw in CSPB.ML.2018

KIRK: Everything that is in error must be sterilised.
NOMAD: There are no exceptions.
KIRK: Nomad, I made an error in creating you.
NOMAD: The creation of perfection is no error.
KIRK: I did not create perfection. I created error.

I’ve had to update the original Challenge for the Machine Learners post, and the associated dataset post, a couple times due to flaws in my metadata (truth) files. Those were fairly minor, so I just updated the original posts.

But a new flaw in CSPB.ML.2018 and CSPB.ML.2022 has come to light due to the work of the estimable research engineers at Expedition Technology. The problem is not with labeling or the fundamental correctness of the modulation types, pulse functions, etc., but with the way a random-number generator was applied in my multi-threaded dataset-generation technique.

I’ll explain after the fold, and this post will provide links to an updated version of the dataset, CSPB.ML.2018R2. I’ll keep the original up for continuity and also place a link to this post there. Moreover, the descriptions of the truth files over at CSPB.ML.2018 are still valid–the truth file posted here has the same format as the truth files available on the CSPB.ML.2018 and CSPB.ML.2022 posts.
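To make the class of bug concrete: if each worker thread builds its random-number generator the same way (same seed, or the same coarse clock value), different files can end up with identical parameter draws. Below is a minimal sketch of one standard remedy, spawning independent per-worker streams from a single root seed; it illustrates the general fix, not my actual generation code.

```python
# Minimal sketch: avoid duplicated random draws across parallel workers by
# spawning an independent child of one root SeedSequence for each file.
# (Illustrates the general fix only; not the CSP Blog's generation code.)
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def draw_signal_params(seed_seq):
    rng = np.random.default_rng(seed_seq)  # independent stream per file
    return {
        "carrier_offset": rng.uniform(-0.1, 0.1),  # illustrative ranges only
        "symbol_rate": rng.uniform(0.05, 0.25),
    }

root = np.random.SeedSequence(2018)
child_seeds = root.spawn(100)                        # one child per file
with ThreadPoolExecutor(max_workers=8) as pool:
    params = list(pool.map(draw_signal_params, child_seeds))

# Every file now gets its own parameter draws; no two are forced identical.
assert len({tuple(p.values()) for p in params}) == len(params)
```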

Continue reading “CSPB.ML.2018R2: Correcting an RNG Flaw in CSPB.ML.2018”

The Next Logical Step in CSP+ML for Modulation Recognition: Snoap’s MILCOM ’23 Paper [Preview]

We are attempting to force a neural network to learn the features that we have already shown deliver both good performance and good generalization.

ODU doctoral student John Snoap and I have a new paper on the convergence of cyclostationary signal processing, machine learning using trained neural networks, and RF modulation classification: My Papers [55] (arxiv.org link here).

Previously, in My Papers [50-52, 54], we showed that the (multitudinous!) neural networks in the literature that use I/Q data as input and perform modulation recognition (output a modulation-class label) are highly brittle. That is, they minimize the classification error and they converge, but they don’t generalize. A trained neural network generalizes well if it maintains high classification performance even when some of the probability density functions governing the data’s random variables differ between the training inputs (in the lab) and the application inputs (in the field). The problem is also called the dataset-shift problem or the domain-adaptation problem. Generalization is my preferred term because it is simpler and has a strong connection to the human equivalent: we can quite easily generalize our observations and conclusions from one dataset to another without massive retraining of our neural noggins. We can find the cat in the image even if it is upside-down and colored like a giraffe.
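As a toy picture of that kind of shift, consider a carrier-frequency-offset (CFO) density that differs between lab and field data. The ranges below are invented for illustration; they are not the values used in our papers.

```python
# Toy sketch of dataset shift: a parameter's probability density differs
# between training ("lab") and application ("field") data. The ranges are
# invented for illustration; they are not the values from My Papers [50-52, 54].
import numpy as np

rng = np.random.default_rng(54)
train_cfo = rng.uniform(-0.001, 0.001, size=100_000)  # lab CFO density
field_cfo = rng.uniform(0.01, 0.05, size=100_000)     # disjoint field density

# A network that implicitly memorizes lab CFOs never sees a field CFO during
# training: the two supports do not even overlap.
assert train_cfo.max() < field_cfo.min()
```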

Continue reading “The Next Logical Step in CSP+ML for Modulation Recognition: Snoap’s MILCOM ’23 Paper [Preview]”

CSP Blog Interview: Why We Still Need Human Signal Processors with Engineers E. Akamai and D. Peritum

What do practicing engineers think of using large-language models like ChatGPT in their research, development, and writing tasks? And is there a future for humans in signal processing?

Let’s switch things up a bit here at the CSP Blog by presenting an interview on a technical topic. I interview two characters you might recall from the post on the Domain Expertise Trap: Engineers Dan Peritum and Eunice Akamai.

With the splashy entrance of large-language models like ChatGPT into everyday life and into virtually all aspects of science, engineering, and education, we all want to know how our jobs and careers could be affected by widespread use of artificial intelligence constructs like ChatGPT, Dall-E, and Midjourney. In this interview with a couple of my favorite engineers, I get a feel for how non-AI researchers and developers think about the coming changes, and of course how they view the hype, distortions, and fabrications surrounding predictions of those changes. You can find photos of the interviewees and brief biographies at the end of the post.

The interview transcript is ~~carefully contrived~~ lightly edited for ~~believability~~ clarity.

Continue reading “CSP Blog Interview: Why We Still Need Human Signal Processors with Engineers E. Akamai and D. Peritum”

Latest Paper on CSP and Deep-Learning for Modulation Recognition: An Extended Version of My Papers [52]

Another step forward in the merging of CSP and ML for modulation recognition, and another step away from the misstep of always relying on convolutional neural networks from image processing for RF-domain problem-solving.

My Old Dominion colleagues and I have published an extended version of the 2022 MILCOM paper My Papers [52] in the journal MDPI Sensors. The first author is John Snoap, one of those rare people who are expert in both signal processing and machine learning. Bright future there! Dimitrie Popescu, James Latshaw, and I provided analysis, programming, writing, and research-direction support.

Continue reading “Latest Paper on CSP and Deep-Learning for Modulation Recognition: An Extended Version of My Papers [52]”

Update on DeepSig Datasets

‘Insufficient facts always invite danger.’
Spock in Star Trek TOS Episode “Space Seed”

As most CSP Blog readers likely know, I’ve performed detailed critical analyses (one, two, three, and four) of the modulation-recognition datasets put forth publicly by DeepSig in 2016-2018. These datasets are associated with some of their published or arxiv.org papers, such as The Literature [R138], which I also reviewed here.

My conclusion is that the DeepSig datasets are as flawed as the DeepSig papers–it was the highly flawed nature of the papers that got me started down the critical-review path in the first place.

A reader recently alerted me to a change in the Datasets page at deepsig.ai that may indicate they are listening to critics. Let’s take a look and see if there is anything more to say.

Continue reading “Update on DeepSig Datasets”

The Altar of Optimality

Danger, Will Robinson! Non-technical post approaching!

When I was a wee engineer, I’d sometimes clash with other engineers who sneered at technical approaches that didn’t set up a linear-algebraic optimization problem as the first step. Never mind that I’ve been relentlessly focused on single-sensor problems, rather than array-processing problems, so the naturalness of the linear-algebraic mathematical setting was debatable; there were still ways to fashion matrices and compute those lovely eigenvalues. The real issue wasn’t the dimensionality of the data model, it was that I didn’t have a handy crank I could turn to pop out a provably optimal solution to the posed problem. Therefore I could be safely ignored. And if nobody could actually write down an optimization problem for, say, general radio-frequency scene analysis, then that problem just wasn’t worth pursuing.

Those critical engineers worship at the altar of optimality. Time for another rant.

Continue reading “The Altar of Optimality”

Is Radio-Frequency Scene Analysis a Wicked Problem?

‘By the pricking of my thumbs, something wicked this way comes …’
Macbeth by W. Shakespeare

I attended a conference on dynamic spectrum access in 2017 and participated in a session on automatic modulation recognition. The session was connected to a live competition within the conference where participants would attempt to apply their modulation-recognition system to signals transmitted in the conference center by the conference organizers. Like a grand modulation-recognition challenge but confined to the temporal, spectral, and spatial constraints imposed by the short-duration conference.

What I didn’t know going in was the level of frustration on the part of the machine-learner organizers regarding the seeming inability of signal-processing and machine-learning researchers to solve the radio-frequency scene analysis problem once and for all. The basic attitude was ‘if the image-processors can have the AlexNet image-recognition solution, and thereby abandon their decades-long attempt at developing serious mathematics-based image-processing theory and practice, why haven’t we solved the RFSA problem yet?’

Continue reading “Is Radio-Frequency Scene Analysis a Wicked Problem?”

PSK/QAM Cochannel Dataset for Modulation Recognition Researchers [CSPB.ML.2023]

The next step in dataset complexity at the CSP Blog: cochannel signals.

I’ve developed another dataset, more complex than the original sets I posted for the ML Challenge (CSPB.ML.2018 and CSPB.ML.2022), for use in assessing modulation-recognition algorithms (machine-learning-based or otherwise). Half of the new dataset consists of one signal in noise and the other half consists of two signals in noise. In most cases the two signals overlap spectrally, which is a signal condition called cochannel interference.

We’ll call it CSPB.ML.2023.
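For readers who want to see the cochannel condition concretely, here is a minimal sketch of a two-signal mixture in noise with spectrally overlapping components; every parameter value is invented for illustration and none is a CSPB.ML.2023 truth value.

```python
# Minimal sketch of a two-signal cochannel mixture in noise, in the spirit of
# the description above. All parameter values are invented for illustration;
# they are not CSPB.ML.2023 truth values.
import numpy as np

rng = np.random.default_rng(2023)

def psk_signal(n_syms, sps, order, cfo):
    """Baseband PSK with rectangular pulses and a carrier-frequency offset."""
    phases = 2 * np.pi * rng.integers(order, size=n_syms) / order
    x = np.repeat(np.exp(1j * phases), sps)       # rectangular pulse shaping
    n = np.arange(x.size)
    return x * np.exp(2j * np.pi * cfo * n)       # spectral shift by the CFO

s1 = psk_signal(4096, sps=8, order=2, cfo=0.01)   # BPSK component
s2 = psk_signal(4096, sps=8, order=4, cfo=0.02)   # QPSK, spectrally overlapping
noise = (rng.standard_normal(s1.size) + 1j * rng.standard_normal(s1.size)) / np.sqrt(2)
x = s1 + 0.7 * s2 + 0.1 * noise                   # cochannel signals plus noise
```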

Continue reading “PSK/QAM Cochannel Dataset for Modulation Recognition Researchers [CSPB.ML.2023]”

ChatGPT and CSP

Am I out of a job?

Update January 31, 2023: I’ve added numbers in square brackets next to the worst of the wrong things. I’ll document the errors at the bottom of the post.

Of course I have to see what ChatGPT has to say about CSP. Including definitions, which I don’t expect it to get too wrong, and code for estimators, which I expect it to get very wrong.

Let’s take a look.

Continue reading “ChatGPT and CSP”

ICARUS: More on Attempts to Merge IQ Data with Extracted-Feature Data in Machine Learning

How can we train a neural network to make use of both IQ data samples and CSP features in the context of weak-signal detection?

I’ve been working with some colleagues at Northeastern University (NEU) in Boston, MA, on ways to combine CSP with machine learning. The work I’m doing with Old Dominion University is focused on basic modulation recognition using neural networks and, in particular, the generalization (dataset-shift) problem that is pervasive in deep learning with convolutional neural networks. In contrast, the NEU work is focused on specific signal detection and classification problems and looks at how to use multiple disparate data types as inputs to neural networks: complex-valued samples (IQ data) as well as carefully selected components of spectral correlation and spectral coherence surfaces.

My NEU colleagues and I will be publishing a rather lengthy conference paper on a new multi-input-data neural-network approach called ICARUS at IEEE INFOCOM 2023 this May (My Papers [53]). You can get a copy of the pre-publication version here or on arxiv.org.
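To illustrate the general multi-input idea (and emphatically not the ICARUS architecture itself), here is a minimal PyTorch sketch of a two-branch network that fuses IQ samples with a vector of precomputed CSP features; all layer shapes and sizes are arbitrary choices.

```python
# Minimal sketch (not the ICARUS architecture) of a two-branch network fusing
# raw IQ samples with precomputed CSP features; all sizes are arbitrary.
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    def __init__(self, n_feats=64, n_classes=8):
        super().__init__()
        # Branch 1: 1-D convolution over a 2 x N real/imaginary sample matrix.
        self.iq_branch = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten(),    # -> 16 * 32 = 512
        )
        # Branch 2: a small MLP over the extracted CSP feature vector.
        self.feat_branch = nn.Sequential(nn.Linear(n_feats, 128), nn.ReLU())
        self.head = nn.Linear(512 + 128, n_classes)    # fused classifier

    def forward(self, iq, feats):
        fused = torch.cat([self.iq_branch(iq), self.feat_branch(feats)], dim=1)
        return self.head(fused)

# Hypothetical usage with random stand-in tensors (batch of 4, 1024 samples).
net = TwoBranchNet()
logits = net(torch.randn(4, 2, 1024), torch.randn(4, 64))
```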

Continue reading “ICARUS: More on Attempts to Merge IQ Data with Extracted-Feature Data in Machine Learning”

Correcting the Record: Comments On “Wireless Signal Representation Techniques for Automatic Modulation Classification,” by X. Liu et al

It’s too close to home, and it’s too near the bone …

Park the car at the side of the road
You should know
Time’s tide will smother you…
And I will too

“That Joke Isn’t Funny Anymore” by The Smiths

I applaud the intent behind the paper in this post’s title, which is The Literature [R183], apparently accepted in 2022 for publication in IEEE Access, a peer-reviewed journal. That intent is to catalog all the ways researchers have found to preprocess radio-frequency data (complex sampled data) prior to applying some sort of modulation-classification (recognition) algorithm or system.

The problem is that this attempt at gathering up all of the ‘representations’ gets a lot of the math wrong, and so has a high potential to confuse rather than illuminate.

There’s only one thing to do: correct the record.

Continue reading “Correcting the Record: Comments On “Wireless Signal Representation Techniques for Automatic Modulation Classification,” by X. Liu et al”

Neural Networks for Modulation Recognition: IQ-Input Networks Do Not Generalize, but Cyclic-Cumulant-Input Networks Generalize Very Well

Neural networks with CSP-feature inputs DO generalize in the modulation-recognition problem setting.

In some recently published papers (My Papers [50,51]), my ODU colleagues and I showed that convolutional neural networks and capsule networks do not generalize well when their inputs are complex-valued data samples, commonly referred to as simply IQ samples, or as raw IQ samples by machine learners. (It is unclear why the adjective ‘raw’ is so often used, as it adds nothing to the meaning. If I just say Hey, pass me those IQ samples, would ya?, do you think maybe he means the processed ones? How about raw-I-mean–seriously-man–I-did-not-touch-those-numbers-OK? IQ samples? All-natural vegan unprocessed no-GMO organic IQ samples? Uncooked IQ samples?) Moreover, the capsule networks typically outperform the convolutional networks.

In a new paper (MILCOM 2022: My Papers [52]; arxiv.org version), my colleagues and I continue this line of research by including cyclic cumulants as the inputs to convolutional and capsule networks. We find that capsule networks outperform convolutional networks and that convolutional networks trained on cyclic cumulants outperform convolutional networks trained on IQ samples. We also find that both convolutional and capsule networks trained on cyclic cumulants generalize perfectly well between datasets that have different (disjoint) probability density functions governing their carrier frequency offset parameters.

That is, convolutional networks do better recognition with cyclic cumulants and generalize very well with them.

So why don’t neural networks ever ‘learn’ cyclic cumulants with IQ data at the input?
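For readers who haven’t computed such features, here is a minimal sketch of a second-order cyclic-moment estimate, a simple cousin of the cyclic-cumulant inputs used in the paper (this is not the paper’s feature-extraction code, and all parameters are illustrative). It also hints at why the features tolerate carrier-offset shift: the BPSK feature moves to the cycle frequency 2 × CFO, but its magnitude persists.

```python
# Minimal sketch of a second-order cyclic-moment estimate for rectangular-pulse
# BPSK. Not the paper's feature-extraction code; all parameters illustrative.
import numpy as np

rng = np.random.default_rng(0)
sps, n_syms, cfo = 8, 2048, 0.02
bits = rng.integers(2, size=n_syms) * 2 - 1           # +/-1 BPSK symbols
x = np.repeat(bits.astype(complex), sps)              # rectangular pulses
x *= np.exp(2j * np.pi * cfo * np.arange(x.size))     # apply a CFO

def cyclic_moment_20(x, alpha):
    """Estimate the order-2, zero-conjugation cyclic moment at cycle freq alpha."""
    n = np.arange(x.size)
    return np.mean(x**2 * np.exp(-2j * np.pi * alpha * n))

# BPSK has a strong (2,0) feature at alpha = 2*cfo: the feature's location
# moves with the CFO, but its magnitude survives the shift.
print(abs(cyclic_moment_20(x, 2 * cfo)))   # large (close to 1)
print(abs(cyclic_moment_20(x, 0.1234)))    # small (no cycle frequency there)
```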

The majority of the software and analysis work is performed by the first author, John Snoap, with an assist on capsule networks by James Latshaw. I created the datasets we used (available here on the CSP Blog [see below]) and helped with the blind parameter estimation. Professor Popescu guided us all and contributed substantially to the writing.

Continue reading “Neural Networks for Modulation Recognition: IQ-Input Networks Do Not Generalize, but Cyclic-Cumulant-Input Networks Generalize Very Well”

Critic and Skeptic Roundup

“That was excellently observed,” say I, when I read a passage in an author, where his opinion agrees with mine. When we differ, there I pronounce him to be mistaken.
– Jonathan Swift

Update May 2023: Added Professor Gary Smith, author of The AI Delusion, Standard Deviations, and the newer Distrust: Big Data, Data-Torturing, and the Assault on Science.

Update November 2022: Added Professor Dave and Michael Woudenberg. (How could I have forgotten Dave in the first post??)

***

A big part of the CSP Blog in the past couple years has been a critical analysis of relevant engineering literature. By ‘relevant’ I mean relevant to CSP and its main applications of presence detection, modulation recognition, parameter estimation, source separation, and array processing. So I’ve produced many ‘Comments On …’ posts lately, and this tends to solidify my reputation as a critic rather than as a creative engineer. However, the CSP and SPTK posts on the CSP Blog still vastly outnumber the ‘Comments On …’ posts. We’ll see what balance the future brings.

But if you like to see critical reviews of current science, technology, and engineering work, there are others out there doing it much better than me and doing it for much more important topics, such as artificial intelligence, particle physics, cosmology, and social media.

So here in this post I want to introduce you, should you care to view the rest of the post, to other critics that you might enjoy, and might use as a balance against the typically credulous mainstream media and those pundits, bloggers, and YouTubers that do more promoting (influencing?) than analyzing.

Continue reading “Critic and Skeptic Roundup”

Epistemic Bubbles: Comments on “Modulation Recognition Using Signal Enhancement and Multi-Stage Attention Mechanism” by Lin, Zeng, and Gong.

Another brick in the wall, another drop in the bucket, another windmill on the horizon …

Let’s talk more about The Cult. No, I don’t mean She Sells Sanctuary, for which I do have considerable nostalgic fondness. I mean the Cult(ure) of Machine Learning in RF communications and signal processing. Or perhaps it is more of an epistemic bubble, where there are The Things That Must Be Said and The Unmentionables in every paper, and a style of research that is strictly adhered to but that, sadly, produces mostly error and promotes mostly hype. So we have shibboleths, taboos, and norms to deal with inside the bubble.

Time to get on my high horse. She’s a good horse named Ravager and she needs some exercise. So I’m going to strap on my claymore, mount Ravager, and go for a ride. Or am I merely tilting at windmills?

Let’s take a close look at another paper on machine learning for modulation recognition. It uses, uncritically, the DeepSig RML 2016 datasets. And the world and the world, the world drags me down…

Continue reading “Epistemic Bubbles: Comments on “Modulation Recognition Using Signal Enhancement and Multi-Stage Attention Mechanism” by Lin, Zeng, and Gong.”

What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]

Starts as a personal gripe, but ends with weird stuff from the literature.

During my poking around on arxiv.org the other day (Grrrrr…), I came across some postings by O’Shea et al. that I’d not seen before, including The Literature [R176]: “Wideband Signal Localization and Spectral Segmentation.”

Huh, I thought, they are probably trying to train a neural network to do automatic spectral segmentation that is superior to my published algorithm (My Papers [32]). Yeah, no. I mean yes to training a machine, no to any nod to me. Let’s take a look.

Continue reading “What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]”