What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]

Starts as a personal gripe, but ends with weird stuff from the literature.

During my poking around on arxiv.org the other day (Grrrrr…), I came across some postings by O’Shea et al. that I’d not seen before, including The Literature [R176]: “Wideband Signal Localization and Spectral Segmentation.”

Huh, I thought, they are probably trying to train a neural network to do automatic spectral segmentation that is superior to my published algorithm (My Papers [32]). Yeah, no. I mean yes, they train a machine to do it, but no, there are no nods to my work. Let’s take a look.

Continue reading “What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]”

Elegy for a Dying Field: Comments on “Detection of Direct Sequence Spread Spectrum Signals Based on Deep Learning,” by F. Wei et al

Black-box thinking is degrading our ability to connect effects to causes.

I’m learning, slowly because I’m stubborn and (I know it is hard to believe) optimistic, that there is no bottom. Signal processing and communications theory and practice are being steadily degraded in the world’s best (and worst of course) peer-reviewed journals.

I saw the accepted paper named in the post title (The Literature [R177]) and thought it could be better than most of the machine-learning modulation-recognition papers I’ve reviewed. It takes a little more effort to properly understand and generate direct-sequence spread-spectrum (DSSS) signals, and the authors would likely focus on the practical case where the inband SNR is low. Plus there are lots of connections to CSP. But no. Let’s take a look.

Continue reading “Elegy for a Dying Field: Comments on “Detection of Direct Sequence Spread Spectrum Signals Based on Deep Learning,” by F. Wei et al”

Some Concrete Results on Generalization in Modulation Recognition using Machine Learning

Neural networks with I/Q data as input do not generalize in the modulation-recognition problem setting.

Update May 20, 2022: Here is the arxiv.org link.

Back in 2018 I posted a dataset consisting of 112,000 I/Q data files, each 32,768 samples in length, as part of a challenge to machine learners who had been making strong claims of superiority over signal processing in the area of automatic modulation recognition. One part of the challenge was modulation recognition involving eight digital modulation types, and the other was estimating the carrier frequency offset. That dataset is described here, and I refer to it as CSPB.ML.2018.

Then in 2022 I posted a companion dataset to CSPB.ML.2018 called CSPB.ML.2022. This new dataset uses the same eight modulation types and similar ranges of SNR, pulse type, and symbol rate, but the probability distribution of the random variable that governs the carrier-frequency offset differs from the distribution used in CSPB.ML.2018. The purpose of the CSPB.ML.2022 dataset is to facilitate studies of the dataset-shift, or generalization, problem in machine learning.
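Below is a minimal sketch of the generalization experiment these two datasets enable: train on CSPB.ML.2018, then test on both held-out CSPB.ML.2018 signals and on CSPB.ML.2022. The loader and classifier functions are hypothetical placeholders (the actual file formats are documented in the dataset posts, and the actual classifiers are described in My Papers [51]); only the protocol is the point.

```python
# Sketch of the protocol only; the functions below are hypothetical placeholders.

def load_cspb_dataset(name):
    """Hypothetical reader: return a (num_signals x 32768) complex array and labels."""
    raise NotImplementedError(f"supply a reader for {name}")

def train_classifier(x, y):
    """Hypothetical trainer: any modulation classifier (NN on I/Q, CSP features, ...)."""
    raise NotImplementedError

def accuracy(model, x, y):
    """Hypothetical scorer: fraction of correct modulation-type decisions."""
    raise NotImplementedError

x18, y18 = load_cspb_dataset("CSPB.ML.2018")
x22, y22 = load_cspb_dataset("CSPB.ML.2022")

# Train on CSPB.ML.2018, holding out part of it for in-distribution testing.
n_train = int(0.8 * len(y18))
model = train_classifier(x18[:n_train], y18[:n_train])

acc_same_cfo_distribution    = accuracy(model, x18[n_train:], y18[n_train:])  # no shift
acc_shifted_cfo_distribution = accuracy(model, x22, y22)                      # shifted CFO

# A large gap between these two numbers is the generalization failure at issue.
print(acc_same_cfo_distribution, acc_shifted_cfo_distribution)
```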

Throughout the past couple of years I’ve been working with some graduate students and a professor at Old Dominion University on merging machine learning and signal processing for problems involving RF signal analysis, such as modulation recognition. We are starting to publish a sequence of papers that describe our efforts. I briefly describe the results of one such paper, My Papers [51], in this post.

Continue reading “Some Concrete Results on Generalization in Modulation Recognition using Machine Learning”

A Great American Science Writer: Lee Smolin

While reading a book on string theory for lay readers, I did a double take…

I don’t know why I haven’t read any of Lee Smolin’s physics books prior to this year, but I haven’t. Maybe blame my obsession with Sean Carroll. In any case, I’ve been reading The Trouble with Physics (The Literature [R175]), which is about string theory and string theorists. Smolin finds it troubling that the string-theorist subculture in physics shows some signs of groupthink and authoritarianism. Perhaps elder worship too.

I came across this list of attributes, conceived by Smolin, of the ‘sociology’ of the string-theorist contingent:

Continue reading “A Great American Science Writer: Lee Smolin”

The Domain Expertise Trap

The softwarization of engineering continues apace…

I keep seeing people write things like “a major disadvantage of the technique for X is that it requires substantial domain expertise.” Let’s look at a recent good paper that makes many such remarks and try to understand what it could mean, and if having or getting domain expertise is actually a bad thing. Spoiler: It isn’t.

The paper under the spotlight is The Literature [R174], “Interference Suppression Using Deep Learning: Current Approaches and Open Challenges,” published for the nonce on arxiv.org. I’m not calling this post a “Comments On …” post, because once I extract the (many) quotes about domain expertise, I’m leaving the paper alone. The paper is a good paper and I expect it to be especially useful for current graduate students looking to make a contribution in the technical area where machine learning and RF signal processing overlap. I especially like Figure 1 and the various Tables.

Continue reading “The Domain Expertise Trap”

Shifted Dataset for the Machine-Learning Challenge: How Well Does a Modulation-Recognition DNN Generalize? [Dataset CSPB.ML.2022]

Another RF-signal dataset to help push along our R&D on modulation recognition.

In this post I provide a second dataset for the Machine-Learning Challenge I issued in 2018 (CSPB.ML.2018). This dataset is similar to the original, but with a key difference: the probability distribution of the carrier-frequency offset parameter, viewed as a random variable, is not the same as in the original dataset, although it is still realistic.

Blog Note: By WordPress’ count, this is the 100th post on the CSP Blog. Together with a handful of pages (like My Papers and The Literature), these hundred posts have resulted in about 250,000 page views. That’s an average of 2,500 page views per post. However, the variance of the per-post page views is quite large. The most popular is The Spectral Correlation Function (> 16,000), while the post More on Pure and Impure Sinewaves, from the same era, has only 316 views. A big Thanks to all my readers!!

Continue reading “Shifted Dataset for the Machine-Learning Challenge: How Well Does a Modulation-Recognition DNN Generalize? [Dataset CSPB.ML.2022]”

One Last Time …

We take a quick look at a fourth DeepSig dataset called 2016.04C.multisnr.tar.bz2 in the context of the data-shift problem in machine learning.

And if we get this right,

We’re gonna teach ’em how to say

Goodbye …

You and I.

Lin-Manuel Miranda, “One Last Time,” Hamilton

I didn’t expect to have to do this, but I am going to analyze yet another DeepSig dataset. One last time. This one is called 2016.04C.multisnr.tar.bz2, and is described thusly on the DeepSig website:

Figure 1. Description of various DeepSig data sets found on the DeepSig website as of November 2021.

I’ve analyzed the 2018 dataset here, the RML2016.10b.tar.bz2 dataset here, and the RML2016.10a.tar.bz2 dataset here.

Now I’ve come across a manuscript-in-review in which both the RML2016.10a and RML2016.04c data sets are used. The idea is that these two datasets are sufficiently distinct to be good candidates for a data-shift study involving trained neural-network modulation-recognition systems.

The data-shift problem is, as one researcher puts it:

Data shift or data drift, concept shift, changing environments, data fractures are all similar terms that describe the same phenomenon: the different distribution of data between train and test sets

Georgios Sarantitis

But … are they really all that different?

Continue reading “One Last Time …”

The Signal-Processing Equivalent of Resume-Padding? Comments on “A Robust Modulation Classification Method Using Convolutional Neural Networks” by S. Zhou et al.

Does the use of ‘total SNR’ mislead when the fractional bandwidth is very small? What constitutes ‘weak-signal processing?’

Or maybe “Comments on” here should be “Questions on.”

In a recent paper in the EURASIP Journal on Advances in Signal Processing (The Literature [R165]), the authors tackle the problem of machine-learning-based modulation recognition for highly oversampled rectangular-pulse digital signals. They don’t use the DeepSig data sets, but their data-set description and use of ‘signal-to-noise ratio’ leave a lot to be desired. Let’s take a brief look. See if you agree with me that touting their results as evidence that they can reliably classify signals with ‘SNRs of -10 dB’ is unwarranted and misleading.
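To see why a ‘total SNR’ can mislead, consider the relationship between the SNR measured over the full sampled bandwidth and the SNR measured over just the signal’s occupied bandwidth. The numbers below are my own illustrative assumptions, not values taken from The Literature [R165].

```python
import numpy as np

# Illustrative numbers only (not taken from the paper): relate 'total SNR' (noise
# power measured over the entire sampled bandwidth) to inband SNR (noise power
# measured only over the signal's occupied bandwidth).

fs = 100e6          # sampled bandwidth (Hz), assumed for illustration
B  = 1e6            # occupied signal bandwidth (Hz), assumed for illustration
total_snr_db = -10  # the kind of headline figure touted as 'weak-signal' performance

# For white noise, the noise power falling inside the signal band scales as B/fs,
# so the inband SNR exceeds the total SNR by the fractional-bandwidth factor fs/B.
inband_snr_db = total_snr_db + 10 * np.log10(fs / B)
print(f"total SNR = {total_snr_db} dB  ->  inband SNR = {inband_snr_db:.1f} dB")

# With these numbers, a '-10 dB' total SNR corresponds to a +10 dB inband SNR,
# which is not a weak-signal scenario.
```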

Continue reading “The Signal-Processing Equivalent of Resume-Padding? Comments on “A Robust Modulation Classification Method Using Convolutional Neural Networks” by S. Zhou et al.”

Comments on “Deep Neural Network Feature Designs for RF Data-Driven Wireless Device Classification,” by B. Hamdaoui et al

Another post-publication review of a paper that is weak on the ‘RF’ in RF machine learning.

Let’s take a look at a recently published paper (The Literature [R148]) on machine-learning-based modulation recognition to get a data point on how some electrical engineers (these authors are more on the computer-science side, I believe) use mathematics when they turn to radio-frequency problems. You can guess it isn’t pretty, and that I’m not here to exalt their acumen.

Continue reading “Comments on “Deep Neural Network Feature Designs for RF Data-Driven Wireless Device Classification,” by B. Hamdaoui et al”

Are Probability Density Functions “Engineered” or “Hand-Crafted” Features?

The Machine Learners think that their “feature engineering” (rooting around in voluminous data) is the same as “features” in mathematically derived signal-processing algorithms. I take a lighthearted look.

One of the things the machine learners never tire of saying is that their neural-network approach to classification is superior to previous methods because, in part, those older methods use hand-crafted features. They put it in different ways, but somewhere in the introductory section of a machine-learning modulation-recognition (ML/MR) paper, you’ll likely see the claim. You can look through the ML/MR papers I’ve cited in The Literature ([R133]-[R146]) if you are curious, but I’ll extract a couple here just to illustrate the idea.

Continue reading “Are Probability Density Functions “Engineered” or “Hand-Crafted” Features?”

DeepSig’s 2018 Data Set: 2018.01.OSC.0001_1024x2M.h5.tar.gz

The third DeepSig data set I’ve examined. It’s better!

Update February 2021. I added material relating to the DeepSig-claimed variation of the roll-off parameter in a square-root raised-cosine pulse-shaping function. It does not appear that the roll-off was actually varied as stated in Table I of [R137].
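For context, here is a small sketch (mine, with illustrative parameter values, not anything from [R137]) showing why the roll-off claim is checkable from the data: the roll-off directly sets the occupied bandwidth of a square-root raised-cosine pulse, so if the roll-off had really been varied, the signals’ spectral widths would vary with it.

```python
import numpy as np

# Illustrative sketch: occupied bandwidth of a square-root raised-cosine (SRRC)
# pulse versus roll-off beta. Symbol rate and beta values are assumptions.

def srrc_spectrum(f, Rsym, beta):
    """Magnitude of the SRRC spectrum (square root of the raised-cosine spectrum)."""
    f = np.abs(f)
    S = np.zeros_like(f)
    passband = f <= (1 - beta) * Rsym / 2
    rolloff = (f > (1 - beta) * Rsym / 2) & (f <= (1 + beta) * Rsym / 2)
    S[passband] = 1.0
    S[rolloff] = np.sqrt(0.5 * (1 + np.cos(np.pi / (beta * Rsym)
                         * (f[rolloff] - (1 - beta) * Rsym / 2))))
    return S

Rsym = 1.0  # normalized symbol rate
f = np.linspace(-1.5, 1.5, 2001)
for beta in (0.1, 0.35, 1.0):
    S = srrc_spectrum(f, Rsym, beta)
    occupied = f[S > 0]
    # Expect roughly (1 + beta) * Rsym: 1.10, 1.35, 2.00
    print(f"beta = {beta:4.2f}: occupied bandwidth ~ {occupied.max() - occupied.min():.2f} x Rsym")
```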

DeepSig’s data sets are popular in the machine-learning modulation-recognition community, and in that community there are many claims that the deep neural networks are vastly outperforming any expertly hand-crafted tired old conventional method you care to name (none are usually named though). So I’ve been looking under the hood at these data sets to see what the machine learners think of as high-quality inputs that lead to disruptive upending of the sclerotic mod-rec establishment. In previous posts, I’ve looked at two of the most popular DeepSig data sets from 2016 (here and here). In this post, we’ll look at one more and I will then try to get back to the CSP posts.

Let’s take a look at one more DeepSig data set: 2018.01.OSC.0001_1024x2M.h5.tar.gz.

Continue reading “DeepSig’s 2018 Data Set: 2018.01.OSC.0001_1024x2M.h5.tar.gz”

More on DeepSig’s RML Data Sets

The second DeepSig data set I analyze: SNR problems and strange PSDs.

I presented an analysis of one of DeepSig’s earlier modulation-recognition data sets (RML2016.10a.tar.bz2) in the post on All BPSK Signals. There we saw several flaws in the data set as well as curiosities. Most notably, the signals in the data set labeled as analog amplitude-modulated single sideband (AM-SSB) were absent: these signals were only noise. DeepSig has several other data sets on offer at the time of this writing.

In this post, I’ll present a few thoughts and results for the “Larger Version” of RML2016.10a.tar.bz2, which is called RML2016.10b.tar.bz2. This is a good post to offer because it follows on from the first RML post, and because more papers are being published that use the RML 10b data set, with more such papers in review. Maybe the analysis offered here will help reviewers better understand and critique the machine-learning papers. Those papers never contain any side analysis or validation of the RML data sets (let me know in the Comments below if you find one that does), so we can’t rely on the machine learners to assess their inputs. (Update: I analyze a third DeepSig data set here. And a fourth and final one here.)

Continue reading “More on DeepSig’s RML Data Sets”

All BPSK Signals

An analysis of DeepSig’s 2016.10A data set, used in many published machine-learning papers, and detailed comments on quite a few of those papers.

Update March 2021

See my analyses of three other DeepSig datasets here, here, and here.

Update June 2020

I’ll be adding new papers to this post as I find them. At the end of the original post there is a sequence of date-labeled updates that briefly describe the relevant aspects of the newly found papers. Some machine-learning modulation-recognition papers deserve their own posts, so check back at the CSP Blog from time to time for “Comments On …” posts.

Continue reading “All BPSK Signals”

Can a Machine Learn a Power Spectrum Estimator?

Learning machine learning for radio-frequency signal-processing problems, continued.

I continue with my foray into machine learning (ML) by considering whether we can use widely available ML tools to create a machine that can output accurate power spectrum estimates. Previously we considered the perhaps simpler problem of learning the Fourier transform. See here and here.

Along the way I’ll expose my ignorance of the intricacies of machine learning and my apparent inability to find the correct hyperparameter settings for any problem I look at. But that’s where you come in, dear reader. Let me know what to do!
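For concreteness, the kind of conventional estimate such a machine would have to reproduce is an averaged-periodogram (Welch-style) PSD estimate. The sketch below is my own minimal version, without windowing or block overlap, and the test-signal parameters are illustrative.

```python
import numpy as np

# Minimal averaged-periodogram PSD estimate: the 'target' a learned estimator
# would need to reproduce. No windowing or block overlap, for simplicity.

def averaged_periodogram(x, nfft=256):
    """Average the magnitude-squared FFTs of non-overlapping length-nfft blocks."""
    nblocks = len(x) // nfft
    blocks = x[:nblocks * nfft].reshape(nblocks, nfft)
    X = np.fft.fft(blocks, axis=1)
    return np.fft.fftshift(np.mean(np.abs(X) ** 2, axis=0)) / nfft

# Illustrative input: a complex sinewave at normalized frequency 0.1 in unit-power noise.
rng = np.random.default_rng(0)
n = np.arange(65536)
noise = (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)) / np.sqrt(2)
x = np.exp(2j * np.pi * 0.1 * n) + noise

psd = averaged_periodogram(x)
print("estimated peak frequency (expect about 0.1):", (np.argmax(psd) - 128) / 256)
```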

Continue reading “Can a Machine Learn a Power Spectrum Estimator?”

Data Set for the Machine-Learning Challenge [CSPB.ML.2018]

A PSK/QAM/SQPSK data set with randomized symbol rate, inband SNR, carrier-frequency offset, and pulse roll-off.

Update April 2022. I’ve also posted a second dataset here. This new dataset is similar to the original ML Challenge dataset except the random variable representing the carrier frequency offset has a slightly different distribution.

If you refer to either of the posted datasets in a published paper, please use the following designators, which I am also using in papers I’m attempting to publish:

Original ML Challenge Dataset: CSPB.ML.2018.

Shifted ML Challenge Dataset: CSPB.ML.2022.

Update September 2020. I made a mistake when I created the signal-parameter “truth” files signal_record.txt and signal_record_first_20000.txt. As with the DeepSig RML data sets that I analyzed on the CSP Blog here and here, the SNR parameter in the truth files did not match the actual SNR of the signals in the data files. I’ve updated the truth files and the links below. You can still use the original files for all other signal parameters, but the SNR parameter was in error.
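For readers who want to spot-check a truth-file SNR themselves, here is a minimal sketch of one way to do it; the block length and occupancy threshold are illustrative choices, and this is not necessarily the procedure used to regenerate the truth files.

```python
import numpy as np

# Crude PSD-based inband-SNR check for a complex I/Q record x, assuming the signal
# occupies a contiguous band and the remaining bins are noise. Illustrative only.

def estimate_inband_snr_db(x, nfft=512, noise_fraction=0.2):
    nblocks = len(x) // nfft
    X = np.fft.fft(x[:nblocks * nfft].reshape(nblocks, nfft), axis=1)
    psd = np.mean(np.abs(X) ** 2, axis=0)
    sorted_psd = np.sort(psd)
    noise_level = np.mean(sorted_psd[: int(noise_fraction * nfft)])  # quietest bins = noise floor
    signal_bins = psd > 4 * noise_level                              # ~6 dB above the floor
    signal_power = np.sum(psd[signal_bins] - noise_level)
    noise_power_inband = noise_level * np.count_nonzero(signal_bins)
    return 10 * np.log10(signal_power / noise_power_inband)

# Compare the estimate for each data file against its entry in signal_record.txt.
```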

Update July 2020. I originally posted 20,000 signals in the posted data set. I’ve now added another 92,000 for a total of 112,000 signals. The original signals are contained in Batches 1-5, the additional signals in Batches 6-28. I’ve placed these additional Batches at the end of the post to preserve the original post’s content.

Continue reading “Data Set for the Machine-Learning Challenge [CSPB.ML.2018]”

How we Learned CSP

We learned it using abstractions involving various infinite quantities. Can a machine learn it without that advantage?

This post is just a blog post. Just some guy on the internet thinking out loud. If you have relevant thoughts or arguments you’d like to advance, please leave them in the Comments section at the end of the post.

How did we, as people, not machines, learn to do cyclostationary signal processing? We’ve successfully applied it to many real-world problems, such as weak-signal detection, interference-tolerant detection, interference-tolerant time-delay estimation, modulation recognition, joint multiple-cochannel-signal modulation recognition (My Papers [25,26,28,38,43]), synchronization (The Literature [R7]), beamforming (The Literature [R102,R103]), direction-finding (The Literature [R104-R106]), detection of imminent mechanical failures (The Literature [R107-R109]), linear time-invariant system identification (The Literature [R110-R115]), and linear periodically time-variant filtering for cochannel signal separation (FRESH filtering) (My Papers [45], The Literature [R6]).

How did this come about? Is it even interesting to ask the question? Well, it is to me. I ask it because of the current hot topic in signal processing: machine learning. And in particular, machine learning applied to modulation recognition (see here and here and here and here). The machine learners want to capitalize on the success of machine learning as applied to image recognition by directly applying the same sorts of image-recognition techniques to the problem of automatic type-recognition for human-made electromagnetic waves.

Continue reading “How we Learned CSP”

A Challenge for the Machine Learners

The machine-learning modulation-recognition community consistently claims vastly superior performance to anything that has come before. Let’s test that.

Update April 2022

I’ve also posted a second dataset here. This new dataset is similar to the original ML Challenge dataset except the random variable representing the carrier frequency offset has a slightly different distribution.

If you refer to either of the posted datasets in a published paper, please use the following designators, which I am also using in papers I’m attempting to publish:

Original ML Challenge Dataset: CSPB.ML.2018.

Shifted ML Challenge Dataset: CSPB.ML.2022.

Update February 2019

I’ve decided to post the data set I discuss here to the CSP Blog for all interested parties to use. See the new post on the Data Set. If you do use it, please let me and the CSP Blog readers know how you fared with your experiments in the Comments section of either post. Thanks!

Continue reading “A Challenge for the Machine Learners”

‘Can a Machine Learn the Fourier Transform?’ Redux, Plus Relevant Comments on a Machine-Learning Paper by M. Kulin et al.

Reconsidering my first attempt at teaching a machine the Fourier transform with the help of a CSP Blog reader. Also, the Fourier transform is viewed by Machine Learners as an input data representation, and that representation matters.

I first considered whether a machine (neural network) could learn the (64-point, complex-valued) Fourier transform in this post. I used MATLAB’s Neural Network Toolbox and failed to get good learning results because I did not properly set the machine’s hyperparameters. A kind reader named Vito Dantona provided a comment on that original post that contained good hyperparameter selections, and I’m going to report the new results in this post.

Since the Fourier transform is linear, the machine should be set up to do linear processing. It can’t just figure that out for itself. Once I used Vito’s suggested hyperparameters to force the machine to be linear, the results became much better.
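Here is a minimal sketch, my own toy example rather than Vito’s actual configuration, of why the linearity constraint matters: restricted to a single complex linear map, ‘learning’ the 64-point DFT amounts to fitting a 64-by-64 matrix, and a least-squares fit (the solution a gradient-trained linear network converges to) recovers the DFT matrix essentially exactly.

```python
import numpy as np

# Toy example: because the 64-point DFT is linear, a machine restricted to a single
# linear layer can recover it exactly from input/output examples. The 'training'
# here is a one-shot least-squares fit rather than gradient descent.

N = 64
rng = np.random.default_rng(1)
X = rng.standard_normal((1000, N)) + 1j * rng.standard_normal((1000, N))  # training inputs
Y = np.fft.fft(X, axis=1)                                                  # training targets

# Solve Y ~= X @ A for the linear map A (the 'weights' of the linear machine).
A, *_ = np.linalg.lstsq(X, Y, rcond=None)

F = np.fft.fft(np.eye(N))                    # the true 64-point DFT matrix
print("max |A - F| =", np.abs(A - F).max())  # near machine precision
```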

Continue reading “‘Can a Machine Learn the Fourier Transform?’ Redux, Plus Relevant Comments on a Machine-Learning Paper by M. Kulin et al.”

Can a Machine Learn the Fourier Transform?

Well, can it? I mean, can it REALLY? Or just approximately?

Update: See Part 2 of this post at this link. If you want to leave a comment, leave it on Part 2. Comments are closed on this Part 1 post.

Continue reading “Can a Machine Learn the Fourier Transform?”

Machine Learning and Modulation Recognition: Comments on “Convolutional Radio Modulation Recognition Networks” by T. O’Shea, J. Corgan, and T. Clancy

Update October 2020:

Since I wrote the paper review in this post, I’ve analyzed three of O’Shea’s data sets (O’Shea is with the company DeepSig, so I’ve been referring to the data sets as DeepSig’s in other posts): All BPSK Signals, More on DeepSig’s Data Sets, and DeepSig’s 2018 Data Set. The data set relating to this paper is analyzed in All BPSK Signals. Preview: It is heavily flawed.

Continue reading “Machine Learning and Modulation Recognition: Comments on “Convolutional Radio Modulation Recognition Networks” by T. O’Shea, J. Corgan, and T. Clancy”