The Laplace transform easily handles signals that are not Fourier transformable by introducing an exponential damping function inside the transform integral.
Previous SPTK Post: MATLAB’s resample.m Next SPTK Post: TBD
In this Signal Processing ToolKit post, we look at a generalization of the Fourier transform called the Laplace Transform. This is a stepping stone on the way to the Z Transform, which is widely used in discrete-time signal processing, especially in control theory.
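To make the "exponential damping" claim concrete: with \(s = \sigma + j 2\pi f\), the Laplace integral is exactly the Fourier transform of the damped signal \(x(t)e^{-\sigma t}\) (a standard identity, not a quote from the post):

```latex
X(s) = \int_{-\infty}^{\infty} x(t)\, e^{-st}\, dt
     = \int_{-\infty}^{\infty} \big[\, x(t)\, e^{-\sigma t} \big]\, e^{-j 2\pi f t}\, dt,
\qquad s = \sigma + j 2\pi f .
```

For example, the unit step \(u(t)\) has no ordinary Fourier transform, but for any \(\sigma > 0\) the damped integrand decays, the integral converges, and \(X(s) = 1/s\) for \(\Re(s) > 0\).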
Continue reading “SPTK: The Laplace Transform”
‘Insufficient facts always invite danger.’
Spock in Star Trek TOS Episode “Space Seed”
As most CSP Blog readers likely know, I’ve performed detailed critical analyses (one, two, three, and four) of the modulation-recognition datasets put forth publicly by DeepSig in 2016-2018. These datasets are associated with some of their published or arxiv.org papers, such as The Literature [R138], which I also reviewed here.
My conclusion is that the DeepSig datasets are as flawed as the DeepSig papers–it was the highly flawed nature of the papers that got me started down the critical-review path in the first place.
A reader recently alerted me to a change in the Datasets page at deepsig.ai that may indicate they are listening to critics. Let’s take a look and see if there is anything more to say.
Continue reading “Update on DeepSig Datasets”
Danger Will Robinson! Non-technical post approaching!
When I was a wee engineer, I’d sometimes clash with other engineers who sneered at technical approaches that didn’t set up a linear-algebraic optimization problem as the first step. Never mind that I’ve been relentlessly focused on single-sensor problems, rather than array-processing problems, so that the naturalness of the linear-algebraic mathematical setting was debatable–there were still ways to fashion matrices and compute those lovely eigenvalues. The real issue wasn’t the dimensionality of the data model; it was that I didn’t have a handy crank I could turn to pop out a provably optimal solution to the posed problem. Therefore I could be safely ignored. And if nobody could actually write down an optimization problem for, say, general radio-frequency scene analysis, then that problem just wasn’t worth pursuing.
Those critical engineers worship at the altar of optimality. Time for another rant.
Continue reading “The Altar of Optimality”
Sometimes MATLAB’s resample.m gives results that can be trouble for subsequent CSP.
Previous SPTK Post: Echo Detection Next SPTK Post: The Laplace Transform
In this brief Signal Processing Toolkit note, I warn you about relying on resample.m to increase the sampling rate of your data. It works fine a lot of the time, but when the signal has significant energy near the band edges, it does not.
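The band-edge effect is easy to reproduce. Here is a minimal sketch using SciPy's `resample_poly` (a polyphase resampler analogous to MATLAB's `resample.m`; the tone frequencies, resampling factors, and signal length are my illustrative choices, not values from the post). A tone deep inside the band survives the rate change; a tone near the band edge falls in the transition band of the anti-imaging lowpass filter and is attenuated:

```python
import numpy as np
from scipy.signal import resample_poly

n = np.arange(65536)
rms = lambda v: np.sqrt(np.mean(v**2))

# Tone deep inside the band (Nyquist is 0.5 cycles/sample): survives intact.
x_lo = np.cos(2 * np.pi * 0.05 * n)
# Tone near the band edge: lands in the transition band of the
# anti-imaging lowpass filter used by the polyphase resampler.
x_hi = np.cos(2 * np.pi * 0.49 * n)

y_lo = resample_poly(x_lo, 2, 1)  # upsample by a factor of two
y_hi = resample_poly(x_hi, 2, 1)

print(rms(y_lo) / rms(x_lo))  # close to 1: passband tone preserved
print(rms(y_hi) / rms(x_hi))  # well below 1: band-edge tone attenuated
```

The same mechanism operates in `resample.m`: the anti-imaging filter cannot have a zero-width transition band, so signal energy near the band edge is partially destroyed, which can corrupt downstream cyclic-feature estimates.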
Continue reading “SPTK Addendum: Problems with resampling using MATLAB’s resample.m”
CSP can be used to separate cochannel contemporaneous signals. The involved signal-processing structure is linear but periodically time-varying.
In most of the posts on the CSP Blog we’ve applied the theory and tools of CSP to parameter estimation of one sort or another: cycle-frequency estimation, time-delay estimation, synchronization-parameter estimation, and of course estimation of the spectral correlation, spectral coherence, cyclic cumulant, and cyclic polyspectral functions.
In this post, we’ll switch gears a bit and look at the problem of waveform estimation. This comes up in two situations for me: single-sensor processing and array (multi-sensor) processing. At some point, I’ll write a post on array processing for waveform estimation (using, say, the SCORE algorithm The Literature [R102]), but here we restrict our attention to the case of waveform estimation using only a single sensor (a single antenna connected to a single receiver). We just have one observed sampled waveform to work with. There are also waveform estimation methods that are multi-sensor but not typically referred to as array processing, such as the blind source separation problem in acoustic scene analysis, which is often solved by principal component analysis (PCA), independent component analysis (ICA), and their variants.
The signal model consists of the noisy sum of two or more modulated waveforms that overlap in both time and frequency. If the signals do not overlap in time, then we can separate them by time gating, and if they do not overlap in frequency, we can separate them using linear time-invariant systems (filters).
Relevant FRESH filtering publications include My Papers [45, 46] and The Literature [R6].
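Structurally, a FRESH filter is a bank of frequency shifters, one per exploited cycle frequency, each followed by an LTI filter, with the branch outputs summed. A minimal sketch of that linear periodically-time-varying structure (the function name, shift values, and taps are placeholders of mine, not designs from the post):

```python
import numpy as np

def fresh_filter(x, branches):
    """Apply a frequency-shift (FRESH) filter to a complex signal x.

    branches: list of (alpha, h) pairs. Each branch frequency-shifts x
    by the cycle frequency alpha (cycles/sample), filters the shifted
    signal with FIR taps h, and the branch outputs are summed.
    """
    n = np.arange(len(x))
    y = np.zeros(len(x), dtype=complex)
    for alpha, h in branches:
        shifted = x * np.exp(2j * np.pi * alpha * n)
        y += np.convolve(shifted, h, mode="same")
    return y

# Sanity check: one branch with zero shift and a unit-impulse filter
# reduces to an ordinary LTI identity system.
x = np.exp(2j * np.pi * 0.01 * np.arange(64))
y = fresh_filter(x, [(0.0, np.array([1.0]))])
```

The design problem, which cyclic Wiener filtering solves, is choosing the set of shifts (cycle frequencies of the signals present) and the filters so that the summed output best estimates the desired waveform.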
Continue reading “Frequency Shift (FRESH) Filtering for Single-Sensor Cochannel Signal Separation”
Update January 31, 2023: I’ve added numbers in square brackets next to the worst of the wrong things. I’ll document the errors at the bottom of the post.
Of course I have to see what ChatGPT has to say about CSP. Including definitions, which I don’t expect it to get too wrong, and code for estimators, which I expect it to get very wrong.
Let’s take a look.
Continue reading “ChatGPT and CSP”
Let’s apply some of our Signal Processing ToolKit tools to a problem in forensic signal processing!
Previous SPTK Post: The Sampling Theorem Next SPTK Post: Resampling in MATLAB
No, not that prisoner’s dilemma. The dilemma of a prisoner who steadfastly claims innocence. Even in the face of strong evidence and a fair jury trial.
In this Signal Processing ToolKit cul-de-sac of a post, we’ll look into a signal-processing adventure involving a digital sting recording and a claim of evidence tampering. We’ll be able to use some of our SPTK tools to investigate a real-world data record that might, just might, have been tampered with. (But most probably not!)
Continue reading “SPTK: Echo Detection and the Prisoner’s Dilemma”
It’s too close to home, and it’s too near the bone …
Park the car at the side of the road
You should know
Time’s tide will smother you…
And I will too

“That Joke Isn’t Funny Anymore” by The Smiths
I applaud the intent behind the paper in this post’s title, which is The Literature [R183], apparently accepted in 2022 for publication in IEEE Access, a peer-reviewed journal. That intent is to list all the found ways in which researchers preprocess radio-frequency data (complex sampled data) prior to applying some sort of modulation classification (recognition) algorithm or system.
The problem is that this attempt at gathering up all of the ‘representations’ gets a lot of the math wrong, and so has a high potential to confuse rather than illuminate.
There’s only one thing to do: correct the record.
Continue reading “Correcting the Record: Comments On “Wireless Signal Representation Techniques for Automatic Modulation Classification,” by X. Liu et al”
Neural networks with CSP-feature inputs DO generalize in the modulation-recognition problem setting.
In some recently published papers (My Papers [50,51]), my ODU colleagues and I showed that convolutional neural networks and capsule networks do not generalize well when their inputs are complex-valued data samples, commonly referred to as simply IQ samples, or as raw IQ samples by machine learners. (Unclear why the adjective ‘raw’ is often used as it adds nothing to the meaning. If I just say Hey, pass me those IQ samples, would ya?, do you think maybe he means the processed ones? How about raw-I-mean–seriously-man–I-did-not-touch-those-numbers-OK? IQ samples? All-natural vegan unprocessed no-GMO organic IQ samples? Uncooked IQ samples?) Moreover, the capsule networks typically outperform the convolutional networks.
In a new paper (MILCOM 2022: My Papers ; arxiv.org version), my colleagues and I continue this line of research by including cyclic cumulants as the inputs to convolutional and capsule networks. We find that capsule networks outperform convolutional networks and that convolutional networks trained on cyclic cumulants outperform convolutional networks trained on IQ samples. We also find that both convolutional and capsule networks trained on cyclic cumulants generalize perfectly well between datasets that have different (disjoint) probability density functions governing their carrier frequency offset parameters.
That is, with cyclic-cumulant inputs, convolutional networks both recognize modulations better and generalize very well.
So why don’t neural networks ever ‘learn’ cyclic cumulants with IQ data at the input?
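As background for why cyclic cumulants make such strong features, here is a toy second-order example (my illustrative parameters, not those of the paper): a rectangular-pulse BPSK signal has a large cyclic-autocorrelation feature at its symbol-rate cycle frequency, while an arbitrary non-cycle frequency yields only estimation noise.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 8          # samples per symbol
n_sym = 8192
# Rectangular-pulse BPSK at baseband: +/-1 symbols held for T samples.
x = np.repeat(rng.choice([-1.0, 1.0], n_sym), T)

def cyclic_autocorr(x, alpha, tau):
    """Estimate the cyclic autocorrelation R(alpha, tau) of x."""
    n = np.arange(len(x) - tau)
    return np.mean(x[n] * x[n + tau] * np.exp(-2j * np.pi * alpha * n))

feature = abs(cyclic_autocorr(x, 1.0 / T, T // 2))   # symbol-rate cycle frequency
background = abs(cyclic_autocorr(x, 0.137, T // 2))  # arbitrary non-cycle frequency
# The symbol-rate feature stands far above the non-cycle-frequency background.
```

Features like this (and their higher-order cyclic-cumulant generalizations) are what the networks are handed directly in the cyclic-cumulant-input experiments, rather than being asked to discover them from IQ samples.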
The majority of the software and analysis work was performed by the first author, John Snoap, with an assist on capsule networks by James Latshaw. I created the datasets we used (available here on the CSP Blog [see below]) and helped with the blind parameter estimation. Professor Popescu guided us all and contributed substantially to the writing.
Continue reading “Neural Networks for Modulation Recognition: IQ-Input Networks Do Not Generalize, but Cyclic-Cumulant-Input Networks Generalize Very Well”
Another brick in the wall, another drop in the bucket, another windmill on the horizon …
Let’s talk more about The Cult. No, I don’t mean She Sells Sanctuary, for which I do have considerable nostalgic fondness. I mean the Cult(ure) of Machine Learning in RF communications and signal processing. Or perhaps it is more of an epistemic bubble where there are The Things That Must Be Said and The Unmentionables in every paper and a style of research that is strictly adhered to but that, sadly, produces mostly error and promotes mostly hype. So we have shibboleths, taboos, and norms to deal with inside the bubble.
Time to get on my high horse. She’s a good horse named Ravager and she needs some exercise. So I’m going to strap on my claymore, mount Ravager, and go for a ride. Or am I merely tilting at windmills?
Let’s take a close look at another paper on machine learning for modulation recognition. It uses, uncritically, the DeepSig RML 2016 datasets. And the world and the world, the world drags me down…
Continue reading “Epistemic Bubbles: Comments on “Modulation Recognition Using Signal Enhancement and Multi-Stage Attention Mechanism” by Lin, Zeng, and Gong.”
Introducing swag for the best CSP-Blog commenters.
Update January 2023: You can find the list of winners on this page.
The comments that CSP Blog readers have made over the past six years are arguably the most helpful part of the Blog for do-it-yourself CSP practitioners. In those comments, my many errors have been revealed, which has permitted me to attempt corrections. Many unclear aspects of a post have been clarified after I pondered a reader’s comment. At least one comment has been elevated to a post of its own.
The readership of the CSP Blog has been steadily growing since its inception in 2015, but the ratio of page views to comments remains huge–the vast majority of readers do not comment. This is understandable and perfectly acceptable. I rarely comment on any of the science and engineering blogs that I frequent. Nevertheless, I would like to encourage more commenting and also reward it.
Continue reading “‘Comment of the Month’ on the CSP Blog”
Starts as a personal gripe, but ends with weird stuff from the literature.
During my poking around on arxiv.org the other day (Grrrrr…), I came across some postings by O’Shea et al I’d not seen before, including The Literature [R176]: “Wideband Signal Localization and Spectral Segmentation.”
Huh, I thought, they are probably trying to train a neural network to do automatic spectral segmentation that is superior to my published algorithm (My Papers ). Yeah, no. I mean yes to a machine, no to nods to me. Let’s take a look.
Continue reading “What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]”
May 2022 saw 6026 page views at the CSP Blog, a new monthly record!
Thanks so much to all my readers, new and old, signal processors and machine learners, commenters and lurkers.
My next non-ranty post is on frequency-shift (FRESH) filtering. I will go over cyclic Wiener filtering (The Literature [R6]), which is optimal FRESH filtering, and then describe some interesting puzzles and problems with CW filtering, which may form the seeds of some solid signal-processing research projects of the academic sort.
Black-box thinking is degrading our ability to connect effects to causes.
I’m learning, slowly because I’m stubborn and (I know it is hard to believe) optimistic, that there is no bottom. Signal processing and communications theory and practice are being steadily degraded in the world’s best (and worst of course) peer-reviewed journals.
I saw the accepted paper in the post title (The Literature [R177]) and thought this could be better than most of the machine-learning modulation-recognition papers I’ve reviewed. It takes a little more effort to properly understand and generate direct-sequence spread-spectrum (DSSS) signals, and the authors will likely focus on the practical case where the inband SNR is low. Plus there are lots of connections to CSP. But no. Let’s take a look.
Continue reading “Elegy for a Dying Field: Comments on “Detection of Direct Sequence Spread Spectrum Signals Based on Deep Learning,” by F. Wei et al”
Neural networks with I/Q data as input do not generalize in the modulation-recognition problem setting.
Update May 20, 2022: Here is the arxiv.org link.
Back in 2018 I posted a dataset consisting of 112,000 I/Q data files, 32,768 samples in length each, as a part of a challenge to machine learners who had been making strong claims of superiority over signal processing in the area of automatic modulation recognition. One part of the challenge was modulation recognition involving eight digital modulation types, and the other was estimating the carrier frequency offset. That dataset is described here, and I’d like to refer to it as CSPB.ML.2018.
Then in 2022 I posted a companion dataset to CSPB.ML.2018 called CSPB.ML.2022. This new dataset uses the same eight modulation types, similar ranges of SNR, pulse type, and symbol rate, but the random variable that governs the carrier frequency offset is different with respect to the random variable in CSPB.ML.2018. The purpose of the CSPB.ML.2022 dataset is to facilitate studies of the dataset-shift, or generalization, problem in machine learning.
Throughout the past couple of years I’ve been working with some graduate students and a professor at Old Dominion University on merging machine learning and signal processing for problems involving RF signal analysis, such as modulation recognition. We are starting to publish a sequence of papers that describe our efforts. I briefly describe the results of one such paper, My Papers , in this post.
Continue reading “Some Concrete Results on Generalization in Modulation Recognition using Machine Learning”
While reading a book on string theory for lay readers, I did a double take…
I don’t know why I haven’t read any of Lee Smolin’s physics books prior to this year, but I haven’t. Maybe blame my obsession with Sean Carroll. In any case, I’ve been reading The Trouble with Physics (The Literature [R175]), which is about string theory and string theorists. Smolin finds it troubling that the string theorist subculture in physics shows some signs of groupthink and authoritarianism. Perhaps elder worship too.
I came across this list of attributes, conceived by Smolin, of the ‘sociology’ of the string-theorist contingent:
Continue reading “A Great American Science Writer: Lee Smolin”
The softwarization of engineering continues apace…
I keep seeing people write things like “a major disadvantage of the technique for X is that it requires substantial domain expertise.” Let’s look at a recent good paper that makes many such remarks and try to understand what it could mean, and if having or getting domain expertise is actually a bad thing. Spoiler: It isn’t.
The paper under the spotlight is The Literature [R174], “Interference Suppression Using Deep Learning: Current Approaches and Open Challenges,” published for the nonce on arxiv.org. I’m not calling this post a “Comments On …” post, because once I extract the (many) quotes about domain expertise, I’m leaving the paper alone. The paper is a good paper and I expect it to be especially useful for current graduate students looking to make a contribution in the technical area where machine learning and RF signal processing overlap. I especially like Figure 1 and the various Tables.
Continue reading “The Domain Expertise Trap”
Can we fix peer review in engineering by some form of payment to reviewers?
Let’s talk about another paper about cyclostationarity and correntropy. I’ve critically reviewed two previously, which you can find here and here. When you apply correntropy to a cyclostationary signal, you get something called cyclic correntropy, which is not particularly useful unless you don’t understand regular cyclostationarity and some aspects of garden-variety signal processing. Then it looks great.
But this isn’t a post that primarily takes the authors of a paper to task, although it does do that. I want to tell the tale to get us thinking about what ‘peer’ could mean, these days, in ‘peer-reviewed paper.’ How do we get the best peers to review our papers?
Let’s take a look at The Literature [R173].
Continue reading “Wow, Elsevier, Just … Wow. Comments On “Cyclic Correntropy: Properties and the Application in Symbol Rate Estimation Under Alpha-Stable Distributed Noise,” by S. Luan et al.”
The basics of how to convert a continuous-time signal into a discrete-time signal without losing information in the process. Plus, how the choice of sampling rate influences CSP.
Previous SPTK Post: Random Processes Next SPTK Post: Echo Detection
In this Signal Processing ToolKit post we take a close look at the basic sampling theorem used daily by signal-processing engineers. Application of the sampling theorem is a way to choose a sampling rate for converting an analog continuous-time signal to a digital discrete-time signal. The former is ubiquitous in the physical world–for example, all the radio-frequency signals whizzing around in the air and through your body right now. The latter is ubiquitous in the computing-device world–for example, all those digital-audio files on your Discman Itunes Ipod DVD Smartphone Cloud Neuralink Singularity.
So how are those physical real-world analog signals converted to convenient lists of finite-precision numbers that we can apply arithmetic to? For that’s all [digital or cyclostationary] signal processing is at bottom: arithmetic. You might know the basic rule-of-thumb for choosing a sampling rate: Make sure it is at least twice as big as the largest frequency component in the analog signal undergoing the sampling. But why, exactly, and what does ‘largest frequency component’ mean?
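The "at least twice the largest frequency" rule can be demonstrated numerically. In this sketch (the 100 Hz rate and the 30/70 Hz tones are my illustrative choices), a tone above half the sampling rate produces exactly the same samples as a lower-frequency tone, so no processing can tell them apart afterward:

```python
import numpy as np

fs = 100.0            # sampling rate, Hz
t = np.arange(1024) / fs

def peak_freq(x, fs):
    """Frequency (Hz) of the largest magnitude bin of a real signal's FFT."""
    spec = np.abs(np.fft.rfft(x))
    return np.fft.rfftfreq(len(x), 1.0 / fs)[np.argmax(spec)]

ok = np.cos(2 * np.pi * 30.0 * t)   # 30 Hz < fs/2: represented faithfully
bad = np.cos(2 * np.pi * 70.0 * t)  # 70 Hz > fs/2: aliases to fs - 70 = 30 Hz

print(peak_freq(ok, fs))   # ~30 Hz
print(peak_freq(bad, fs))  # also ~30 Hz: the sample sequences are identical
```

This is aliasing: once the 70 Hz tone is sampled at 100 Hz, its samples coincide with those of a 30 Hz tone, which is exactly why the sampling rate must exceed twice the highest frequency present.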
Continue reading “SPTK: Sampling and The Sampling Theorem”
Let’s take a look at an even faster spectral correlation function estimator. How useful is it for CSP applications in communications?
Reader Gideon pointed out that Antoni had published a paper a year after the paper that I considered in my first Antoni post. This newer paper, The Literature [R172], promises a faster fast spectral correlation estimator, and it delivers on that according to the analysis in the paper. However, I think the faster fast spectral correlation estimator is just as limited as the slower fast spectral correlation estimator when considered in the context of communication-signal processing.
And, to be fair, Antoni doesn’t often consider the context of communication-signal processing. His favored application is fault detection in mechanical systems with rotating parts. But I still don’t think the way he compares his fast and faster estimators to conventional estimators is fair. The reason is that his estimators are both severely limited in the maximum cycle frequency that can be processed, relative to the maximum cycle frequency that is possible.
Let’s take a look.
Continue reading “Update on J. Antoni’s Fast Spectral Correlation Estimator”