As most CSP Blog readers likely know, I’ve performed detailed critical analyses (one, two, three, and four) of the modulation-recognition datasets put forth publicly by DeepSig in 2016-2018. These datasets are associated with some of their published or arxiv.org papers, such as The Literature [R138], which I also reviewed here.
My conclusion is that the DeepSig datasets are as flawed as the DeepSig papers; it was the highly flawed nature of the papers that got me started down the critical-review path in the first place.
A reader recently alerted me to a change in the Datasets page at deepsig.ai that may indicate they are listening to critics. Let’s take a look and see if there is anything more to say.
Here is the updated page at deepsig.ai/datasets:
We see that there are “known errata” but that the datasets are still available for download, as ever. However, each one is now called a “Historical” dataset. And it is true that those datasets (the final one includes the hoary string ‘2018’) are ancient, old news, superannuated. In fact, they all come from that distant, hazy, innocent era known as “Before GPT,” which we’ll just call BGPT. If there are any old-school researchers who care about BGPT material, DeepSig is kindly keeping the flame alive. Fine.
But … there is no mention of the nature of the errata (errors). Typically people use the word errata to denote errors such as omissions or typographical errors, rather than major conceptual errors or massive programming errors, or at least they did BGPT. Those latter errors are more clearly referred to as flaws and bugs, respectively.
The main point is that we get the “mistakes were made” admission but the vibe is “here is the error-filled material anyway, find the mistakes yourself if you care about that sort of historical, merely academic, thing.” Caveat emptor! I wouldn’t, actually, care much about this, except for the fact that lots of people have used this data to make many many many grandiose claims about ML-based modulation-recognition performance as well as relative claims about “the signal-processing state of the art” (about which they know nothing). Remember, this is the sum total of the higher-order moment mathematics put forth in O’Shea’s The Literature [R138]:
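For readers wondering what higher-order moment mathematics for modulation recognition actually involves, here is a minimal sketch of my own (not taken from the DeepSig paper): estimating moments of the form M(n, q) = E[x^(n-q) (x*)^q] from symbol samples. Even a simple fourth-order moment cleanly separates BPSK from QPSK in the noiseless case, which is the kind of structure the classical signal-processing literature exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def moment(x, n, q):
    """Estimate the (n, q)th-order moment E[x^(n-q) * conj(x)^q]."""
    return np.mean(x**(n - q) * np.conj(x)**q)

# BPSK: unit-energy symbols drawn from {+1, -1}
bpsk = rng.choice([1.0, -1.0], N).astype(complex)

# QPSK: unit-energy symbols drawn from {e^{j(pi/4 + k*pi/2)}, k = 0..3}
qpsk = np.exp(1j * (np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, N)))

# Second-order moment M(2,0): ~1 for BPSK, ~0 for QPSK
print(moment(bpsk, 2, 0).real, moment(qpsk, 2, 0).real)

# Fourth-order moment M(4,0): ~1 for BPSK, exactly -1 for QPSK
# (every QPSK symbol raised to the fourth power equals -1)
print(moment(bpsk, 4, 0).real, moment(qpsk, 4, 0).real)
```

In practice one uses cumulants rather than raw moments, and noise, pulse shaping, and carrier offsets complicate the estimates considerably, but the point stands: there is a substantial body of moment/cumulant theory that a two-moment treatment does not begin to represent.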
Regarding all those learners and their claims, a simple Google Scholar search reveals The Literature [R138] is cited by at least 1078 papers. (I feel like I’ve had to slog through half of those myself.)
So does DeepSig care about those 1078 researchers (really a couple thousand, since hardly any papers are single-author papers)? What about all the other researchers, students, and practicing engineers who read those papers and came away with certain rosy conclusions about ML for MR? Are all those papers’ conclusions invalidated by the errata admission or not? Do we care about these people and their system-design choices? What about the researchers and students who are, right now at this very moment, trying out their networks on these datasets in an effort to outperform one of those 1078 ML/MR performance papers? What should they be told?
Why not just tell us what the errors are?
Where is the link to the “known errata”?
(DeepSig: Feel free to use these: All BPSK Signals, More on DeepSig Datasets, 2018 RML, One Last Time.)
Why are DeepSig’s fellow machine learners being treated this way?
h/t Steve F.
One thought on “Update on DeepSig Datasets”
I was born in 34 BGPT.
I’m lucky enough to have found this site before the DeepSig dataset. I have, however, worked with many companies that based their research on this paper and dataset. None of them did well. I can attest that this has wasted years of research and millions of dollars.