What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]

Starts as a personal gripe, but ends with weird stuff from the literature.

During my poking around on arxiv.org the other day (Grrrrr…), I came across some postings by O’Shea et al. that I’d not seen before, including The Literature [R176]: “Wideband Signal Localization and Spectral Segmentation.”

Huh, I thought, they are probably trying to train a neural network to do automatic spectral segmentation that is superior to my published algorithm (My Papers [32]). Yeah, no. I mean yes to a machine, no to nods to me. Let’s take a look.

The problem as described is straightforward: find the time and frequency edges of a modulated RF signal. This problem is analogous to image segmentation, and when a spectrogram is used to develop a solution, the analogy can be very strong indeed–image processing is directly relevant. When we are using a relatively short data sequence to perform the task, the appeal of a spectrogram is lessened, but the analogy to image segmentation is still strong. For this reason I describe my algorithm as an automatic spectral segmentation algorithm, although it also performs the dual operation of temporal segmentation.

Often the obtained segments in frequency, which are just frequency intervals, are called bands of interest (BOIs), and the obtained segments in time, which are just time intervals, are called intervals of interest (IOIs). But terminology varies.

My paper on BOI estimation was published in 2007 and the corresponding CSP Blog post in 2017. Neither is mentioned in The Literature [R176], although the algorithm is directly relevant. When I developed the algorithm, I had looked quite hard for related work–what did others do about the general problem of RF-signal localization in time and frequency? I found several relevant papers and these are cited in My Papers [32]. But they didn’t do what I wanted, so I invented an alternative approach that provided the kind of BOI and IOI estimation that was maximally useful in the context of CSP. Most notably, typical BOI-estimation approaches use a fixed spectral resolution (including [R176]), but this is highly problematic when doing general-purpose RF scene analysis because you might have a wideband scene that contains, simultaneously, multiple closely spaced narrowband signals and several spectrally isolated wideband signals. A large (coarse) spectral resolution forces a low-variance high-bias spectrum estimate, which favors the isolated wideband signals but lumps the closely spaced narrowband signals together. What is needed is a data-adaptive multi-resolution approach, and that is what is in My Papers [32] and the CSP Blog post.

When West et al. started thinking about spectral segmentation, I wonder whether they looked around for algorithms that might either do the job or would be fair to compare theirs against. How hard is it to find my paper and my CSP Blog post? Not hard at all.

Google Searches for Key Words Are Easy and Effective

Since the term ‘spectral segmentation’ appears in the title of [R176], it is natural for the authors to perform literature searches using that term (and also other terms of course). When one uses Google to search for “spectral segmentation,” the first ten or so returned links all relate to image processing (image segmentation). The first non-image-processing link is the CSP Blog post on spectral segmentation, as shown in Figure 1. The reference to the published paper My Papers [32] is easily found in that post, as well as a mathematical description of the algorithm from [32] and lots of performance examples.

Figure 1. The first non-image-processing Google link for a search term of “spectral segmentation” is a CSP Blog post. That post is a recapitulation and extension of My Papers [32], which is directly relevant to The Literature [R176].

If one searches for “automatic spectral segmentation,” the first result is the CSP Blog post.

In the cognitive radio literature, the idea of segmenting a spectral band into subbands that are occupied (‘black spaces’) and those that are not occupied (‘white spaces’) is central to research and practice. The goal is reliable white-space detection, because if you find a truly unoccupied subband, your cognitive radio can transmit in that band. Finding white spaces is the complementary problem to finding black spaces, and finding black spaces is spectral segmentation as defined by West et al. So searching the cognitive radio literature for spectrum-sensing and white-space detection algorithms, implementations, and performance analyses is a necessary element of finding relevant related work.

A Google search for “white space detection” brings up My Papers [32] as the second link, as shown in Figure 2.

Figure 2. Results of a Google search for the term “white space detection.”

Yes, it is the Same Problem

Just to make sure that you realize the problem under study by West et al. is the same as the problem in My Papers [32] and the automatic spectral segmentation post, here are some excerpts from the paper describing the problem.

Figure 3. I agree there is relatively little published work on this problem, but there is some, and it is easy to find. If you look.
Figure 4. Sounds good. I call it band-of-interest and interval-of-interest detection, but signal localization is good too–probably better.

West et al. appear happy to make the assumptions about the nature of signal localization (spectral/temporal segmentation) that I explicitly reject in My Papers [32]: fixed spectral and temporal resolution (see Figure 5).

Figure 5. The explicit assumption that one must use a constant frequency resolution in a signal-localization technique. One does not. And in fact enforcing this assumption leads to significant problems in a general spectral-segmentation setting. See My Papers [32].

The problems with fixed spectral resolution are that (1) the spectral edges of a signal are difficult to estimate with accuracy better than approximately the chosen spectral resolution \Delta f, and (2) closely spaced but spectrally distinct signals are much more likely to be lumped together in a single spectral interval. That is why I developed a multi-resolution data-adaptive method. This method allows me to find the spectral intervals (bands of interest) for a large number of signals in a single RF scene independently of the distribution of their power levels, bandwidths, spectral shapes, and (crucially) their spectral separations. For example, Figure 20 from the automatic spectral segmentation post is reproduced here as Figure 6.

Figure 6. This is Figure 20 from the post on my automatic spectral segmentation algorithm, which is published in My Papers [32]. Go to the post to see many more examples and a thorough explanation of the algorithm.
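To make the fixed-resolution failure mode concrete, here is a toy sketch. This is not the multi-resolution algorithm from My Papers [32]; it is just an illustration, using off-the-shelf Welch PSD estimates and an assumed median-based occupancy threshold, of why a single fixed resolution cannot serve all scene components: two tones spaced more closely than the coarse bin width are lumped into one band, while a finer resolution separates them.

```python
import numpy as np
from scipy.signal import welch

def count_bands(psd, threshold):
    """Count contiguous runs of PSD bins exceeding the threshold."""
    mask = (psd > threshold).astype(int)
    # A band starts wherever the mask steps from 0 to 1.
    return int(np.sum(np.diff(np.concatenate(([0], mask))) == 1))

rng = np.random.default_rng(0)
fs, n = 1.0, 65536
t = np.arange(n) / fs

# Two narrowband signals (tones) spaced 0.01 Hz apart, in white noise.
x = (np.sin(2 * np.pi * 0.20 * t)
     + np.sin(2 * np.pi * 0.21 * t)
     + 0.1 * rng.standard_normal(n))

# Coarse resolution: bin width fs/64 ~ 0.016 Hz > the 0.01-Hz tone spacing.
_, psd_coarse = welch(x, fs=fs, nperseg=64)
# Fine resolution: bin width fs/4096 ~ 0.00024 Hz << the tone spacing.
_, psd_fine = welch(x, fs=fs, nperseg=4096)

# Crude occupancy threshold: a fixed multiple of the median (noise-floor) level.
bands_coarse = count_bands(psd_coarse, 20 * np.median(psd_coarse))
bands_fine = count_bands(psd_fine, 20 * np.median(psd_fine))
print(bands_coarse, bands_fine)  # the coarse estimate lumps the two tones together
```

Of course, the fine resolution that rescues this toy scene would in turn fragment a wideband signal into spurious pieces, which is exactly why a data-adaptive multi-resolution approach is needed.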

Moreover, the algorithm is easily extended to handle the very common situation in which the RF scene data is obtained from a radio receiver with significant filter-rolloff shoulders on either side of the spectrum, as illustrated with a captured FM-broadcast scene in Figure 7.

Figure 7. This is Figure 22 from the automatic spectral segmentation post.

Phantom Dataset

The paper The Literature [R176] mentions the possibility of a publicly available dataset related to the authors’ work on spectral segmentation. See Figure 8. However, I have tried to access this address multiple times over weeks, and it does not exist. See Figures 9 and 10.

Figure 8. A URL for a spectral-segmentation dataset in [R176]. Unfortunately this URL is broken.
Figure 9. The URL does not work.
Figure 10. The website doesn’t even seem to exist.

Weirdness I Can’t Let Go Without Comment

While reading through [R176], I came across a figure intended to situate various signal-processing and machine-learning approaches to ‘spectrum sensing.’ I reproduce it here as Figure 11.

Figure 11. Figure 1 from [R176]. Note the careful placement of the red oval. Not only are data-derived ML sensing models more accurate than any of the other spectrum-sensing methods embodied by the other ovals (which is pretty much all of those in existence), they are in some cases less complex than the energy detector and most of them have complexities comparable to the energy detector. Good to know!

While staring at Figure 11, I figured I knew what “Cyclostationary” was (even though ‘cyclostationary’ is just an adjective), and that I knew what “Match Filtering” was (matched filtering), and I’m pretty sure I know “Energy Detector.” Data-derived ML sensing methods are the focus of the current gold rush in signal processing, but what are “Waveform-based Sensing” and “Radio Identification?”

So I did a Google search for related work. What I found, quickly, was a paper I’d skimmed in the past by Yucek and Arslan (The Literature [R178]). Figure 4 from that paper, which is referenced in [R176] but not in the context of Figure 1 from [R176], is reproduced here as Figure 12.

Figure 12. This is Figure 4 from The Literature [R178], which predates [R176] by over ten years.

Figure 1 in [R176] is accompanied by the text “Figure 1 shows a trade space of these approaches with our perception of accuracy and complexity.” (Emphasis added.)

Helpfully, Yucek and Arslan define what they mean by the terms in the ovals ([R176] does not).

Waveform-based Sensing is matched filtering in which the filter is matched to a knowable segment of the transmitted radio signal, such as a preamble or midamble in a framed signal (e.g., GSM) or a periodically repeated synchronization sequence (e.g., ATSC-DTV). Match Filtering is also matched filtering, but there the authors appear to assume that the entire waveform is known (remarkably unrealistic and inapplicable except in radar, where it is called pulse compression). Matched filtering is impossible to apply when one does not have a repeated known segment of the waveform to work with, and it is highly vulnerable to synchronization problems such as a residual carrier frequency offset (as the authors of [R178] allude to) as well as to interference. But, clearly, Waveform-based Sensing and Match Filtering should have overlapping, if not identical, ovals in the figure.
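To make the Waveform-based Sensing idea concrete, here is a minimal sketch (all parameters are illustrative assumptions, not drawn from [R178]): the matched filter for a known preamble is just a correlation of the received data against that preamble, and the signal is localized in time by the correlation peak.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known transmitted segment (e.g., a preamble): 64 random BPSK symbols.
preamble = rng.choice([-1.0, 1.0], size=64)

# Received data: noise everywhere, with the preamble embedded at one offset.
n_rx = 1024
offset = 300
rx = 0.5 * rng.standard_normal(n_rx)
rx[offset:offset + preamble.size] += preamble

# Matched filter = correlate against the known segment; the peak locates it.
mf_out = np.correlate(rx, preamble, mode="valid")
detected_offset = int(np.argmax(mf_out))
print(detected_offset)
```

Note the fragility: a residual carrier frequency offset would rotate the embedded samples relative to the stored preamble and collapse this correlation peak, which is the synchronization vulnerability mentioned above.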

Radio Identification is feature-based processing of various ad hoc sorts (not a criticism). The authors spend most of their time talking about energy detection and CSP in the Radio Identification context, so it clearly cannot have an oval that does not substantially overlap those for Energy Detector and Cyclostationary.

These kinds of figures are very difficult to construct. One of the reasons is that one wishes to use simple labels, but simple labels such as Cyclostationary encompass a wide range of algorithms with a huge variation in computational cost and applicability, not to mention accuracy. That is, the problem of spectrum sensing (or modulation classification, modulation recognition, automatic signal classification, RF scene analysis, etc.) is a highly multidimensional problem. Reducing it to two dimensions risks extremely misleading characterizations. And that is what we see in [R178], which makes its way into [R176] unchanged except with an addition that claims ML methods blow everything else out of the water. We know that isn’t true, but it is taking researchers from outside the gold rush to document it.

Nevertheless, I’ll persist and offer a diagram of my own in Figure 13. I’m ignoring Radio Identification and Waveform-based Sensing, because those things, apparently, are pretty much special cases of some of the other, more general, bubbles.

Where is machine learning in Figure 13? Hard to say, because the cost and performance of a Data-Derived ML Sensing Model depend on the nature of the training dataset as well as on the neural-network structure and parameters. The Multiple-Signal Cyclic Cumulant Analyzer is applicable to any problem involving any combination of cyclostationary signals in any noise. It may be very hard to construct it so that it actually works for literally any such combination of signals, but that is its inherent nature. The nature of a trained neural network, by contrast, depends heavily on the training dataset–there is no general technique. If your training set consists of examples of two cochannel signals, then there is a chance the network can recognize inputs consisting of two cochannel signals. But it will fail when there is a single signal present, because there is no correct label to choose from. Etc.

Figure 13. A slightly updated diagram I drew a couple years back. I recently added the Matched Filters bubble. Where are the machine-learning bubbles? Anywhere you’d like?

There are three basic lessons from Figure 13. First, CSP is generally expensive. Second, blind algorithms are usually more expensive than non-blind algorithms. This is because the blind methods incorporate one or more searches. Third, there aren’t many options when you need to deal with cochannel signals. I’m sure there are contradictions or errors in the arrangement of the bubbles in Figure 13–like I said, drawing these figures is hard.

There are other bad statements in [R178], but one thing that stands out is Figure 3, reproduced here as Figure 14.

Figure 14. Are these really receiver operating characteristics?

The definitions of the probability of detection, P_D, and the probability of false alarm, P_F, are correct in the paper. But this plot of the (P_D, P_F) pairs for simple energy detection? Not so much. According to these curves, for the three SNRs considered you can find thresholds such that P_D = 1 and P_F = 0. Moreover, you can select a threshold such that P_D is as small as you’d like while P_F approaches one.

In my post on the cycle detectors, I show both histograms of detection statistics and receiver operating characteristics. Some of the former are reproduced here in Figure 15. Of particular relevance to Yucek [R178] are the optimal energy detector (OED) histograms shown in red. The corresponding receiver operating characteristics are shown in Figure 16 (along with some other detectors). These are typical shapes of receiver operating characteristics. They start at the origin (P_D, P_F) = (0,0) and proceed to the point (1,1). For very small thresholds, the probabilities that the detector output exceeds the threshold are one for both signal-present and signal-absent hypotheses. For very large thresholds, the probabilities are both zero–the detector never outputs a large value independently of whether the signal is present or absent.

The Yucek characteristics don’t conform to these basic probability results.

Figure 15. This is Figure 2 from the cycle detectors post. It shows the histograms for a detection statistic for three distinct detectors on each detection hypothesis (H_1 means the signal is present, H_0 means the signal is absent). The energy detector corresponds to the red curves. Note that the distribution of the detection statistic on H_1 is to the right of that for H_0. This is the case for all energy detectors, and all other detectors I know of. It is logically possible to have a good detector for which the statistic is reliably smaller on H_1 than on H_0, but it doesn’t happen in practice.
Figure 16. Receiver operating characteristics for the histograms shown in Figure 15, and for several other detectors not shown in Figure 15. (See the cycle detectors post for more.)
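The threshold-sweep behavior described above is easy to verify with a quick Monte Carlo sketch of a simple energy detector (radiometer); the block length, SNR, and trial count below are arbitrary illustrative choices, not values from [R178].

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 100      # samples per detection block
n_trials = 2000      # Monte Carlo trials per hypothesis

# Energy-detector statistic: sum of squared samples.
noise_h0 = rng.standard_normal((n_trials, n_samples))
signal = rng.choice([-1.0, 1.0], size=(n_trials, n_samples))  # unit-power BPSK
stat_h0 = np.sum(noise_h0**2, axis=1)                         # H_0: noise only
stat_h1 = np.sum((signal + rng.standard_normal((n_trials, n_samples)))**2,
                 axis=1)                                      # H_1: signal + noise

# Sweep thresholds from below the smallest statistic to above the largest.
lo = min(stat_h0.min(), stat_h1.min()) - 1.0
hi = max(stat_h0.max(), stat_h1.max()) + 1.0
thresholds = np.linspace(lo, hi, 101)
p_f = np.array([(stat_h0 > t).mean() for t in thresholds])
p_d = np.array([(stat_h1 > t).mean() for t in thresholds])

# ROC endpoints: (P_F, P_D) = (1, 1) at the low end, (0, 0) at the high end.
print(p_f[0], p_d[0], p_f[-1], p_d[-1])
```

As the threshold sweeps, both probabilities move together from one to zero, tracing a curve from (1, 1) down to (0, 0) with P_D above P_F in between; there is no threshold giving P_D = 1 with P_F = 0, nor one giving a tiny P_D alongside a P_F near one.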

Comments and corrections are welcome below.

Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.

3 thoughts on “What is the Minimum Effort Required to Find ‘Related Work?’: Comments on Some Spectrum-Sensing Literature by N. West [R176] and T. Yucek [R178]”

  1. Looks like for [R176] “Wideband Signal Localization with Spectral Segmentation” there is a comment stating “arXiv admin note: substantial text overlap with arXiv:2110.00518”. The title of the similar arXiv entry is “A Wideband Signal Recognition Dataset,” which seems to have been submitted/accepted to the 2021 IEEE 22nd International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)
    https://ieeexplore.ieee.org/document/9593265.

    Anyways, in the SPAWC version of this paper, they first claim their dataset is located here:
    https://opendata.deepsig.io/datasets/SPAWC.2021/spawc21_wideband_dataset.zip

    This link for their dataset also does not appear to work. However, later in the same paper, they state the dataset is located here:
    https://eval.ai/web/challenges/challenge-page/1057/overview

    This link does appear to be valid, but ultimately points the reader to this link:
    https://github.com/gnuradio/sigmf
    (which also seems to be valid).

    I have not yet tried to figure out how to most appropriately download the dataset (or if the signal dataset indeed exists in the github repository). If someone beats me to it, feel free to report if you got anywhere.

    Cheers!

      1. Well, I wouldn’t have thought to look for this without your post, Chad, so thank you for the post!

        But alas, it turns out the sigmf link https://github.com/gnuradio/sigmf simply provides the details for how they formatted the dataset (which is nice) but does not actually contain the dataset. The second link in their paper https://eval.ai/web/challenges/challenge-page/1057/overview ultimately has a circular link back to the first link in their paper https://opendata.deepsig.io/datasets/SPAWC.2021/spawc21_wideband_dataset.zip which doesn’t work.

        ¯\_(ツ)_/¯

        If anyone is able to obtain the dataset, feel free to let us know.
