In this post, I discuss a signal-processing algorithm that has almost nothing to do with cyclostationary signal processing (CSP). Almost. The topic is automated spectral segmentation, which I also call band-of-interest (BOI) detection. When attempting to perform automatic radio-frequency scene analysis (RFSA), we may be confronted with a data block that contains multiple signals in a number of distinct frequency subbands. Moreover, these signals may be turning on and off within the data block. To apply our cyclostationary signal processing tools effectively, we would like to isolate these signals in time and frequency to the greatest extent possible using linear time-invariant filtering (for separating in the frequency dimension) and time-gating (for separating in the time dimension). Then the isolated signal components can be processed serially using CSP.
It is very important to remember that even perfect spectral and temporal segmentation will not solve the cochannel-signal problem. It is perfectly possible that an isolated subband will contain more than one cochannel signal.
The basics of my BOI-detection approach are published in a 2007 conference paper (My Papers ). I’ll describe this basic approach, illustrate it with examples relevant to RFSA, and also provide a few extensions of interest, including one that relates to cyclostationary signal processing.
The efficacy of our cyclostationary signal processing (CSP) algorithms for any particular data record depends on several factors, including the noise power, interference power, processing block length, and signal type. In general, it is better to apply CSP algorithms to a signal that has been filtered so that adjacent-channel signals and out-of-band noise are minimized. This just means that we want to maximize the “total SINR” of the data prior to CSP processing. By “total SINR” I simply mean the ratio of the signal power to the power of everything else in the data record, whether it has frequency content in the signal’s band or not.
This aim isn’t so hard to achieve when the input data consists of a single signal in white Gaussian noise (WGN). Then we need only apply a linear time-invariant filter to the data, where the center frequency and bandwidth of the filter match the center frequency and bandwidth of the signal. For example, consider a QPSK signal in noise. The PSD looks like this:
If we apply a simple Fourier-based linear time-invariant filter to that QPSK data, where the center frequency is zero and the width is Hz, we remove almost all of the out-of-band noise and retain the signal:
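As a concrete illustration, here is a minimal sketch of such a Fourier-based bandpass operation in Python (the function name and parameters are mine, chosen for illustration, not taken from any particular library):

```python
import numpy as np

def fourier_bandpass(x, fc, bw, fs=1.0):
    """Crude Fourier-based LTI bandpass filter: transform the data,
    zero every FFT bin outside [fc - bw/2, fc + bw/2], and invert.
    Frequencies are in the same units as fs."""
    N = len(x)
    X = np.fft.fft(x)
    f = np.fft.fftfreq(N, d=1.0 / fs)
    mask = np.abs(f - fc) <= bw / 2.0
    return np.fft.ifft(X * mask)
```

For the QPSK example, one would center the filter at zero frequency with a width matching the signal bandwidth; everything outside the mask, including most of the out-of-band noise, is removed.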
Removing the out-of-band noise won’t help much with estimating the spectral correlation function for the QPSK signal itself, because that will depend on the spectral components of the data that do in fact lie in the signal band. On the other hand, the SCF for frequencies that lie well outside the signal’s band is ideally zero, and will be easier to estimate when the out-of-band noise is removed.
But what about the temporal parameters like the cyclic autocorrelation and the cyclic cumulants? Well, they will be more sensitive to the presence of the out-of-band noise, because each point estimate takes into account all values of the input data block, so it is important to remove as much of it as possible prior to applying the estimator for the time-domain parameter of interest.
So we have reasons for wanting to apply simple signal processing methods to simple RF scenes like the QPSK signal above: increasing the total SINR improves estimator performance for fixed estimator parameters such as block length, spectral resolution, etc. And increasing the total SINR means effectively locating the signal in the frequency domain and then applying a bandpass filter at that location.
The problems for our estimators get worse, however, when the scenarios aren’t simple.
Densely Populated RF Bands
In RF scene analysis, the goal of the signal processing algorithm is to blindly detect, classify, and characterize all signals present in the scene. No matter what the scene contains. Here is a sequence of PSDs for scenes of interest, where the scene complexity (measured by things like the number of signals present, the SINRs, and the noise-floor deviation from ideal) is increasing:
By the time we get to the final few examples, we see collection-system filter roll-offs on the edges of the band, non-flat noise floors, high dynamic ranges in signal power and signal bandwidth, and closely spaced signals in frequency.
Distortion of Estimated Cumulants Due to Time-Gating
There is another reason to isolate a signal in the time and frequency domains prior to processing with CSP: distortion of sets of cyclic cumulants caused by partial signal temporal occupancy within a processed data block. (Awkwardly put, I know.)
Suppose we have a signal that exists only during a subset of the time instants corresponding to some sampled-data record. Let the subset occupy a fixed fraction of the data record, which itself spans the continuous-time interval ,
Here and is stationary white Gaussian noise. We use because it allows us to fix the fraction of time that the signal is on, relative to the interval , so we can talk sensibly about the values of estimators as . For example, if and , then and we have our usual case where the signal persists over the entire observation (integration) interval. On the other hand, if and , then and the signal persists only over half of the observation interval.
Now we know that the temporal cumulant function is a particular sum of products of lower-order temporal moment functions,
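For reference, the moment-to-cumulant relation takes the following standard form in the partition notation of the cyclic-cumulants post (restated here as a reminder; consult that post for the precise definitions):

```latex
C_x(t, \boldsymbol{\tau}; n, m) = \sum_{P_n} \left[ (-1)^{p-1} (p-1)! \prod_{j=1}^{p} R_x(t, \boldsymbol{\tau}_{\nu_j}; n_j, m_j) \right]
```

Here the sum is over all distinct partitions \(P_n = \{\nu_1, \nu_2, \ldots, \nu_p\}\) of the index set \(\{1, 2, \ldots, n\}\), and each factor is a lower-order temporal moment function.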
(See the post on cyclic cumulants for more information on this notation.)
Now the cumulant for is the sum of the cumulant for and , and the latter is zero for , so we focus on the cumulant for . We’ll need the various lower-order cyclic moments (Fourier components of the time-varying moment ),
Or, using and ,
In the limit as , we have
So the cyclic moment for the signal that persists only over a fraction of the observation interval is a scaled version of that for the signal that persists over the entire interval, and the scaling factor is quite reasonably .
This implies that the temporal moment function is also scaled by ,
Keeping in mind that for higher orders the cumulant for is that for , we have
Unlike the temporal moment, the temporal cumulant is not simply a scaled version of that for the fully persistent signal; it is warped or distorted by the inclusion of , which depends on the partition size unless (the case in which the signal is persistent over the whole block).
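To see the distortion explicitly, recall that each lower-order moment of the partially persistent signal is scaled by the on-fraction (call it \(\lambda\); the symbol is my shorthand here). A partition containing \(p\) moment factors therefore picks up a factor of \(\lambda^p\), so the observed cumulant is

```latex
C_x(t, \boldsymbol{\tau}; n, m) = \sum_{P_n} \left[ (-1)^{p-1} (p-1)! \, \lambda^{p} \prod_{j=1}^{p} R_s(t, \boldsymbol{\tau}_{\nu_j}; n_j, m_j) \right] \neq \lambda\, C_s(t, \boldsymbol{\tau}; n, m),
```

with equality only when \(\lambda = 1\), since different partitions have different sizes \(p\).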
So, processing involving linear combinations of multiple orders of moments is insensitive to , but processing involving cumulants is not, because cumulants involve different products of moments. Since my modulation-recognition algorithm is cumulant-based (see here, but more coming in a future post as well), this is an actual problem for me.
All of this implies that we need to include an interval-of-interest (IOI) estimator as well as a band-of-interest estimator in our spectral-segmentation algorithm. That way our calculations will always involve once we estimate the IOI and time-gate the signal component accordingly.
A few others have created automatic spectral segmentation algorithms, or the equivalent, such as automatic noise-floor estimators that work for dense (crowded) RF scenes. One is The Literature [R96], which applies morphological image-processing operators (like erosion and dilation) to spectral data. Others, like The Literature [R97], advocate sophisticated high-resolution/high-accuracy spectrum estimators such as the multi-tapering method of The Literature [R98]. I think we do not need high spectrum-estimate accuracy here; the goal is to estimate the frequency intervals over which non-noise energy exists, not to obtain a highly accurate spectrum estimate for all frequencies in the analysis band. So I’m of the opinion that high-performance spectrum estimators are out of place in the general band-of-interest estimation problem.
My Solution: Multiresolution Adaptive Spectrum Estimation
I outline the solution in My Papers . The idea is to recursively apply a simple, low-cost non-parametric power spectrum estimator to the input data record, which is assumed to consist of complex samples corresponding to the complex envelope of some RF data (like the various spectra above).
Why “recursively”? In some cases, a wideband RF scene may contain several signals that are closely spaced in frequency. To see that there is a small gap between them requires that the spectral resolution of the spectrum estimator lies in a particular range. On the other hand, the scene may also (or only) contain a wideband signal with significant spectral variation across its bandwidth. In this latter case, there is only one band of interest, but the variability of a high-resolution spectrum estimate can make it appear that it is composed of multiple closely spaced bands of interest.
In other words, the required spectral resolution depends on the number and spacing of the signals in the scene, and we don’t want to make many assumptions about that.
What we do instead is to start out by applying a coarse spectral resolution to the data record, find the minima to estimate the noise floor, and declare everything else as bands of interest. Then, we apply the same algorithm to each of the detected BOIs, after filtering, frequency shifting, and decimating each one. This process will reveal which of the coarse lumps from the first spectrum estimate are really composed of multiple closely spaced BOIs and which are not.
I’ve found that this is a tricky idea to get across, so before we outline the mathematics of the algorithm, I want to further describe the multi-resolution idea with some pictures.
Consider the following multi-signal RF scene:
This scene contains individual signals, none of which overlap in frequency. This first PSD estimate for our -signal scene used the frequency-smoothing method and a smoothing-window width (spectral resolution) of Hz ( Hz in normalized frequency). With this resolution, our eyes can count bands of interest, and we suspect the ones near kHz may contain several more. But the resolution is too coarse to reveal them in this picture.
More generally, let’s look at a sequence of PSD estimates for this scenario:
In this sequence of PSD estimates, the spectral resolution is decreasing from top to bottom (the smoothing-window width in the FSM is increasing). The bands of interest are detected using only a single PSD estimate; no recursive processing is allowed. This shows that basic spectral segmentation algorithms fail when the PSD spectral resolution is not well-matched to the spectral variation exhibited by the underlying signals in the data.
The idea behind the automatic multi-resolution spectral segmentation algorithm is to recursively apply a standard non-parametric PSD estimator (like the FSM); I also call this telescoping spectral estimation. For this -signal example, telescoping spectral segmentation is illustrated by the following diagram:
When used on our -signal example, we obtain the following sequence of results:
The indicated spectral resolution applies to the initial application of the non-parametric PSD estimator. Subsequent applications, as the telescoping action proceeds, use an effective resolution that has the same relation to the local sampling rate that the initial resolution has to the original sampling rate of kHz.
Here we see that the ability to correctly detect the bands of interest and to estimate the noise floor is only weakly dependent on the choice of spectral resolution.
We first provide a fixed-resolution algorithm that finds bands of interest by defining them as arbitrary-shaped bumps that rise up from the noise floor. The multi-resolution version will follow.
In this algorithm statement, the th BOI has endpoints given by .
- Obtain data samples , .
- Specify resolution (normalized frequency units).
- Let . This is the width of the smoothing window in frequency bins.
- Compute spectrum estimate with resolution product , denote by , .
- is the minimum of for .
- Denote by the noise-floor threshold .
- For each :
- Set .
- For each , if and , set . If and , set and increment by .
- If , then . (Handles BOI at left edge of PSD.)
- If , then . (Handles BOI at right edge of PSD.)
- Estimate noise floor by computing the average value of PSD for all frequency points not assigned to a BOI interval.
- Estimate power for BOI by summing PSD values for all frequency points assigned to BOI .
- Remove weak BOIs, resulting in BOIs.
- BOI intervals are , .
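The steps above can be sketched in Python as follows. This is a simplified interpretation: the threshold rule, the weak-BOI test, and the parameter names are illustrative choices of mine, not exactly those in the paper.

```python
import numpy as np

def detect_bois(x, resolution=0.01, threshold_db=3.0, min_power_frac=0.01):
    """Fixed-resolution band-of-interest detection: FSM spectrum estimate,
    minimum-based noise-floor threshold, contiguous supra-threshold bins
    grouped into BOIs, weak BOIs discarded.

    Returns (boi_list, noise_floor); each BOI is a tuple
    (f_start, f_stop, power) in normalized frequency."""
    N = len(x)
    M = max(1, int(round(resolution * N)))             # smoothing width in bins
    I = np.abs(np.fft.fftshift(np.fft.fft(x)))**2 / N  # periodogram
    S = np.convolve(I, np.ones(M) / M, mode='same')    # FSM PSD estimate
    thresh = S.min() * 10.0**(threshold_db / 10.0)     # noise-floor threshold
    above = S > thresh
    # group contiguous supra-threshold frequency bins into candidate BOIs
    bois, k = [], 0
    while k < N:
        if above[k]:
            k0 = k
            while k < N and above[k]:
                k += 1
            bois.append((k0, k - 1))
        else:
            k += 1
    # noise floor: average PSD over bins not assigned to any BOI
    noise_floor = S[~above].mean() if (~above).any() else S.min()
    # BOI powers; remove weak BOIs
    powers = [S[a:b + 1].sum() for (a, b) in bois]
    pmax = max(powers, default=0.0)
    out = [((a - N / 2) / N, (b - N / 2) / N, p)
           for (a, b), p in zip(bois, powers) if p >= min_power_frac * pmax]
    return out, noise_floor
```

The returned endpoint pairs play the role of the BOI endpoints in the algorithm statement; the weak-BOI test here keeps only bands above a fraction of the strongest BOI's power, which is just one reasonable choice.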
We build on this fixed-resolution algorithm to create the recursive multi-resolution algorithm in the following way:
- Obtain data samples , .
- Specify nominal resolution .
- Set , , and .
- Set .
- Set .
- Set .
- If , set and Goto Step 13.
- Extract the BOI data from that corresponds to . Bandpass filter, downconvert, and decimate this data so that its fractional bandwidth is maximized and it is centered at zero frequency. The resulting data is .
- Estimate the PSD for .
- Use the fixed-resolution BOI detection algorithm with to find BOIs in . Let the number of detected BOIs be .
- Assign detected-BOI values to , .
- If and , set , indicating the BOI is completed, else set for .
- If , then Goto Step 7.
- Set .
- If for any , Goto Step 6 else Stop.
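The extraction step (Step 8: bandpass filter, downconvert, decimate) can be sketched like this; the decimation-factor rule is an illustrative choice of mine that keeps the band roughly two-times oversampled:

```python
import numpy as np

def extract_boi(x, f1, f2):
    """Extract the band [f1, f2] (normalized frequencies), shift it to
    zero frequency, and decimate to maximize the fractional bandwidth.
    Returns the decimated data and the decimation factor."""
    N = len(x)
    fc = 0.5 * (f1 + f2)                           # band center
    bw = f2 - f1                                   # band width
    X = np.fft.fft(x)
    f = np.fft.fftfreq(N)
    y = np.fft.ifft(X * ((f >= f1) & (f <= f2)))   # brick-wall bandpass
    y *= np.exp(-2j * np.pi * fc * np.arange(N))   # downconvert to 0 Hz
    D = max(1, int(0.5 / bw))                      # keep ~2x oversampling
    return y[::D], D
```

The recursion then runs the fixed-resolution detector on the decimated output, where the BOI occupies a much larger fraction of the new, lower sampling rate.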
When we apply this algorithm to the -BOI data file in a Monte Carlo simulation, we obtain the following performance graphs:
Here the total signal power is about dB, and the total power of the noise varies from dB to dB.
Let’s apply the algorithm to the RF scenes I showed in the first several figures of the post. In all cases, the number of processed samples is (regardless of the sampling rate) and the spectral resolution at each stage of recursion is times the effective sampling frequency at that stage.
Several of the RF scenes above possess noise floors that are not flat throughout the analysis bandwidth. In particular, they have noise floors that decrease rather rapidly at the edges of the band. This is typically due to the action of a receiving system, which applies filtering prior to sampling. The applied filter bandwidth is smaller than the complex sampling rate, so that the band edges end up containing little energy. The noise is therefore not white (relative to the sampling rate), and so it is typically called colored. There are other deviations from white noise as well, such as tilted noise floors or those that contain ripples of various sorts. The most common is the filter-roll-off band-edge coloration, and the band-of-interest detector can handle that at the expense of increased computational cost.
The idea is to apply the basic algorithm to the data but restrict it to operate on a subband such as . Keep a count of the number of bands that are detected for each of several values of . Retain only those bands that correspond to the maximum count. When , the stated algorithm above will return a single band of interest, which is nearly the entire analysis band, because it will identify the noise floor as the value of the spectrum estimate very near . In other words, the entire spectrum looks like one big band of interest having a very small noise floor. On the other hand, when is such that only the flat part of the noise floor is considered, the algorithm will operate effectively and return all the true bands of interest that exist in the flat part (a count greater than or equal to one).
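Here is a sketch of that band-edge-restriction idea, operating directly on a PSD array for brevity; the subband grid, threshold, and synthetic spectrum are all illustrative:

```python
import numpy as np

def count_bois(S, freqs, beta, threshold_db=3.0):
    """Count contiguous supra-threshold runs of the PSD restricted
    to the subband |f| <= beta."""
    Ssub = S[np.abs(freqs) <= beta]
    above = Ssub > Ssub.min() * 10.0**(threshold_db / 10.0)
    return int(above[0]) + int(np.sum(above[1:] & ~above[:-1]))

# Synthetic colored-noise PSD: flat floor with band-edge roll-off,
# plus two signal bumps.
N = 1024
freqs = np.fft.fftshift(np.fft.fftfreq(N))
S = np.where(np.abs(freqs) < 0.4, 1.0, 1e-4)     # roll-off outside |f| < 0.4
for fc in (-0.2, 0.2):
    S = S + 100.0 * (np.abs(freqs - fc) < 0.02)  # two bands of interest

# Sweep the subband half-width and keep the value maximizing the BOI count
best = max((0.5, 0.45, 0.4, 0.35, 0.3),
           key=lambda b: count_bois(S, freqs, b))
```

Retaining the subband that maximizes the count excludes the rolled-off band edges, so the noise-floor estimate comes from the flat portion of the spectrum.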
To perform the localization of signal components in the time domain, which I have called interval-of-interest estimation, I simply apply the algorithm developed for the band-of-interest estimation to the time-domain data itself. This is done for each detected band of interest separately. Interval-of-interest estimation is just the time-frequency dual algorithm to band-of-interest estimation.
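A sketch of that time-domain dual, applied to the smoothed instantaneous power of one band's time series (the smoothing and threshold choices are illustrative):

```python
import numpy as np

def detect_iois(x, resolution=0.01, threshold_db=6.0):
    """Interval-of-interest detection: threshold the smoothed instantaneous
    power and group contiguous supra-threshold samples into intervals."""
    N = len(x)
    M = max(1, int(round(resolution * N)))
    p = np.convolve(np.abs(x)**2, np.ones(M) / M, mode='same')
    above = p > p.min() * 10.0**(threshold_db / 10.0)
    iois, k = [], 0
    while k < N:
        if above[k]:
            k0 = k
            while k < N and above[k]:
                k += 1
            iois.append((k0, k - 1))
        else:
            k += 1
    return iois
```

Each returned interval can then be time-gated out of the band's data before the CSP estimators are applied.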
Multi-Resolution Spectral Correlation Estimation for Targeted-Signal BOI Detection
The algorithm as discussed so far employs only power spectrum estimation–no cyclostationarity! A CSP extension is to replace the spectrum estimates with spectral correlation estimates. In fact, the entire algorithm above can be viewed as exploiting non-conjugate spectral correlation at a cycle frequency of . Why not choose a different value for ? Such as one that corresponds to a cycle frequency for a signal that is being searched for? Then all bands that do not possess the desired cyclostationarity are ignored and are not processed, potentially saving significant computation.
There are a few complications, but the algorithm works for non-zero . Let’s apply it to each of the scenes above, tailoring the choice for according to the signals that we know exist in the scenes.
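A sketch of the modification: replace the FSM PSD estimate with a frequency-smoothed cyclic periodogram evaluated at the chosen cycle frequency (for a cycle frequency of zero this reduces to the ordinary PSD estimate; the smoothing width is illustrative):

```python
import numpy as np

def cyclic_psd(x, alpha, M=64):
    """Frequency-smoothed estimate of the magnitude of the non-conjugate
    spectral correlation function at cycle frequency alpha (normalized
    units). Feeding this to the BOI detector in place of the PSD keeps
    only bands exhibiting cyclostationarity at alpha."""
    N = len(x)
    n = np.arange(N)
    # frequency-shift the data by +/- alpha/2 and form the cyclic periodogram
    U = np.fft.fft(x * np.exp(-1j * np.pi * alpha * n))
    V = np.fft.fft(x * np.exp(+1j * np.pi * alpha * n))
    S = U * np.conj(V) / N
    return np.abs(np.convolve(S, np.ones(M) / M, mode='same'))
```

For a rectangular-pulse BPSK signal with a symbol rate of one-eighth the sampling rate, this statistic is large near the signal's band when alpha equals the symbol rate, and small for nearby non-cycle-frequency values of alpha.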
First, the simple single-signal scene involving QPSK. It has a symbol rate of , so we use that as the cycle frequency:
Next up is the hybrid FH/DSSS signal, which possesses many cycle frequencies, but a prominent one is . Using this cycle frequency results in the following detection:
Notice that here there is a single detected band of interest rather than the two distinct bands produced by the main algorithm.
Next is the scene with the narrowband PSK/QAM signals. The leftmost signal in this scene is BPSK, and the fourth, seventh, etc. signals are also BPSK. The second, fifth, eighth, etc. are QPSK, and the remainder are SQPSK. The bandwidth of the signals is monotonically decreasing as the signal index moves from left to right, so that every signal has a unique symbol rate, which means a unique cycle frequency. We choose the bit rate of the third BPSK signal, the seventh signal from the left, which is . The result is:
Now we turn to the scene containing an FH signal with intermittent wideband interferers. We choose the symbol rate of one of the interferers (situated near Hz), which is :
Next is the scene with a large number of signals of mixed bandwidth. We choose the symbol rate for the two signals between and , which is , and this rate is not a cycle frequency for any of the other signals:
Turning now to captured data, we consider the PCS scene and choose the chip rate for CDMA, which is 1.2288 MHz. This cycle frequency is exhibited by the CDMA signals at and MHz, as well as by the CDMA-EVDO signal at MHz:
When we switch the cycle frequency to the WCDMA chip rate of MHz, we obtain the following bands of interest:
which makes sense because the two signals near MHz are indeed WCDMA.
The broadcast FM signals can exhibit a cycle frequency of kHz due to their digital components:
As usual, if you find any errors in the post, please let me know in the comments. Questions are welcome, as always. I’m also very interested in knowing about any papers in the literature that present solutions to this general spectral segmentation problem.