Let’s take a look at a class of signal-presence detectors that exploit cyclostationarity and in doing so illustrate the good things that can happen with CSP whenever cochannel interference is present, or noise models deviate from simple additive white Gaussian noise (AWGN). I’m referring to the cycle detectors, the first CSP algorithms I ever studied.
Cycle detectors are signal-presence detectors. The basic problem of interest is a binary hypothesis-testing problem, typically formulated as

$H_0: x(t) = w(t)$

$H_1: x(t) = s(t) + w(t),$

where $s(t)$ is the signal to be detected, if present, and $w(t)$ is white Gaussian noise. We’ll look at some variations on these hypotheses later in the post.
The idea is to construct a signal processor that operates on the received data to produce a decision about the presence or absence of the signal of interest (“signal to be detected”) $s(t)$. Such processors usually produce a real number (a detection statistic) that is generally much different on $H_1$ than it is on $H_0$. The common case is that the statistic is relatively large on $H_1$ and relatively small on $H_0$, but that isn’t required: it could be consistently small on $H_1$ and large on $H_0$.
A typical mathematical approach to this decision-making problem is to model the signals $s(t)$ and $w(t)$ so that their probabilistic structures are simple and easy to manipulate mathematically. This has led to the very common model in which $s(t)$ is a stationary random process that is statistically independent of the stationary random process $w(t)$, which is itself Gaussian and white (it is additive white Gaussian noise [AWGN]). Further simplifications can be had in some cases by assuming that the average power of $s(t)$ is much smaller than that of $w(t)$, which is the weak-signal assumption (My Papers ) common in signal-surveillance and cognitive-radio settings.
Of course, this is the CSP Blog, so we’ll be modeling the signals of interest as cyclostationary random processes, and by doing so we’ll be able to obtain detectors that are noise- and interference-tolerant.
Detectors for Stationary Signal Models
Throughout this post we are concerned with detecting signals on the basis of their gross statistical nature. This idea contrasts with another, often successful, approach that is based on exploiting some known segment of the waveform. For example, a signal may periodically transmit a known sequence (or one of a small number of known sequences) so that the intended receiver can estimate the propagation channel and compensate for it (equalization), or so that the receiver can perform low-cost reliable synchronization tasks. In this post, we assume the signal to be detected has no such “known-signal components.” When such components are present, an unintended receiver can detect the signal of interest by performing matched filtering against them–but matched filtering is not applicable here due to the nature of the signal.
For a signal that is modeled as stationary, a gross statistical characteristic is its power spectral density (PSD) or its average power (the integral of the signal’s PSD). Detectors that attempt to decide between $H_0$ and $H_1$ on the basis of power or energy are called energy detectors or radiometers.
A simple energy detector is just the sum of the magnitude-squared values of the $T$ observed signal samples,

$\Lambda_{ED} = \sum_{t=0}^{T-1} |x(t)|^2.$
This detector does not take into account the distribution of the signal’s energy in the time or frequency domains—it’s just raw energy. It can be highly effective and has low computational cost, but it suffers greatly when the noise or the signal has time-varying behavior such as that caused by time-variant propagation channels, interference, or background noise. It can also suffer even when all of these elements are time-invariant but some of them are simply unknown or their presumed-known values are in error.
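As a concrete sketch (in Python with NumPy; the function name and the tone-plus-noise test signal are my own illustrative choices, not material from the post), the simple energy detector is a one-liner:

```python
import numpy as np

def energy_detector(x):
    """Simple energy detector: sum of the magnitude-squared samples."""
    return np.sum(np.abs(x) ** 2)

# Illustrative comparison: noise-only (H0) versus signal-plus-noise (H1)
rng = np.random.default_rng(0)
N = 10000
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # unit power
tone = np.exp(2j * np.pi * 0.1 * np.arange(N))  # stand-in unit-power signal
stat_h0 = energy_detector(noise)
stat_h1 = energy_detector(tone + noise)
```

With unit-power signal and unit-power noise, the H1 statistic is roughly twice the H0 statistic for large block lengths.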
The energy and power of a signal are related by a scale factor that is equal to the temporal duration of the measurement ($T$ above). That is, power is energy per unit time. So we can talk about energy detection or power detection, and they are pretty much the same thing. Another way to get at the power of the signal is to integrate the PSD,

$P = \int \hat{S}_x(f) \, df,$

where $\hat{S}_x(f)$ is an estimate of the signal PSD $S_x(f)$. If the signal is oversampled (relative to its Nyquist rate), then the PSD estimate will correspond to a frequency range that contains some noise-only intervals, typically the intervals near the band edges. The power from those noise-only frequency intervals will be included in $P$ along with the power from the signal-plus-noise interval, which degrades the statistic in proportion to the amount of oversampling.
In contrast to the simple ED, the optimal energy detector (for a signal that is weak relative to the noise) weights the estimated PSD by the true one for $s(t)$, effectively de-emphasizing those noise-only intervals, and emphasizing those intervals throughout the signal’s band having the larger signal-to-noise ratios,

$\Lambda_{OED} = \int \hat{S}_x(f) S_s(f) \, df,$

where $S_s(f)$ is the ideal PSD of $s(t)$. This statistic is sometimes called the optimum radiometer.
When the exact form of the PSD for $s(t)$ is not known (perhaps the carrier frequency is only roughly known, or the pulse-shaping function is not known in advance), the ideal PSD can be replaced by the PSD estimate, forming the detection statistic

$\Lambda_{SED} = \int \left[ \hat{S}_x(f) \right]^2 \, df.$

I call this detector the suboptimal energy detector (SED).
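The three PSD-based statistics can be sketched as follows, using a crude averaged-periodogram PSD estimate. This is my own illustrative code (the function names, the segment length, and the test signal are assumptions, not material from the post):

```python
import numpy as np

def psd_estimate(x, nfft=256):
    """Averaged periodogram over non-overlapping nfft-point segments."""
    segs = x[: (len(x) // nfft) * nfft].reshape(-1, nfft)
    return np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2, axis=0) / nfft

def simple_ed(x, nfft=256):
    """Integrate the measured PSD (equivalent to total measured power)."""
    return np.sum(psd_estimate(x, nfft))

def optimal_ed(x, ideal_psd, nfft=256):
    """Weight the measured PSD by the ideal signal PSD (optimum-radiometer form)."""
    return np.sum(psd_estimate(x, nfft) * ideal_psd)

def suboptimal_ed(x, nfft=256):
    """Replace the ideal PSD with the measured one: integrate the squared PSD."""
    return np.sum(psd_estimate(x, nfft) ** 2)
```

Note how the OED de-emphasizes frequency bins where `ideal_psd` is small, so oversampling (noise-only band edges) costs it much less than it costs the simple ED.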
Detectors for Cyclostationary Signal Models (Cycle Detectors)
The various detectors obtained through mathematical derivation using a cyclostationary (rather than stationary) signal model are collectively referred to as cycle detectors. These detectors can be derived in a variety of ways. Perhaps the most familiar is through likelihood analysis, in which a likelihood ratio is formed and approximated for the weak-signal case. See The Literature ([R7], [R65]) and My Papers () for derivations.
The optimum weak-signal detector structure is called the optimum multicycle detector, and it is expressed as the sum of individual terms that contain correlation operations between measured and ideal spectral correlation functions,

$\Lambda_{MCD} = \sum_{\alpha} \int \hat{S}_x^{\alpha}(f) \left[ S_s^{\alpha}(f) \right]^* \, df.$

So we sum up the complex-valued correlations between the measured spectral correlation function $\hat{S}_x^{\alpha}(f)$ and the ideal spectral correlation function $S_s^{\alpha}(f)$ for all cycle frequencies $\alpha$ exhibited by $s(t)$. A single term from the optimum multicycle detector is the optimum single-cycle detector,

$\Lambda_{SCD}(\alpha) = \int \hat{S}_x^{\alpha}(f) \left[ S_s^{\alpha}(f) \right]^* \, df.$
The suboptimal versions of the multicycle and single-cycle detectors replace the ideal spectral correlation function with the measured spectral correlation function, essentially measuring the energy in the measured spectral correlation function for one (single-cycle) or more (multicycle) cycle frequencies. So the suboptimal single-cycle detector is

$\Lambda_{SSD}(\alpha) = \int \left| \hat{S}_x^{\alpha}(f) \right|^2 \, df.$
However, the multicycle detector is more subtle. Even if we knew the formula for the ideal spectral correlation function for the modulation type possessed by $s(t)$, we’d still have a problem with the coherent sum in the multicycle detector. The problem is that each term in the sum is a complex number whose phase depends on the phases of the values (over spectral frequency $f$) of the estimated and ideal spectral correlation functions. These phases are sensitive to the symbol-clock phase and carrier phase of the signal. In other words, the derived detector structure uses the assumed synchronization (timing) parameters for the signal exactly as they appear in the hypothesis. If we use the proper form of the spectral correlation function, but the synchronization/timing parameters used in creating the ideal functions differ from those associated with the observed signal, the complex-valued terms in the multicycle sum can add destructively–rather than constructively. This degrades the detector performance.
We’re in the unfortunate position of estimating timing parameters for a signal we have not yet detected.
So, the suboptimal version of the multicycle detector sums the magnitudes of the individual terms, rather than summing the complex-valued terms. This obviates the need for high-quality estimates of the synchronization parameters of the signal.
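Here is one way to sketch the suboptimal single-cycle and incoherent multicycle statistics in Python. The spectral correlation estimate is a frequency-smoothed cyclic periodogram; the function names, smoothing width, and rectangular-pulse test signal are my own illustrative choices, not code from the post:

```python
import numpy as np

def cyclic_periodogram(x, alpha):
    """Cyclic periodogram X(f + alpha/2) X*(f - alpha/2) / N, alpha in cycles/sample.
    The +/- alpha/2 spectral shifts are applied as time-domain phase ramps."""
    N = len(x)
    t = np.arange(N)
    u = x * np.exp(-1j * np.pi * alpha * t)  # spectrum shifted down by alpha/2
    v = x * np.exp(+1j * np.pi * alpha * t)  # spectrum shifted up by alpha/2
    return np.fft.fft(u) * np.conj(np.fft.fft(v)) / N

def single_cycle_detector(x, alpha, width=33):
    """Suboptimal single-cycle detector: energy in the frequency-smoothed
    spectral correlation estimate at cycle frequency alpha."""
    scf = np.convolve(cyclic_periodogram(x, alpha), np.ones(width) / width, mode="same")
    return np.sum(np.abs(scf) ** 2)

def incoherent_multicycle_detector(x, alphas, width=33):
    """Incoherent (suboptimal) multicycle detector: sum the non-negative
    single-cycle statistics over a set of cycle frequencies, avoiding any
    need for synchronization-parameter estimates."""
    return sum(single_cycle_detector(x, a, width) for a in alphas)
```

For a rectangular-pulse BPSK signal with ten samples per bit, the statistic at the true symbol-rate cycle frequency of 0.1 is much larger than the statistic at a false cycle frequency.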
Finally, let’s consider the delay-and-multiply detectors. These are detectors that use a simple delay-and-multiply device to generate a sine wave. Then the presence or absence of the sine wave is detected by examining the power in a small band of frequencies centered at the frequency of the generated sine wave (The Literature [R66], My Papers ).
A delay-and-multiply (DM) detector can operate with a regenerated sine-wave frequency of zero, or with some other frequency that depends on the particular modulation type and modulation parameters employed by $s(t)$. For example, DSSS signals can be detected by using a quadratic nonlinearity (delay-and-multiply, say) to generate a sine wave with frequency equal to the chipping rate. Such a detector is called a chip-rate detector. For most signal types of interest to us here on the CSP Blog, a delay of zero is a good choice, as it tends to maximize the strength of the generated sine wave.
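A minimal delay-and-multiply sketch follows; the parameter names, the measurement bandwidth, and the half-sine test pulse in the usage check are illustrative assumptions:

```python
import numpy as np

def dm_detector(x, delay, tone_freq, halfband=0.002):
    """Delay-and-multiply detector: form y(t) = x(t) x*(t - delay), then measure
    the power in a small band centered at the expected regenerated tone frequency
    (tone_freq in cycles/sample)."""
    y = x[delay:] * np.conj(x[: len(x) - delay])
    Y = np.abs(np.fft.fft(y)) ** 2 / len(y)
    f = np.fft.fftfreq(len(y))
    # circular distance on the principal frequency domain [-1/2, 1/2)
    mask = np.abs((f - tone_freq + 0.5) % 1.0 - 0.5) <= halfband
    return np.sum(Y[mask])
```

With `delay` equal to zero, $y(t)$ is just the magnitude-squared time series; for a signal with a non-constant envelope (pulse-shaped BPSK, DSSS), $|x(t)|^2$ contains a sine wave at the symbol or chip rate.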
Illustration Using Simulated Signals and Monte Carlo Simulations
We will illustrate the performance and capabilities of the various detector structures using a textbook BPSK signal so that we can control all aspects of the signal, noise, and detectors. The signal uses independent and identically distributed equi-probable symbols (bits) and a pulse-shaping function that is square-root raised-cosine with roll-off parameter of .
The BPSK signal has a symbol rate of (normalized units) and a carrier frequency of . So it is similar to our old friend the textbook rectangular-pulse BPSK signal, but with a more realistic pulse-shaping function.
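For readers who want to synthesize something similar, here is one way to generate a textbook SRRC BPSK signal in Python. The specific numbers below (10 samples per bit, roll-off 0.35, carrier 0.05) are my own illustrative choices, not necessarily the parameters used for the plots in this post:

```python
import numpy as np

def srrc_pulse(nsym, sps, beta):
    """Square-root raised-cosine pulse, nsym symbols long, sps samples/symbol."""
    t = (np.arange(nsym * sps) - nsym * sps / 2) / sps  # time in symbol units
    p = np.zeros_like(t)
    # handle the two removable singularities of the closed-form SRRC expression
    mid = np.isclose(t, 0.0)
    edge = np.isclose(np.abs(t), 1 / (4 * beta))
    reg = ~(mid | edge)
    tr = t[reg]
    num = np.sin(np.pi * tr * (1 - beta)) + 4 * beta * tr * np.cos(np.pi * tr * (1 + beta))
    den = np.pi * tr * (1 - (4 * beta * tr) ** 2)
    p[reg] = num / den
    p[mid] = 1 - beta + 4 * beta / np.pi
    p[edge] = (beta / np.sqrt(2)) * ((1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                                     + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
    return p / np.sqrt(np.sum(p ** 2))  # unit-energy pulse

def srrc_bpsk(nbits, sps=10, beta=0.35, fc=0.05, seed=0):
    """Textbook SRRC BPSK with assumed normalized symbol rate 1/sps and carrier fc."""
    rng = np.random.default_rng(seed)
    bits = rng.choice([-1.0, 1.0], nbits)  # i.i.d. equi-probable bits
    up = np.zeros(nbits * sps)
    up[::sps] = bits
    bb = np.convolve(up, srrc_pulse(12, sps, beta))[: nbits * sps]
    return bb * np.exp(2j * np.pi * fc * np.arange(nbits * sps))
```

Because the SRRC pulse is unit-energy and (approximately) orthogonal to its symbol-spaced shifts, the average power of the generated signal is close to 1/sps.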
Our BPSK signal has non-conjugate cycle frequencies of and conjugate cycle frequencies of , all for . The measured spectral correlation function is shown here:
Notes on Signal, Noise, and Interference Parameters
The various simulation results are meant to be qualitative in nature; the goal is not a detailed parametric study but an understanding of the basic mechanisms and trends. When I allow the noise power to vary from its mean value, I allow only a deviation of at most dB. The reported inband SNRs on the graphs correspond to the mean value of the noise. Similarly, when I allow the power of the signal of interest to vary, I allow a deviation of at most dB from its baseline value, and when I vary the power of a cochannel (partially or fully spectrally overlapping) interferer, I allow a power deviation of at most dB. In this way, the “variable parameter” results subsume a lot of different signal scenarios.
The interferer is also a square-root raised-cosine BPSK signal, but I allow both its bit rate and carrier frequency to vary from trial to trial to create various degrees of spectral overlap with the signal of interest. This is consistent with an interferer with unknown prior parameters.
Let’s look at a few signal environment variations, and also introduce a pre-processing step called spectral whitening along the way.
In each simulation, I consider a wide range of inband signal-to-noise ratios (SNRs). By inband I mean that the SNR is the ratio of the signal power to the power of the noise in the signal bandwidth. This is typically a more meaningful SNR for CSP algorithms than the total SNR, which is simply the signal power divided by the noise power in the sampling bandwidth (the noise power in the entire analysis band). [To see why, consider what spectral correlation measures.]
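The relationship between the two SNRs is simple for white noise: the inband noise power is the total noise power scaled by the fractional bandwidth occupied by the signal. A small sketch (my own illustrative helper, not code from the post):

```python
import numpy as np

def snrs_db(sig_power, noise_power_total, signal_bandwidth, sampling_bandwidth=1.0):
    """Total SNR uses all the noise in the sampling bandwidth; inband SNR uses
    only the noise falling in the signal's band (white-noise assumption)."""
    total = 10 * np.log10(sig_power / noise_power_total)
    inband_noise = noise_power_total * signal_bandwidth / sampling_bandwidth
    inband = 10 * np.log10(sig_power / inband_noise)
    return total, inband
```

For a unit-power signal occupying a tenth of the sampling bandwidth in unit-power noise, the total SNR is 0 dB while the inband SNR is 10 dB.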
For each set of simulation parameters (SNR, interference, etc.), I use Monte Carlo trials on each of $H_0$ and $H_1$. The result of each trial is one detector output value for each simulated detector. I store these numbers, then analyze them to estimate the probability of detection $P_D$ and the probability of false alarm $P_{FA}$.
The detection probability is defined as

$P_D = \mbox{Prob}(\Lambda > \eta \mid H_1),$

and the false-alarm probability is

$P_{FA} = \mbox{Prob}(\Lambda > \eta \mid H_0),$

where $\Lambda$ is the detector output and $\eta$ is the detection threshold. I won’t be talking in this post about how to choose a threshold. Many researchers and engineers want to plug numbers into a formula that provides some kind of optimum threshold, balancing $P_D$ and $P_{FA}$, but in my experience such formulas are only possible in highly simplified problems, and the resulting thresholds must be adjusted using measurement. I suppose one could call them textbook thresholds.
Baseline Simulation: Constant-Power BPSK in Constant-Power AWGN
Here the BPSK signal has the same power on each trial (on $H_1$), and the additive white Gaussian noise has the same power on each trial (on both $H_0$ and $H_1$). The bits that drive the BPSK modulator are chosen independently for each trial, as is the noise sequence.
Let’s first look at histograms of the obtained detector output values. Here is a typical histogram, corresponding to an inband SNR of dB and a block length (observation interval length or processing length or data-record length, all the same thing) of about samples:
Here I am just showing three detectors. The first is the optimal energy detector (OED) described above; its statistics are shown in red. The second is the incoherent multicycle detector (IMCD), where “incoherent” just means that we add the magnitudes of the terms in the optimal MCD. The final detector shown here is the incoherent suboptimal multicycle detector (ISMD), which is what we described above as simply the suboptimal multicycle detector.
Notice that the distributions (histogram curves) for each detector are nearly separate for the two hypotheses $H_0$ and $H_1$. This means good detection performance can be had by choosing a threshold anywhere in the gap between the two curves.
Exactly how does the performance depend on the selection of the threshold $\eta$, especially when the two histograms for the detector output overlap? This is captured in the receiver operating characteristic (ROC), which plots $P_D$ versus $P_{FA}$. That is, each value of $\eta$ produces a pair $(P_{FA}, P_D)$. For the histograms above, here are the ROCs (for all the detectors considered in this post):
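Such ROCs can be computed directly from the stored Monte Carlo detector outputs by sweeping the threshold; a sketch (the function name and the toy statistics in the check are illustrative):

```python
import numpy as np

def empirical_roc(stats_h1, stats_h0):
    """Sweep the threshold eta over all observed statistic values and return
    the corresponding (P_FA, P_D) pairs, where P_D = P(stat > eta | H1) and
    P_FA = P(stat > eta | H0)."""
    thresholds = np.unique(np.concatenate([stats_h0, stats_h1]))
    p_d = np.array([(stats_h1 > eta).mean() for eta in thresholds])
    p_fa = np.array([(stats_h0 > eta).mean() for eta in thresholds])
    return p_fa, p_d
```

When the H0 and H1 histograms do not overlap at all, some threshold yields the pair (0, 1), which is the right-angle ROC.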
There are a few things to notice about this set of ROCs. First, the OED is the best-performing detector because its ROC is nearly a right angle at the upper-left corner, meaning we can achieve a $P_D$ of nearly one at a very small $P_{FA}$. Second, the IMCD (using all cycle frequencies except the non-conjugate cycle frequency of zero) is very nearly as good as the OED. Third, the optimum single-cycle detectors (OSDs) for the features related to the symbol rate are similar to each other, and are all better than the suboptimal single-cycle detectors (SSDs), which are themselves similar to each other. Finally, the DM detector and the ISMD for cycle frequencies that are not exhibited by the data are the worst-performing.
So in this benign environment with a constant-power signal in constant-power noise, energy detection reigns supreme. If we look at the ROCs for several SNRs and a constant block length, we can extract useful graphs by fixing $P_{FA}$ and plotting $P_D$ versus SNR. Let’s fix $P_{FA}$ and see what $P_D$ is for the various detectors:
The performance ordering is maintained as the SNR increases: OED, IMCD, the symbol-rate SDs, all the other SDs, then the DM, and finally the ISMD with false cycle frequencies. All is well except that last one. As the SNR increases, the value of $P_D$ for the false-CF ISMD approaches one. So we are reliably “detecting” a signal that is not actually present!
Why is this? If we recall the post on the resolution product, we may remember that the variance of a spectral correlation estimator is inversely proportional to the time-frequency resolution product of the measurement, but it is also proportional to the ideal value of the noise spectral density. This just means that the variance of the measurement is affected by the measurement parameters as well as by how much non-signal energy is present. We can always overcome high noise by increasing the resolution product.
In the case of using false cycle frequencies, the “noise” component on $H_1$ is the combination of our signal and the noise itself. So on $H_1$, the value of our ISMD statistic is greater than it is on $H_0$, just because there is more “noise” present on $H_1$ than on $H_0$. We could confirm this by repeating the experiment with the hypotheses

$H_0: x(t) = w_0(t)$

$H_1: x(t) = w_1(t),$

where the spectral density for $w_1(t)$ is greater than that for $w_0(t)$. (If you do the experiment, let me know.)
One way around this problem is to spectrally whiten the data prior to applying our detectors. Here, spectral whitening means applying a linear time-invariant filter to the data. The outcome of the filtering yields a signal whose spectral density is a constant over all frequencies in the sampling bandwidth. So, if a data block has a (measured) PSD of $\hat{S}_x(f)$, then the transfer function for the whitening filter is given by

$H(f) = \hat{S}_x(f)^{-1/2},$

which follows from elementary random-process theory for the spectral densities of the input and output of a linear time-invariant system: $S_y(f) = |H(f)|^2 S_x(f)$.
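A per-block whitening sketch follows; the segment length and the small floor `eps` are my own illustrative choices. It estimates the block’s PSD with an averaged periodogram and then applies the inverse-square-root transfer function segment by segment:

```python
import numpy as np

def spectral_whiten(x, nfft=256, eps=1e-12):
    """Whiten a data block: estimate its PSD, then filter with H(f) = 1/sqrt(PSD)."""
    # averaged-periodogram PSD estimate on non-overlapping nfft-point segments
    segs = x[: (len(x) // nfft) * nfft].reshape(-1, nfft)
    psd = np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2, axis=0) / nfft
    # apply the whitening transfer function block-by-block (circular convolution)
    H = 1.0 / np.sqrt(psd + eps)
    out = np.fft.ifft(np.fft.fft(segs, axis=1) * H, axis=1)
    return out.ravel()
```

Re-estimating the PSD of the output with the same segmentation gives an essentially flat spectrum, even when the input is strongly colored.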
If we apply whitening to the data on a trial-by-trial basis, we obtain the following performance curves for the baseline case:
Now we see that the performance ordering has changed, and that the false-CF ISMD does not tell us a non-existent signal is present as the actual signal’s SNR increases. Spectral whitening is also useful when inband narrowband interference is present, for much the same reasons as we’ve outlined above.
The spectral whitening is not perfect. The OED begins to detect the signal as the SNR gets large due to this imperfection.
Finally, we note that the use of spectral whitening as a data pre-processing step means that the spectral correlation function estimates used in the various detectors are actually spectral coherence estimates. Coherence strikes again!
Variation: Constant-Power BPSK in Constant-Power AWGN with Variable-Power Interference
The interferer is QPSK and has variable (from trial to trial) carrier frequency, power, and symbol rate. It is present on both and . Moreover, the randomly chosen interferer carrier frequency is such that the two signals always spectrally overlap, so no linear time-invariant preprocessing step could separate the signals. A typical spectral correlation plot for the combination of the two signals is shown here:
Notice that the two signals cannot be distinguished in the PSD. Relative to the spectral correlation plot for BPSK alone, we see the additional non-conjugate feature that corresponds to the QPSK interferer.
The actual hypotheses for this variation can be expressed as

$H_0: x(t) = i(t) + w(t)$

$H_1: x(t) = s(t) + i(t) + w(t),$

where $i(t)$ is the cochannel QPSK interferer.
The QPSK interferer has random power level that is uniformly distributed in the range dB. The BPSK signal has a constant power of dB, so the interferer power ranges from a tenth of the BPSK power to ten times the BPSK power. The interferer’s center frequency is restricted to lie in an interval to the right of the BPSK center frequency. Finally, the interferer bandwidth ranges from one half the BPSK bandwidth to twice the BPSK bandwidth.
Here are some results for this variation, without the use of spectral whitening:
So here there is no need for spectral whitening, because the false-CF detectors will not generally show detection of a false signal. However, spectral whitening works out well in this kind of case, as we will see next.
Variation: Variable-Power BPSK in Variable-Power Noise and Interference
In this last variation for the textbook SRRC BPSK signal, the signal, interference, and noise all have variable power from trial to trial. Everything else is the same. Here are the results without whitening:
And now with spectral whitening applied to the data on each trial:
So, with or without spectral whitening, when the signal environment is difficult–contains variable cochannel interference and/or variable noise–the cycle detectors are vastly superior to energy detectors.
Illustration Using Collected Signals: WCDMA
I captured minutes of a local WCDMA signal using a (complex) sampling rate of MHz. For each trial in the WCDMA Monte Carlo simulations, I randomly choose a data segment from this long captured signal and add noise to it. A typical spectral correlation function plot for the WCDMA data is shown here:
The significant non-conjugate cycle frequencies are kHz, kHz, and MHz (the chip rate). There are no detected significant conjugate cycle frequencies for this data. Notice the frequency-selective channel implied by the WCDMA PSD, which is normally flat across its bandwidth. The observed channel fluctuates over time.
Baseline Experiment: WCDMA as Captured in Constant-Power AWGN
The block lengths for the WCDMA experiments are reported in terms of the number of DSSS chips, which have length , or microseconds. Here is the result for an inband SNR of dB and a block length of symbols or chips, and no spectral whitening:
So in this benign environment, energy detection is far superior to CSP detection, but the cycle detectors definitely work. We again observe the false-CF detection problem.
Variation: WCDMA as Captured in Constant-Power AWGN with Whitening
When spectral whitening is used, we obtain the following ROCs and probabilities:
In this case, the cycle detectors are superior by a few decibels compared to the OED. The SDs for the cycle frequency of kHz are rather strongly affected by the whitening relative to the other SDs and the MDs. I don’t yet have an explanation for that, but it is clear that real-world (non-textbook) signals are much more complicated than textbook signals, and application of CSP to non-textbook signals requires care.
Let me, and the CSP Blog readers, know if you’ve had good or bad experiences with cycle detectors by leaving a comment. And, as always, I would appreciate comments that point out any errors in the post.