# The Cycle Detectors

CSP shines when the problem involves strong noise or cochannel interference. Here we look at CSP-based signal-presence detection as a function of SNR and SIR.

Let’s take a look at a class of signal-presence detectors that exploit cyclostationarity and in doing so illustrate the good things that can happen with CSP whenever cochannel interference is present, or noise models deviate from simple additive white Gaussian noise (AWGN). I’m referring to the cycle detectors, the first CSP algorithms I ever studied.

Cycle detectors are signal-presence detectors. The basic problem of interest is a binary hypothesis-testing problem, typically formulated as

$\displaystyle H_1: x(t) = s(t) + w(t), \hfill (1)$

$\displaystyle H_0: x(t) = w(t), \hfill (2)$

where $s(t)$ is the signal to be detected, if present, and $w(t)$ is white Gaussian noise. We’ll look at some variations on these hypotheses later in the post.

The idea is to construct a signal processor that operates on the received data $x(t)$ to produce a decision about the presence or absence of the signal of interest (“signal to be detected”) $s(t)$. Such processors usually produce a real number $Y$ that is generally much different on $H_1$ than it is on $H_0$. The common case is that $Y$ is relatively large on $H_1$ and relatively small on $H_0$, but that isn’t required: $Y$ could be consistently small on $H_1$ and large on $H_0$.

A typical mathematical approach to this decision-making problem is to model the signals $s(t)$ and $w(t)$ so that their probabilistic structures are simple and easy to manipulate mathematically. This has led to the very common model in which $s(t)$ is a stationary random process that is statistically independent of the stationary random process $w(t)$, which is itself white and Gaussian (AWGN). Further simplifications can be had in some cases by assuming that the average power of $s(t)$ is much smaller than that of $w(t)$, which is the weak-signal assumption (My Papers [4]) common in signal surveillance and cognitive-radio settings.

Of course, this is the CSP Blog, so we’ll be modeling the signals of interest as cyclostationary cycloergodic random processes (really just as cyclostationary time-series), and by doing so we’ll be able to obtain detectors that are noise- and interference-tolerant.

### Detectors for Stationary Signal Models

Throughout this post we are concerned with detecting signals on the basis of their gross statistical nature. This idea contrasts with another, often successful, approach that is based on exploiting some known segment of the waveform. For example, a signal may periodically transmit a known sequence (or one of a small number of known sequences) so that the intended receiver can estimate the propagation channel and compensate for it (equalization), or so that the receiver can perform low-cost reliable synchronization tasks. An unintended receiver can detect such a signal by performing matched filtering against those known components. In this post, however, we assume the signal to be detected has no such known-signal components, so matched filtering is not applicable.

For a signal that is modeled as stationary, a gross statistical characteristic is its power spectral density (PSD) or its average power (the integral of the signal’s PSD). Detectors that attempt to decide between $H_1$ and $H_0$ on the basis of power or energy are called energy detectors or radiometers.

A simple energy detector is just the sum of the magnitude-squared values of the observed signal samples,

$\displaystyle Y_{ED} = \sum_{j=1}^N \left| x(j) \right|^2. \hfill (3)$

This detector does not take into account the distribution of the signal’s energy in the time or frequency domains—it’s just raw energy. It can be highly effective and has low computational cost, but it suffers greatly when the noise or the signal has time-varying behavior such as that caused by time-variant propagation channels, interference, or background noise. It can also suffer even when all of these elements are time-invariant but some of them are simply unknown or their presumed-known values are in error.
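To make the energy detector concrete, here is a minimal Python/NumPy sketch of (3); the unit-power complex exponential is just an illustrative stand-in signal, not the post's simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy_detector(x):
    """Simple energy detector: sum of magnitude-squared samples, as in (3)."""
    return np.sum(np.abs(x) ** 2)

N = 1024
# Unit-power complex AWGN and a unit-power stand-in signal (a complex exponential).
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
signal = np.exp(2j * np.pi * 0.05 * np.arange(N))

y_h0 = energy_detector(noise)           # H0: noise only, roughly N on average
y_h1 = energy_detector(signal + noise)  # H1: signal plus noise, roughly 2N on average
```

Note that the statistic knows nothing about where the energy lies in time or frequency; it just accumulates raw power.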

The energy and power of a signal are related by a scale factor that is equal to the temporal duration of the measurement ($N$ above). That is, power is energy per unit time. So we can talk about energy detection or power detection, and they are pretty much the same thing. Another way to get at the power of the signal is to integrate the PSD,

$\displaystyle Y_{ED} = \int \hat{S}_x^0(f) \, df, \hfill (4)$

where $\hat{S}_x^0(f)$ is an estimate of the signal PSD $S_x^0(f)$. If the signal is oversampled (relative to its Nyquist rate), then the PSD estimate will correspond to a frequency range that contains some noise-only intervals, typically the intervals near the edges. The power from those noise-only frequency intervals will be included in $Y_{ED}$ along with the power from the signal-plus-noise interval, which degrades the statistic in proportion to the amount of oversampling.

In contrast to the simple ED, the optimal energy detector (for a signal that is weak relative to the noise) weights the estimated PSD by the true one for $s(t)$, effectively de-emphasizing those noise-only intervals, and emphasizing those intervals throughout the signal’s band having the larger signal-to-noise ratios,

$\displaystyle Y_{OED} = \int \hat{S}_x^0(f) S_s^0(f) \, df. \hfill (5)$

$Y_{OED}$ is sometimes called the optimum radiometer.
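A sketch of (5) in Python/NumPy using a crude periodogram as the PSD estimate; the rectangular signal PSD and all parameters here are hypothetical illustrations, not the detectors actually simulated in this post:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4096
f = np.fft.fftfreq(N)
# Hypothetical known signal PSD: a unit-height rectangle over the signal band.
S_s = (np.abs(f) < 0.1).astype(float)

def oed(x):
    """Optimum energy detector, as in (5): estimated PSD weighted by the known signal PSD."""
    psd_est = np.abs(np.fft.fft(x)) ** 2 / len(x)  # crude PSD estimate (periodogram)
    return np.sum(psd_est * S_s) / len(x)          # discrete stand-in for the integral

# Bandlimited stand-in signal matching S_s, plus unit-power complex noise.
signal = np.fft.ifft(np.fft.fft(rng.standard_normal(N)) * S_s)
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

y_h0 = oed(noise)           # the weighting keeps only the in-band noise power
y_h1 = oed(signal + noise)  # in-band signal power raises the statistic
```

The weighting discards the noise-only band edges, which is exactly the degradation the unweighted detector suffers under oversampling.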

When the exact form of the PSD for $s(t)$ is not known (perhaps the carrier frequency is only roughly known, or the pulse-shaping function is not known in advance), the ideal PSD $S_s^0(f)$ can be replaced by the PSD estimate, forming the detection statistic

$\displaystyle Y_{SED} = \int \hat{S}_x^0(f)^2 \, df. \hfill (6)$

I call this detector the suboptimal energy detector (SED).
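Following the same crude-periodogram sketch as above (Python/NumPy, illustrative parameters only), the SED of (6) squares the estimate instead of weighting by a known PSD:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4096
f = np.fft.fftfreq(N)
band = np.abs(f) < 0.1  # the signal band (unknown to the detector itself)

def sed(x):
    """Suboptimal energy detector, as in (6): integral of the squared PSD estimate."""
    psd_est = np.abs(np.fft.fft(x)) ** 2 / len(x)
    return np.sum(psd_est ** 2) / len(x)

# Strong bandlimited stand-in signal plus unit-power complex noise.
signal = 2.0 * np.fft.ifft(np.fft.fft(rng.standard_normal(N)) * band)
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

y_h0 = sed(noise)
y_h1 = sed(signal + noise)  # squaring self-emphasizes the high-PSD (signal) band
```

Squaring the estimate de-emphasizes low-PSD regions automatically, without any prior knowledge of where the signal sits.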

### Detectors for Cyclostationary Signal Models (Cycle Detectors)

The various detectors obtained through mathematical derivation using a cyclostationary (rather than stationary) signal model are collectively referred to as cycle detectors. These detectors can be derived in a variety of ways. Perhaps the most familiar is through likelihood analysis, where a likelihood function is maximized. See The Literature ([R7], [R65]) and My Papers ([4]) for derivations.

The optimum weak-signal detector structure is called the optimum multicycle detector, and it is expressed as the sum of individual terms that contain correlation operations between measured and ideal spectral correlation functions,

$\displaystyle Y_{OMD} \propto \Re \sum_\alpha \int \hat{S}_x^\alpha (f) S_s^\alpha(f)^* \, df. \hfill (7)$

So we sum up the complex-valued correlations between the measured and ideal spectral correlation functions for all cycle frequencies $\alpha$ exhibited by $s(t)$. A single term from the optimum multicycle detector is the optimum single-cycle detector,

$\displaystyle Y_{OSD} \propto \left| \int \hat{S}_x^\alpha (f) S_s^\alpha(f)^* \, df \right|. \hfill (8)$

The suboptimal versions of the multicycle and single-cycle detectors replace the ideal spectral correlation function with the measured spectral correlation function, essentially measuring the energy in the measured spectral correlation function for one (single-cycle) or more (multicycle) cycle frequencies. So the suboptimal single-cycle detector is

$\displaystyle Y_{SSD} \propto \int \left| \hat{S}_x^\alpha(f) \right|^2 \, df.\hfill (9)$
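Here is a rough Python/NumPy sketch of (9). It uses an unsmoothed cyclic periodogram as the spectral correlation estimate, which is a high-variance stand-in for the proper estimators discussed elsewhere on the CSP Blog, and a rectangular-pulse baseband BPSK signal with the post's symbol rate of $1/10$:

```python
import numpy as np

rng = np.random.default_rng(4)
N, T0 = 4000, 10  # block length and symbol period (symbol rate 1/10)

def ssd(x, alpha_bin):
    """Suboptimal single-cycle detector, as in (9): energy in a crude (unsmoothed)
    cyclic-periodogram estimate of the spectral correlation at one cycle frequency."""
    X = np.fft.fft(x)
    scf = np.roll(X, -alpha_bin) * np.conj(X) / len(x)  # ~ X(f + alpha) X*(f) / N
    return np.sum(np.abs(scf) ** 2)

bits = rng.choice([-1.0, 1.0], N // T0)
bpsk = np.repeat(bits, T0)  # rectangular-pulse baseband BPSK
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

alpha_bin = N // T0  # FFT-bin index of the cycle frequency 1/10
y_h0 = ssd(noise, alpha_bin)
y_h1 = ssd(bpsk + noise, alpha_bin)  # larger: cyclic feature plus the extra power
```

Note that part of the $H_1$ increase here comes simply from the extra power on $H_1$, which foreshadows the false-cycle-frequency problem discussed later in the post.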

However, the multicycle detector is more subtle. Even if we knew the formula for the ideal spectral correlation function for the modulation type possessed by $s(t)$, we'd still have a problem with the coherent sum in (7). Each term in the sum is a complex number whose phase depends on the phases of the values (over frequency $f$) of the estimated and ideal spectral correlation functions, and those phases are sensitive to the symbol-clock phase and carrier phase of the signal. In other words, the derived detector structure uses the assumed synchronization (timing) parameters for the signal $s(t)$ exactly as they appear in the $H_1$ hypothesis. If we use the proper form of the spectral correlation function, but the synchronization parameters used in creating the ideal functions differ from those associated with the observed signal, the complex-valued terms in the multicycle sum can add destructively rather than constructively, which degrades detector performance.

We’re in the unfortunate position of estimating timing parameters for a signal we have not yet detected.

So, the suboptimal version of the multicycle detector sums the magnitudes of the individual terms, rather than summing the complex-valued terms. This obviates the need for high-quality estimates of the synchronization parameters of the signal.
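A toy Python/NumPy illustration of why the incoherent sum sidesteps the synchronization problem; the five equal-magnitude terms are hypothetical, standing in for the per-cycle-frequency correlations in (7):

```python
import numpy as np

rng = np.random.default_rng(5)

# Five hypothetical multicycle terms with equal magnitude but phases offset by
# unknown synchronization (symbol-clock and carrier) parameters.
phases = rng.uniform(0.0, 2.0 * np.pi, 5)
terms = np.exp(1j * phases)

coherent = np.real(np.sum(terms))   # the coherent sum in (7): can destructively add
incoherent = np.sum(np.abs(terms))  # the incoherent (suboptimal) sum: always 5 here
```

The incoherent statistic equals the sum of the magnitudes no matter what the phases are, while the coherent one can collapse toward zero (or go negative) for unlucky synchronization offsets.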

Finally, let’s consider the delay-and-multiply detectors. These are detectors that use a simple delay-and-multiply device to generate a sine wave. Then the presence or absence of the sine wave is detected by examining the power in a small band of frequencies centered at the frequency of the generated sine wave (The Literature [R66], My Papers [3]).

A delay-and-multiply (DM) detector can operate with a regenerated sine-wave frequency of zero, or with some other frequency that is dependent on the particular modulation type and modulation parameters employed by $s(t)$. For example, DSSS signals can be detected by using a quadratic nonlinearity (delay-and-multiply, say) to generate a sine wave with frequency equal to the chipping rate. Such a detector is called a chip-rate detector. For most signal types of interest to us here on the CSP Blog, a delay of zero is a good choice, as it tends to maximize the strength of the generated sine wave.
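The zero-delay case is easy to see in Python/NumPy. Squaring a rectangular-pulse BPSK signal on a carrier (an illustrative stand-in, not the post's SRRC signal) regenerates a spectral line at twice the carrier frequency:

```python
import numpy as np

rng = np.random.default_rng(6)
N, T0, fc = 4000, 10, 0.05

bits = rng.choice([-1.0, 1.0], N // T0)
x = np.repeat(bits, T0) * np.exp(2j * np.pi * fc * np.arange(N))  # rect-pulse BPSK at fc
x = x + (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # add noise

# Delay-and-multiply with zero delay is just a squarer; for BPSK it regenerates
# a sine wave at twice the carrier frequency (a conjugate cycle frequency).
y = x * x
spec = np.abs(np.fft.fft(y)) / N
tone_bin = int(round(2 * fc * N))  # expected spectral line at 2*fc = 0.1
```

Detection then amounts to measuring the power in a small band around `tone_bin` and comparing it to the surrounding noise floor.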

### Illustration Using Simulated Signals and Monte Carlo Simulations

We will illustrate the performance and capabilities of the various detector structures using a textbook BPSK signal so that we can control all aspects of the signal, noise, and detectors. The signal uses independent and identically distributed equi-probable symbols (bits) and a pulse-shaping function that is square-root raised-cosine with roll-off parameter of $1.0$.

The BPSK signal has a symbol rate of $f_{sym} = 1/T_0 = 1/10$ (normalized units) and a carrier frequency of $f_c = 0.05$. So it is similar to our old friend the textbook rectangular-pulse BPSK signal, but with a more realistic pulse-shaping function.

Our BPSK signal has non-conjugate cycle frequencies of $\alpha = k/10 = kf_{sym}$ and conjugate cycle frequencies of $\alpha = 2f_c + kf_{sym} = 0.1 + k/10$, all for $k \in \{-1, 0, 1\}$. The measured spectral correlation function is shown here:

### Notes on Signal, Noise, and Interference Parameters

The various simulation results are meant to be qualitative in nature; the goal is not a detailed parametric study but an understanding of the basic mechanisms and trends. When I allow the noise power to vary from its mean value, I allow only a deviation of at most $\pm 1$ dB, and the reported inband SNRs on the graphs correspond to the mean noise power. Similarly, when I allow the power of the signal of interest to vary, I allow a deviation of at most $\pm 3$ dB from its baseline value, and when I vary the power of a cochannel (partially or fully spectrally overlapping) interferer, I allow a power deviation of at most $\pm 10$ dB. In this way, the variable-parameter results subsume a lot of different signal scenarios.

The interferer is also a square-root raised-cosine signal, but I allow both its symbol rate and carrier frequency to vary from trial to trial to create various degrees of spectral overlap with the signal of interest. This is consistent with an interferer whose parameters are unknown a priori.

Let’s look at a few signal environment variations, and also introduce a pre-processing step called spectral whitening along the way.

In each simulation, I consider a wide range of inband signal-to-noise ratios (SNRs). By inband I mean that the SNR is the ratio of the signal power to the power of the noise in the signal bandwidth. This is typically a more meaningful SNR for CSP algorithms than the total SNR, which is simply the signal power divided by the noise power in the sampling bandwidth (the noise power in the entire analysis band). [To see why, consider what spectral correlation measures.]

For each set of simulation parameters (SNR, interference, etc.), I use $1000$ Monte Carlo trials on each of $H_1$ and $H_0$. The result of each trial is one detector output value for each simulated detector. I store these numbers, then analyze them to estimate the probabilities of detection $P_D$ and false alarm $P_{FA}$.

The detection probability is defined as

$\displaystyle P_D = \mbox{\rm Prob} \left[ Y > \eta | H_1 \right], \hfill (10)$

and the false-alarm probability is

$\displaystyle P_{FA} = \mbox{\rm Prob} \left[ Y > \eta | H_0 \right], \hfill (11)$

where $\eta$ is the detection threshold. I won't be talking in this post about how to choose a threshold. Many researchers and engineers want to plug into a formula that provides some kind of optimum threshold, balancing $P_D$ and $P_{FA}$, but in my experience such formulas are only possible in highly simplified problems, and must be adjusted using measurement. I suppose one could call them textbook thresholds.
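A measurement-based alternative to a textbook threshold is to set $\eta$ empirically from noise-only trials. A Python/NumPy sketch with stand-in Gaussian detector outputs (the distributions are illustrative, not those of any detector above):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in detector outputs from 1000 H0 (noise-only) Monte Carlo trials.
y_h0 = rng.normal(0.0, 1.0, 1000)

# Measurement-based threshold: a high quantile of the H0 statistics, chosen so
# that the empirical false-alarm rate matches a target value.
target_pfa = 0.05
eta = np.quantile(y_h0, 1.0 - target_pfa)
achieved_pfa = np.mean(y_h0 > eta)
```

On fresh data the achieved $P_{FA}$ will fluctuate around the target, which is why such thresholds need periodic re-measurement in practice.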

#### Baseline Simulation: Constant-Power BPSK in Constant-Power AWGN

Here the BPSK signal has the same power on each trial (on $H_1$), and the additive white Gaussian noise has the same power on each trial (on both $H_1$ and $H_0$). The bits that drive the BPSK modulator are chosen independently for each trial, as is the noise sequence.

Let’s first look at histograms of the obtained detector output values. Here is a typical histogram, corresponding to an inband SNR of $-11$ dB and a block length (observation interval length or processing length or data-record length, all the same thing) of about $1640$ samples:

Here I am just showing three detectors. The first is the optimal energy detector (OED) described above; its statistics are shown in red. The second is the incoherent optimal multicycle detector (IOMD), where the "incoherent" word just means that we add the magnitudes of the terms in the optimal MCD. The final detector shown here is the incoherent suboptimal multicycle detector (ISMD), which is what we described above as simply the suboptimal multicycle detector.

Notice that the distributions (histogram curves) for each detector are nearly separate for the two hypotheses $H_1$ and $H_0$. This means good detection performance can be had by choosing a threshold $\eta$ anywhere in the gap between the two curves.

Exactly how does the performance depend on the selection of the threshold $\eta$, especially when the two histograms for the detector output overlap? This is captured in the receiver operating characteristic (ROC), which plots $P_D$ versus $P_{FA}$. That is, each value of $\eta$ produces a pair $(P_D(\eta), P_{FA}(\eta))$. For the histograms above, here are the ROCs (for all the considered detectors in this post)
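Tracing a ROC from stored Monte Carlo outputs is a one-liner per hypothesis. A Python/NumPy sketch with stand-in Gaussian detector statistics (illustrative distributions only):

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in detector outputs: 1000 Monte Carlo trials on each hypothesis.
y_h0 = rng.normal(0.0, 1.0, 1000)
y_h1 = rng.normal(2.0, 1.0, 1000)

# Sweep the threshold over all observed values; each eta yields one ROC point
# (P_FA(eta), P_D(eta)), per (10) and (11).
thresholds = np.sort(np.concatenate([y_h0, y_h1]))
p_fa = np.array([np.mean(y_h0 > eta) for eta in thresholds])
p_d = np.array([np.mean(y_h1 > eta) for eta in thresholds])
```

Both curves are monotone non-increasing in $\eta$, and a useful detector keeps $P_D$ above $P_{FA}$ across the sweep.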

There are a few things to notice about this set of ROCs. First, the OED is the best-performing detector: its ROC is nearly a right angle at $(0, 1)$, meaning we can achieve a $P_D$ of nearly $1$ at a very small $P_{FA}$. Second, the IOMD (using all cycle frequencies except non-conjugate zero) is very nearly as good as the OED. Third, the OSDs for the features related to the symbol rate $1/T_0$ perform similarly, and are all better than the SSDs, which in turn perform similarly to one another. Finally, the DM for $\alpha = 0$ and the ISMD for cycle frequencies that are not exhibited by the data are the worst-performing.

So in this benign environment with a constant-power signal in constant-power noise, energy detection reigns supreme. If we look at the ROCs for several SNRs and a constant block length, we can extract useful graphs by fixing $P_{FA}$ and plotting $P_D$. Let’s fix $P_{FA}$ at $0.05$ and see what $P_D$ is for the various detectors:

The performance ordering is maintained as the SNR increases: OED, IOMD, $2f_c$ SDs, all the other SDs, then the DM, and finally the ISMD with false cycle frequencies. All is well except that last one. As the SNR increases, the value of $P_D$ for $P_{FA} = 0.05$ for the false-CF ISMD approaches one. So we are reliably detecting a signal that is not actually present!

Why is this? If we recall the post on the resolution product, we may remember that the variance of a spectral correlation estimator is inversely proportional to the time-frequency resolution product $\Delta t \ \Delta f$ of the measurement, but it is also proportional to the ideal value of the noise spectral density $N_0$. This just means that the variance of the measurement is affected by the measurement parameters as well as how much non-signal energy is present. We can always overcome high noise by increasing the resolution product.

In the case of using false cycle frequencies, the “noise” component on $H_1$ is the combination of our signal $s(t)$ and the noise itself. So on $H_1$, the value of our ISMD statistic is greater than it is on $H_0$, just because there is more “noise” present on $H_1$ than on $H_0$. We could confirm this by repeating the experiment where

$H_1: x(t) = w_1(t), \hfill (12)$

$H_0: x(t) = w_2(t), \hfill (13)$

where the spectral density for $w_1(t)$ is greater than that for $w_2(t)$. (If you do the experiment, let me know.)

One way around this problem is to spectrally whiten the data prior to applying our detectors. Here, spectral whitening means applying a linear time-invariant filter to the data. The outcome of the filtering yields a signal whose spectral density is a constant over all frequencies in the sampling bandwidth. So, if a data block has a (measured) PSD of $S(f)$, then the transfer function for the whitening filter $H_w(f)$ is given by

$\displaystyle H_w(f) = \frac{1}{(S(f))^{1/2}}, \hfill (14)$

which follows from elementary random-process theory for the spectral densities of the input and output of a linear time-invariant system.
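A minimal Python/NumPy sketch of trial-by-trial whitening, assuming the PSD in (14) is a smoothed periodogram estimated from the same block (the moving-average coloring filter and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
N = 8192

# Colored data: white noise through a moving-average filter (non-flat PSD).
x = np.convolve(rng.standard_normal(N), np.ones(8) / np.sqrt(8.0), mode="same")

def smoothed_psd(x):
    """Crude PSD estimate: periodogram smoothed over 64 neighboring bins."""
    p = np.abs(np.fft.fft(x)) ** 2 / len(x)
    return np.convolve(p, np.ones(64) / 64.0, mode="same")

def flatness(p):
    """Relative spread of a PSD estimate: small means nearly flat (white)."""
    return np.std(p) / np.mean(p)

# Whitener: scale the spectrum by H_w(f) = 1/sqrt(S(f)), as in (14); the small
# constant guards against division by near-zero PSD estimates.
S = smoothed_psd(x)
x_white = np.fft.ifft(np.fft.fft(x) / np.sqrt(S + 1e-12))
```

Because the whitener is built from the block's own measured PSD, the whitening is imperfect, which is the residual effect noted below for the OED.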

If we apply whitening to the data on a trial-by-trial basis, we obtain the following performance curves for the baseline case:

Now we see that the performance ordering has changed, and that the false-CF ISMD does not tell us a non-existent signal is present as the actual signal’s SNR increases. Spectral whitening is also useful when inband narrowband interference is present, for much the same reasons as we’ve outlined above.

The spectral whitening is not perfect. The OED begins to detect the signal as the SNR gets large due to this imperfection.

Finally, we note that the use of spectral whitening as a data pre-processing step means that the spectral correlation function estimates used in the various detectors are actually spectral coherence estimates. Coherence strikes again!

#### Variation: Constant-Power BPSK in Constant-Power AWGN with Variable-Power Interference

The interferer is QPSK and has variable (from trial to trial) carrier frequency, power, and symbol rate. It is present on both $H_1$ and $H_0$. Moreover, the randomly chosen interferer carrier frequency is such that the two signals always spectrally overlap, so no linear time-invariant preprocessing step could separate the signals. A typical spectral correlation plot for the combination of the two signals is shown here:

Notice that the two signals cannot be distinguished in the PSD. Relative to the spectral correlation plot for BPSK alone, we see the additional non-conjugate feature that corresponds to the QPSK interferer.

The actual hypotheses for this variation can be expressed as

$H_1: x(t) = s(t) + i(t) + w(t), \hfill (15)$

$H_0: x(t) = i(t) + w(t). \hfill (16)$

The QPSK interferer has random power level that is uniformly distributed in the range $[-10, 10]$ dB. The BPSK signal has a constant power of $0$ dB, so the interferer power ranges from a tenth of the BPSK power to ten times the BPSK power. The interferer’s center frequency is restricted to lie in an interval to the right of the BPSK center frequency. Finally, the interferer bandwidth ranges from one half the BPSK bandwidth to twice the BPSK bandwidth.

Here are some results for this variation, without the use of spectral whitening:

So here there is no need for spectral whitening, because the false-CF detectors will not generally show detection of a false signal. However, spectral whitening works out well in this kind of case, as we will see next.

#### Variation: Variable-Power BPSK in Variable-Power Noise and Interference

In this last variation for the textbook SRRC BPSK signal, the signal, interference, and noise all have variable power from trial to trial. Everything else is the same. Here are the results without whitening:

And now with spectral whitening applied to the data on each trial:

So, with or without spectral whitening, when the signal environment is difficult–contains variable cochannel interference and/or variable noise–the cycle detectors are vastly superior to energy detectors.

### Illustration Using Collected Signals: WCDMA

I captured $10$ minutes of a local WCDMA signal using a (complex) sampling rate of $6.25$ MHz. For each trial in the WCDMA Monte Carlo simulations, I randomly choose a data segment from this long captured signal and add noise to it. A typical spectral correlation function plot for the WCDMA data is shown here:

The significant non-conjugate cycle frequencies are $15$ kHz, $120$ kHz, and $3.84$ MHz (the chip rate). There are no detected significant conjugate cycle frequencies for this data. Notice the frequency-selective channel implied by the WCDMA PSD, which is normally flat across its bandwidth. The observed channel fluctuates over time.

#### Baseline Experiment: WCDMA as Captured in Constant-Power AWGN

The block lengths for the WCDMA experiments are reported in terms of the number of DSSS chips, which have duration $1/f_{chip} = 1/(3.84\ \mbox{MHz}) \approx 0.26\ \mu$s. Here is the result for an inband SNR of $-9$ dB and a block length of $40206$ symbols or chips, and no spectral whitening:

So in this benign environment, energy detection is far superior to CSP detection, but the cycle detectors definitely work. We again observe the false-CF detection problem.

#### Variation: WCDMA as Captured in Constant-Power AWGN with Whitening

When spectral whitening is used, we obtain the following ROCs and probabilities:

In this case, the cycle detectors are superior by a few decibels compared to the OED. The SDs for the cycle frequency of $120$ kHz are rather strongly affected by the whitening relative to the other SDs and the MDs. I don't yet have an explanation for that, but it is clear that real-world (non-textbook) signals are much more complicated than textbook signals, and application of CSP to them requires care.

Let me, and the CSP Blog readers, know if you’ve had good or bad experiences with cycle detectors by leaving a comment. And, as always, I would appreciate comments that point out any errors in the post.

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.

## 31 thoughts on “The Cycle Detectors”

1. aaron says:

For equation(11), the false alarm probability should be P[Y>=\mu|H0], right?

1. Yes! Thanks again for finding a typo/error. It is fixed now. Please continue! Find them all!

2. aaron says:

sorry, not “mu”, should be “eta”

3. Paul says:

Is there a conjugate SCF version of Equation (9)? I have not been able to find a good definition in the literature, including [1]. For the non-conjugate SCF, Equation (9) can be normalized as shown in Eqn. (34) in [1], which gives you values between [0,1]. What is the equivalent for the conjugate SCF?

[1] G. Zivanovic and W. Gardner, "Degrees of Cyclostationarity and Their Application to Detection and Estimation," Signal Processing, March 1991.

1. Is there a conjugate SCF version of Equation (9)?

Yes, just replace the non-conjugate SCF with the conjugate SCF.

For the non-conjugate SCF, Equation (9) can be normalized as shown in Eqn. (34) in [1], which gives you values between [0,1]. What is the equivalent for the conjugate SCF?

The conjugate coherence is presented in the coherence post at Equation (14). I believe you can simply replace the (non-conjugate) SCF in (34) with the conjugate SCF, and proceed with a parallel development, substituting the magnitude-squared conjugate coherence for the magnitude-squared non-conjugate coherence in (35).

If the idea is to figure out which cycle frequencies to include in some multi-cycle detector, or which single cycle-frequency to use in a single-cycle detector (for a signal that exhibits several cycle frequencies), you might just convert your complex-valued signal to a real-valued one (analytically) and then use the formulas in [1] without modification.

Does that help?

4. Paul says:

Thanks Chad, that definitely helps. I’ll work through the derivation and see if that gets me a properly scaled ([0,1]) conjugate version.

5. Serg says:

I implemented a software model (MATLAB code) of spectral correlation for detecting signals and estimating their parameters.
As a result of the model, some questions arose which, in my opinion, contradict some of your statements in your blog.

1. As you wrote:
«To apply our cyclostationary signal processing tools effectively, we would like to isolate these signals in time and frequency to the greatest extent possible using linear time-invariant filtering (for separating in the frequency dimension) and time-gating (for separating in the time dimension). Then the isolated signal components can be processed serially.»
In my opinion, if we know the central frequency of the filter and which bandwidth to use for it, we already know enough information about the signal. Then why do we need to apply spectral correlation to estimate the same information (central frequency and bandwidth) via the cyclic frequency?
2. After cyclic spectrum estimation, the result from the impulse response of the filter prevails over the result from the signal. For example, the cycle spectrum of the filtered noise (figure 4) is almost the same as the cycle spectrum of filtered noise+signal (figure 3); that is, the shape of the cyclic spectrum is determined only by the filter.
How can I distinguish the signal case and the signal+noise case after pre-filtering?

3. One more problem is the inconsistency of the results of the simple nonlinear transformation with the results of the spectral correlation transformation.
As you wrote:
«That is, the separation between correlated narrowband signal components of x(t) is the same as a frequency of a sine wave that can be generated by a quadratic nonlinearity (for example, a squarer or a delay-and-multiply device) operating on the original time-series data x(t).»
From this it follows that the results of the simple nonlinear transformation are "equal" to the results of the spectral correlation transformation.
Though, the results of my modeling give contradictory conclusions. The amplitude of a sine wave generated by a quadratic nonlinearity at low SNR is enough for detection of the cyclostationary signals. However, the cycle spectrum of pure noise and the cycle spectrum of signal+noise do not differ. Thus, it is impossible to carry out the detection using the cycle spectrum (figure 3, figure 4), but possible to carry out the detection using a quadratic nonlinearity (figure 5).

Consequently, are cyclostationary spectra for the detection of signals no better than the usual nonlinear transformation, and does detection of the cycle frequencies work better with the nonlinear transformation?

% close all;
clear all;
clc

M = 2;
k = log2(M);
Fd = 1; %symbol frequency
Fs = 4;%sampling frequency
rolloff = 0.3;
delay = 3;
EbNodB =-5;
data_len = 2^17; % number symbols

df=1/16;
dalpha=df/10000;

%%%%% signal %%%%%%%%%%%%%%%%%%%%%%
h = modem.qammod('M', M, 'SymbolOrder', 'Gray');
[num den] = rcosine(Fd, Fs, 'fir/sqrt', rolloff, delay);

burst = randi(M,data_len,1)-1;
burst_map = modulate(h, burst);
burst_map=burst_map’;
[I_TxWaveform, t] = rcosflt(real(burst_map), Fd, Fs, 'filter', num);
[Q_TxWaveform, t] = rcosflt(imag(burst_map), Fd, Fs, 'filter', num);

signal = I_TxWaveform + sqrt(-1)*Q_TxWaveform;

% signal=make_bpsk_srrc (data_len, 4, 0.5)’;

%%%%%% noise %%%%%%%%%%%%%%%%%%%%%
EbNo = 10^(EbNodB/10);
EsNo = EbNo*k; % Convert EbNo to EsNo.
EsNodB=10*log10(EsNo);
SNR=EsNodB-10*log10(Fs/Fd);

sigPower = sum(abs(signal(:)).^2)/length(signal(:));
sigPower = 10*log10(sigPower);
noisePower = sigPower-SNR;
noise=wgn(size(signal,1), size(signal,2), noisePower, 1, [], 'dbw', 'complex');

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%% signal+noise %%%%%%%%%
Signal_N = 1*signal+1*noise;
figure, plot(medfilt1(abs(fft(Signal_N)),511)), hold on, plot(1500*abs(fft(num, length(Signal_N))),'r')% test

[Signal_N_flt, t] = rcosflt(Signal_N, 1, 1, 'filter', num);
% [Signal_N_flt, t] = rcosflt(Signal_N, 1, 1, 'filter', 1);
[noise_flt, t] = rcosflt(noise, 1, 1, 'filter', num);

%%% SCF %%%%
% df=1/16;
% dalpha=df/10000;

[Sx1,alphao,fo] = commP25ssca(signal,1,df,dalpha);
fig1 = figure('Position',figposition([5 50 40 40]));
commP25plot(Sx1,alphao,fo);
title('signal free noise')
view(180, 0)

[Sx2,alphao,fo] = commP25ssca(Signal_N_flt,1,df,dalpha);
fig1 = figure('Position',figposition([47 50 40 40]));
commP25plot(Sx2,alphao,fo);
title('signal+noise')
view(180, 0)

[Sx3,alphao,fo] = commP25ssca(noise_flt,1,df,dalpha);
fig1 = figure('Position',figposition([47 1 40 40]));
commP25plot(Sx3,alphao,fo);
title('noise')
view(180, 0)

sm=0;
Takt_sm=abs( ( Signal_N_flt(1:end-sm).*conj(Signal_N_flt(sm+1:end)) ) );
spTakt_sm=abs(fft(zscore(Takt_sm)));

fig1 = figure('Position',figposition([5 1 40 40]));
plot(spTakt_sm,'.-')

1. Thanks for your detailed comment, Serg, and for reading the CSP Blog.

As a result of the model, some questions arose which, in my opinion, contradict some of your statements in your blog.

Let’s take a look!

1. As you wrote:
«To apply our cyclostationary signal processing tools effectively, we would like to isolate these signals in time and frequency to the greatest extent possible using linear time-invariant filtering (for separating in the frequency dimension) and time-gating (for separating in the time dimension). Then the isolated signal components can be processed serially.»

Although you placed your comment in the Comments section of “The Cycle Detectors,” this quote of mine is from a post called “Automatic Spectral Segmentation.” Cycle detectors are non-blind detectors, since they require prior knowledge of one or more cycle frequencies. The quote above is in the context of blind detection and signal characterization.

In my opinion, if we know the central frequency of the filter and which bandwidth to use for it, we already know enough information about the signal. Then why do we need to apply spectral correlation to estimate the same information (central frequency and bandwidth) via the cyclic frequency?

I’m not quite sure I understand your question, but the idea of the cycle detectors is presence detection: is the signal present or not? So we can have prior information about where the signal might reside in frequency (carrier-frequency knowledge) and/or its bandwidth (symbol-rate knowledge) and still not know whether it is actually present at any given instant of time. Therefore we require a detector.

2. After cyclic spectrum estimation, the result from the impulse response of the filter prevails over the result from the signal. For example, the cycle spectrum of the filtered noise (figure 4) is almost the same as the cycle spectrum of filtered noise+signal (figure 3); that is, the shape of the cyclic spectrum is determined only by the filter.
How can I distinguish the signal case and the signal+noise case after pre-filtering?

I couldn’t run your MATLAB code directly because my (2018) version did not want to run modem.qammod (deprecated and removed), but I substituted a QPSK signal of my own making. I see that you wanted to process a QPSK signal with square-root raised-cosine pulses with roll-off of 0.3 and symbol rate of 1/4, so I generated that kind of signal using my own tools. That’s when I saw what was confusing you: The spectral correlation function for the case of signal+noise showed the same cycle frequencies as the spectral correlation function for noise alone, namely harmonics of the symbol rate (0.25, 0.5, …).

That is clearly an error, and the error is in the SSCA code you were using: a MATLAB-provided function called commP25ssca.m. That function produces spurious cycle frequencies that are related to the number of strips (the reciprocal of the input parameter df). Please see my new post on this topic.

3. One more problem is the inconsistency of the results of the simple nonlinear transformation with the results of the spectral correlation transformation.
As you wrote:
«That is, the separation between correlated narrowband signal components of x(t) is the same as a frequency of a sine wave that can be generated by a quadratic nonlinearity (for example, a squarer or a delay-and-multiply device) operating on the original time-series data x(t).»

The quote is from a third post called The Spectral Correlation Function. It is true, and it is a statement about cycle frequencies, not comparisons between different complicated nonlinear transformations.

From this it follows that the results of the simple nonlinear transformation are “equal” to the results of the spectral correlation transformation.

No, this doesn’t follow.

However, the results of my modeling lead to contradictory conclusions. The amplitude of the sine wave generated by a quadratic nonlinearity at low SNR is sufficient for detection of the cyclostationary signal. Yet the cycle spectrum of pure noise and the cycle spectrum of signal+noise do not differ. Thus, it is impossible to carry out detection using the cycle spectrum (figures 3 and 4), but it is possible using a quadratic nonlinearity (figure 5).

Consequently, is the cyclostationary spectrum no better than the usual nonlinear transformation for signal detection, and does detection of the cycle frequencies work better with a nonlinear transformation?

This is a consequence of the misleading nature of the output of commP25ssca.m. Unfortunately for you, the exact locations of the spurious cycle frequencies are the cycle frequencies of your signal, so it looked to you like the signal’s cycle frequencies were detected whether or not the signal was present! In other words, you chose a symbol rate of $1/4$, and the choice of df = $1/16$ led to spurious cycle frequencies of harmonics of $1/4$. Try setting your symbol rate to $1/5$ and repeating your measurements.

Overall, simple nonlinear transformations (such as squarers) can be quite effective as CSP detectors, but the cycle detectors offer a bit better performance, and they all work fine provided the CSP functions are correctly implemented.
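To make the squarer/delay-and-multiply idea concrete, here is a minimal Python/NumPy sketch of my own (not the MATLAB code discussed above); the rectangular-pulse BPSK signal, the symbol rate of 1/8, and the delay choice are all assumptions made for the illustration. The magnitude of the Fourier coefficient of the delay-and-multiply output, evaluated at the symbol-rate cycle frequency, is large when the signal is present and small when only noise is present:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 16384                 # samples per block
sps = 8                   # samples per symbol -> symbol rate of 1/8

# Rectangular-pulse BPSK (illustrative; the post itself uses SRRC pulses)
bits = rng.choice([-1.0, 1.0], size=T // sps)
s = np.repeat(bits, sps)
w = rng.normal(size=T)    # unit-power AWGN

def dm_statistic(x, alpha, d):
    """Fourier-coefficient magnitude of the delay-and-multiply output
    y[n] = x[n+d] * x[n] at candidate cycle frequency alpha."""
    y = x[d:] * x[:-d]
    n = np.arange(len(y))
    return np.abs(np.mean(y * np.exp(-2j * np.pi * alpha * n)))

alpha = 1.0 / sps                              # symbol-rate cycle frequency
Y1 = dm_statistic(s + w, alpha, d=sps // 2)    # H1: signal + noise
Y0 = dm_statistic(w, alpha, d=sps // 2)        # H0: noise only
print(f"H1 statistic {Y1:.3f}, H0 statistic {Y0:.3f}")
```

With these parameters the H1 statistic is roughly an order of magnitude larger than the H0 statistic, even at 0 dB SNR, which is the essence of a quadratic-nonlinearity detector.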

Let me know what you think!

1. Serg says:

As I understand it, the main reason for the problems in the simulation was the error in the implementation of the commP25ssca function, and in order to move on, this error should be eliminated.
I hadn't studied the algorithm deeply, because at first I just wanted to see its capabilities and how it works; to eliminate the error, it is necessary to understand the algorithm precisely.

6. Mohamed Salah says:

Could you please post, or send me, the MATLAB code for drawing the ROC (Pd vs. Pfa) and (Pd vs. SNR) plots?

1. Thanks for stopping by the CSP Blog, Mohamed!

I don’t usually give out code. Is there a particular problem you are having in creating a $P_D$ versus $P_{FA}$ plot?

7. Chen says:

I have a question about how to blindly detect cochannel interference when a communication signal is present, assuming the cochannel interference is a cyclostationary signal. “Blind” means that there is no knowledge about the interference.
For example, the blind interferer is a BPSK signal, named i(t). The desired received signal is another BPSK signal, named s(t). The hypotheses for this detection can be expressed as
H1: r(t) = s(t) + w(t) + i(t)
H0: r(t) = s(t) + w(t)
where w(t) is AWGN.
How about using spectral whitening to eliminate s(t) and then detecting i(t)? Or is there another method to resolve the question above?

1. Chen: Thanks for stopping by the CSP Blog, and for the questions.

Cycle detectors are non-blind detectors in that they exploit some known aspect(s) of the signal-to-be-detected. This information is most often a list of cycle frequencies, but can also include the detailed shape of the spectral correlation as a function of frequency $f$.

The problem you describe here looks like it has a BPSK signal with known parameters and an interference signal with unknown parameters, and it is the interference signal that you want to detect. (Please correct me with a reply if this is an incorrect interpretation of what you wrote.) So we can’t use cycle detectors to directly detect the presence of the interferer.

We can’t ever “eliminate” a signal by applying a spectral whitening filter. It just adjusts the relative magnitudes and phases of the frequency components making up the data. I suppose if the input PSD has a Dirac delta function, then the whitening filter will have an infinitely deep notch, but that is not practical.

I think a good approach for your problem here, if I’ve correctly understood it, is to use blind methods of CSP, such as the FAM or SSCA. You can blindly detect the presence of all the significant cycle frequencies, then use your prior knowledge of the signal $s(t)$ to eliminate its cycle frequencies from the list. If anything is left over, it must correspond to $i(t)$.
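A toy Python/NumPy sketch of that set-difference idea (my own construction, not the method of any particular paper): a crude delay-and-multiply scan over a grid of candidate cycle frequencies stands in for a real blind estimator such as the SSCA or FAM, and the rectangular-pulse signals, rates, delay, and threshold factor are all assumptions made for the demo. We keep the candidates with large statistics and subtract the known cycle frequencies of s(t); whatever remains is attributed to i(t).

```python
import numpy as np

rng = np.random.default_rng(4)
T = 32768

def rect_bpsk(sps, n, rng):
    """Rectangular-pulse BPSK at symbol rate 1/sps (illustrative stand-in)."""
    x = np.repeat(rng.choice([-1.0, 1.0], size=n // sps + 1), sps)
    return x[:n]

s = rect_bpsk(8, T, rng)            # known signal: symbol rate 1/8 = 0.125
i = rect_bpsk(5, T, rng)            # unknown interferer: symbol rate 1/5 = 0.2
r = s + i + 0.5 * rng.normal(size=T)

def dm_spectrum(x, alphas, d):
    """Delay-and-multiply statistic at each candidate cycle frequency."""
    y = x[d:] * x[:-d]
    n = np.arange(len(y))
    return np.array([np.abs(np.mean(y * np.exp(-2j * np.pi * a * n)))
                     for a in alphas])

alphas = np.arange(1, 250) / 1000.0        # crude candidate-CF grid
stats = dm_spectrum(r, alphas, d=2)
detected = set(np.round(alphas[stats > 5 * np.median(stats)], 3))

known = {0.125}                            # cycle frequencies of s(t)
leftover = detected - known                # attributed to the interferer
print(sorted(leftover))
```

The symbol rate of the interferer (1/5) should survive the subtraction, while the known rate (1/8) is removed from the detected list.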

1. Chen says:

Thanks for your patience and quick reply. Sorry, my mother tongue is not English.
You have correctly understood my question and provided an idea for me. Furthermore, please allow me to recommend a doctoral dissertation, ‘Cyclostationary Methods for Communication and Signal Detection Under Interference.’ It presents a blind signal detector for detecting the presence of the interference signal i(t) whenever d(t) is present. The detection problem can be expressed via two hypotheses, H0 and H1. The received signals are:
H0: r(t) = d(t) + n(t)
H1: r(t) = d(t) + i(t) + n(t)
where d(t) is the desired signal, i(t) is the interference signal, and n(t) is additive white Gaussian noise. The proposed detector measures the output of a time-varying whitening (TVW) filter and compares it against a detection threshold in order to determine whether a signal is new to the environment. The TVW filter is based on an adaptive FRESH filter and is able to reject known signals in the environment and pass any others that are not spectrally correlated in the same fashion. The output of the TVW filter is then used for detection, comparing the power levels of the frequency bins in aggregate against a threshold level. By whitening the spectrum according to the spectral correlation of the desired signal, the TVW filter is able to pass an interference signal that overlaps in frequency, enabling its detection.
Do you think the proposed detector works? I look forward to your reply and further communication.

1. I’m not sure why we need the TVW here. Why not simply apply a blind CF estimator, like the SSCA or FAM, to determine all the cycle frequencies that are present. Presumably you know the cycle frequencies for $d(t)$. Other cycle frequencies are then attributed to $i(t)$. All of these cycle frequencies can be used in a FRESH filter to extract $d(t)$ from $r(t)$, or $i(t)$ from $r(t)$, whichever is of interest.

8. Chen says:

I have another question about spectral whitening. Which signals do you want to filter before applying the proposed cycle detector? Could you please explain the theory of spectral whitening in more detail? Thanks a lot!

1. You have to apply the spectral whitening filter to the observed data; that is all you have access to.

I’m wondering if you can ask a more specific question about spectral whitening. My question to you is: Do you understand Equation (14)? What about (14) requires clarification? (I can use your response to update/clarify the post itself.)

1. Chen says:

Now I want to ask a more specific question about spectral whitening.
In the case of using false cycle frequencies, in order to solve the problem that the spectral density for w1(t) is greater than that for w2(t), one way is to spectrally whiten the data prior to computing the spectral correlation function. Spectral whitening means that the spectrum of the data is multiplied by the squared magnitude of H(f). The resulting value is 1, right? If the result is 1, how do we carry out the next step, namely, the proposed cycle detection?
I don't know whether I have stated my question clearly, because of my poor English. I sincerely await your reply!

1. Applying the spectral whitening filter to the data leads to a signal whose spectrum is equal to 1 for all frequencies. But the data itself still contains whatever signals it did prior to the whitening, but now they are distorted. You can still apply all the usual cycle-frequency estimators (SSCA, FAM) or any of the cycle detectors. If one of my “optimal” detector structures is used post-whitening, I suppose it should be matched to the whitened (distorted) signal, not the original signal. That is, consider which “ideal” spectral correlation function $\displaystyle S_s^\alpha(f)$ should be used in the optimal single-cycle detector.
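Here is a minimal NumPy sketch of the whitening operation itself (my own illustration, not code from the post; the moving-average coloring filter and the block length are arbitrary choices): estimate the PSD by averaging block periodograms, form |H(f)|² = 1/Ŝ(f), and apply H(f) in the frequency domain. Because H is estimated from the same blocks it is applied to, the whitened average periodogram is flat by construction, yet each block still contains the (now distorted) time-domain data.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 65536

# Colored data: white noise through a simple lowpass (moving average)
w = rng.normal(size=N)
x = np.convolve(w, np.ones(8) / 8, mode="same")

# Estimate the PSD by averaging periodograms of nonoverlapping blocks
L = 512
blocks = x[: (N // L) * L].reshape(-1, L)
psd = np.mean(np.abs(np.fft.fft(blocks, axis=1)) ** 2, axis=0) / L

# Whitening filter magnitude: |H(f)| = 1 / sqrt(S_x(f))
H = 1.0 / np.sqrt(np.maximum(psd, 1e-12))

# Apply the filter blockwise in the frequency domain
Xw = np.fft.fft(blocks, axis=1) * H
xw_blocks = np.real(np.fft.ifft(Xw, axis=1))

# The whitened average periodogram is flat (equal to 1 in every bin,
# exactly so here because H was built from these same blocks)
psd_w = np.mean(np.abs(np.fft.fft(xw_blocks, axis=1)) ** 2, axis=0) / L
print(psd_w.min(), psd_w.max())
```

The flat output spectrum is the sense in which the “result is 1”: the PSD is constant, but the underlying signals are still in the data, just linearly distorted by H(f).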

9. Chen says:

I have a question about how to detect cochannel interference when a communication signal is present, assuming the cochannel interference is a cyclostationary signal.
For example, the interferer is a BPSK signal, named i(t), and the desired signal is s(t). Then the hypotheses for this detection can be expressed as
H1: r(t) = s(t) + w(t) + i(t)
H0: r(t) = s(t) + w(t)
where w(t) is AWGN and r(t) is the received data.
Can it be solved by spectral whitening prior to applying the proposed cycle detector? If so, how does that work in detail? If not, are there other cyclostationarity-based detection methods for detecting interference when a communication signal is present?
Thanks a lot!

10. Chen says:

How does one estimate the PDF of each cycle detector's statistic via a detection-statistic histogram? Can a Generalized Extreme Value (GEV) distribution help estimate the PDF?

1. The histograms are estimates of the detection-statistic PDFs (when properly scaled). I have made no attempts to fit the empirical histograms/PDFs to known forms.

1. Chen says:

Please allow me to ask further. I am curious how you calculate Pd and Pfa for each threshold without fitting the histograms to known forms.

1. I compute estimates of $P_D$ and $P_{FA}$ empirically. I store all detection statistics from the Monte Carlo simulation. Then I can process them with whatever threshold I like. If I choose a set of thresholds that spans the full range of values for the stored statistics, I can estimate the receiver operating characteristic.
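The procedure can be sketched in a few lines of Python/NumPy (my own illustration; the Gaussian stand-in distributions below are hypothetical, since in practice the stored statistics come from the detector's Monte Carlo runs):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for stored Monte Carlo detection statistics
# (hypothetical Gaussian distributions for illustration only)
y0 = rng.normal(loc=0.0, scale=1.0, size=10000)   # H0 trials
y1 = rng.normal(loc=1.5, scale=1.0, size=10000)   # H1 trials

# Threshold grid spanning the full range of observed statistics
thresholds = np.linspace(min(y0.min(), y1.min()),
                         max(y0.max(), y1.max()), 200)

# Empirical false-alarm and detection probabilities at each threshold
pfa = np.array([np.mean(y0 > t) for t in thresholds])
pd = np.array([np.mean(y1 > t) for t in thresholds])

# The (pfa, pd) pairs trace out the empirical ROC curve
print(pfa[0], pd[0])   # both near 1 at the lowest threshold
```

No distributional fit is needed: each (Pfa, Pd) point is just a pair of exceedance fractions, and sweeping the threshold across the full range of stored statistics traces the ROC from (1, 1) down to (0, 0).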

11. ANUSHA R says:

Can you give us the code, please?

1. Anusha: Thanks for stopping by. I don’t generally give out code. I’ve written some notes about how to obtain help with your CSP work in this post. If you’re having trouble with an estimator or a detector, I suggest you also read the Comments sections of the relevant posts–perhaps others have had the same problem as you and there are valuable hints to be had. Finally, see my post on CSP for Beginners.

12. Philo says:

Just stopped by, really good blog.

1. In Fig.3 T seems to be 16384? The same issue applies for the following figure.

2. If we are estimating the symbol rate for 16-QAM with a very low roll-off factor (less than 0.03 in recent optical communications), do these methods still work, or do we have to look to higher-order cyclostationary statistics?

1. Thanks for the comment Philo!

1. In Fig.3 T seems to be 16384? The same issue applies for the following figure.

Yes, all the figures that show a block length in the figure title express the block length $T$ in terms of symbols. As stated in the post, the BPSK symbol rate is $1/10$, or $10$ samples per symbol, so a reported block length of $1638$ symbols is $16380$ samples. Really I used $16384$ samples, which rounds to $1638$ symbols.

2. If we are estimating the symbol rate for 16-QAM with a very low roll-off factor (less than 0.03 in recent optical communications), do these methods still work, or do we have to look to higher-order cyclostationary statistics?

As long as the roll-off is not zero, the signal possesses second-order cyclostationarity at the non-conjugate cycle frequency of the symbol rate. However, the feature will be weak. This is important because the quality of the SCF measurement depends on the feature strength through the coherence function. See the coherence post, the resolution-product post, and the resolutions post.

Essentially, for a fixed noise power and fixed signal power, a signal with small roll-off will have a less reliable SCF estimate than a signal with large roll-off. This can, in principle, be overcome by increasing the block-length $T$. But the signal must persist long enough to permit sufficiently large $T$.