In this post we take a look at an alternative CSP estimator created by J. Antoni et al. (The Literature [R152]). The paper describing the estimator can be found here, and you can get some corresponding MATLAB code, posted by the authors, here if you have a MathWorks account.
I should say that I don’t think there is much wrong with the paper–the presented spectral correlation estimator is fast in some settings. But of course that isn’t the whole story.
The Fast Spectral Correlation (FSC) Estimator
The stated motivation for developing the FSC is the high computational cost, and therefore processing delay, of both the FFT accumulation method (FAM) and the time-smoothing method (TSM) of spectral correlation and spectral coherence estimation. The problem isn’t the TSM itself, but the act of putting the TSM in a loop over cycle frequency to find the significant cycle frequencies in some data record. Putting the frequency-smoothing method (FSM) in a loop is also conceptually attractive as a way to find cycle frequencies. As we already know, the SSCA and FAM are much more efficient (though still costly) for cycle-frequency searches than either the FSM-in-a-loop or the TSM-in-a-loop.
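To make the cost of the "in a loop" approach concrete, here is a deliberately naive sketch of the FSM placed in a loop over candidate cycle frequencies. It is my own illustration, not the CSP Blog's implementation; the smoothing width, the cycle-frequency range, and the variable names are arbitrary choices, and normalization and coherence computation are omitted.

% Naive FSM-in-a-loop sketch: for each candidate cycle frequency (in DFT-bin
% units), form a cyclic periodogram from the full-length DFT and smooth it
% across frequency. Assumes the data record is already in the vector x.
X = fft(x(:).');                 % length-N DFT of the data record
N = numel(X);
M = 201;                         % width of the frequency-smoothing window (arbitrary)
g = ones(1, M)/M;                % rectangular smoothing window
amax_bins = round(0.04*N);       % candidate cycle frequencies up to 0.04*Fs (arbitrary)
S = zeros(amax_bins + 1, N);     % spectral correlation estimates, one row per alpha
for a = 0:amax_bins
    % cyclic periodogram: product of the DFT and a frequency-shifted conjugate DFT
    I = X .* conj(circshift(X, [0, -a])) / N;
    % frequency smoothing gives the FSM point estimates at this cycle frequency
    S(a + 1, :) = conv(I, g, 'same');
end

Each pass through the loop costs a full-length product and smoothing operation, so the total cost scales directly with the number of cycle frequencies visited.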
After you read this post, if you are still interested in the FSC, I encourage you to study the source material in [R152]. Here I am going to provide the definition of the FSC mathematically, but I’m not going to rehash all the mathematical development in [R152] that leads to the definition.
Let's suppose our data is represented by the time-series $x(n)$, a discrete-time sequence (typically real-valued for Antoni) with length $N$ samples. The FSC will involve the discrete Fourier transform of a product of short-time Fourier transforms (like those we see in our definition of the TSM). The block length for the short-time Fourier transforms is denoted by $N_w$. The temporal spacing between adjacent $N_w$-sample blocks is denoted by $R$ samples. For example, non-overlapping adjacent blocks use $R = N_w$. Then the FSC is defined as a normalized combination, over the frequency-shift index $p = 0, 1, \ldots, P$, of discrete Fourier transforms, taken over the block index $i$, of the phase-compensated products $X_w(i, f_k) X_w^*(i, f_k - p\Delta f)$, where $X_w(i, f_k)$ is the short-time Fourier transform of the $i$th block of the data, $w(n)$ is a data-tapering window with length $N_w$, $\|w\|^2$ is the energy in the tapering window (which appears in the normalization), $\Delta f = f_s/N_w$ is the frequency spacing of the short-time Fourier transforms, $f_k = k\Delta f$, $N_0$ is the midpoint of the window $w(n)$ (unfortunately named, since $N_0$ is universally used for the spectral density of white Gaussian noise), and the integer $P$ determines the largest frequency shift, and therefore the largest cycle frequency, that the estimator can reach.

Antoni shows that this estimate converges to the spectral correlation function for cycle frequencies that are below a maximum value of approximately $f_s/(2R)$. Since $R \geq 1$, this maximum can never exceed $f_s/2$.
So one can see that the FSC is a Fourier transform of a sequence of cyclic periodograms, or things very similar to cyclic periodograms: products of frequency-shifted Fourier transforms, possibly phase-compensated.
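In symbols, the structure is roughly the following. This is my paraphrase, not Antoni's exact expression; the weighting terms $g_p(\alpha)$, which involve the window $w(n)$ and its midpoint $N_0$, and the overall normalization by $K$, $\|w\|^2$, and $f_s$ are spelled out in [R152]:

$S_x^{\mathrm{FSC}}(\alpha, f_k) \;\propto\; \sum_{p=0}^{P} g_p(\alpha) \sum_{i=0}^{K-1} X_w(i, f_k)\, X_w^*(i, f_k - p\,\Delta f)\, e^{-j 2\pi i R \alpha / f_s},$

where $K$ is the number of length-$N_w$ blocks. The inner sum is the DFT, over the block index $i$, of a product of frequency-shifted short-time Fourier transforms–exactly the kind of quantity described above.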
The Drawback of the FSC
The problem with the FSC is that it is only fast when the maximum considered cycle frequency is a small fraction of the sampling rate. Moreover, the maximum value of cycle frequency for which one can obtain a spectral correlation estimate must be smaller than half the sampling rate.
Small Maximum Cycle Frequency
Antoni's example considers a synthetic (meaning "not captured data" or "simulated") cyclostationary signal sampled at 1 kHz. In the provided implementation and example code, the maximum cycle frequency is set to 40 Hz. The number of processed samples is (very close to) $2^{17} = 131072$, so that the cycle-frequency resolution of any estimator is approximately $0.0076$ Hz.
For the TSM or FSM in a loop over cycle frequency, we'd want to visit cycle frequencies on the interval $[0, f_s/2]$ in steps of about $0.0076$ Hz, or about $65{,}000$ cycle frequencies. But the principal domain of the spectral correlation function is a diamond-shaped region in the $(f, \alpha)$ plane with vertices $(\pm f_s/2, 0)$ and $(0, \pm f_s)$. For the non-conjugate spectral correlation function, some of this principal domain is redundant, but none is for the conjugate spectral correlation function. The traditional one-dimensional principal domain for digital signal processing in the frequency domain ($f \in [-f_s/2, f_s/2)$) is subsumed into this two-dimensional cyclic principal domain. So the stated restriction to a "small" cycle frequency means a great many cycle frequencies potentially exhibited by the data under study will go undetected.
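For the record, the arithmetic behind those counts is simple:

$\Delta\alpha \approx \frac{f_s}{N} = \frac{1000}{131072} \approx 0.0076\ \mathrm{Hz}, \qquad \frac{f_s/2}{\Delta\alpha} = \frac{N}{2} = 65536 \ \text{cycle frequencies in } [0, f_s/2].$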
The restriction on the maximum cycle frequency that says it should be small relative to the sampling rate isn't because the algorithm won't work for larger cycle frequencies; it is because the performance (computational cost) of the FSC is superior to some other methods only when the maximum considered cycle frequency is small. When you specify a larger maximum cycle frequency, the algorithm (as Antoni has coded it) adjusts the block advance $R$ accordingly. But once $R$ becomes too small (the maximum cycle frequency too large), the FSC becomes quite slow compared to alternatives.
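As a concrete illustration, using the $f_s/(2R)$ bound quoted above and rounding to integer block advances, for Antoni's $f_s = 1$ kHz example:

$\alpha_{\max} = 40\ \mathrm{Hz} \Rightarrow R \le \frac{f_s}{2\alpha_{\max}} = 12.5 \Rightarrow R \le 12, \qquad \alpha_{\max} = 200\ \mathrm{Hz} \Rightarrow R \le 2.5 \Rightarrow R \le 2.$

Smaller $R$ means more blocks, and therefore more short-time Fourier transforms and more products to compute.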
And, as we will see, a good implementation of the SSCA or FAM can produce estimates of all significant cycle frequencies in less time than the FSC can produce estimates of significant cycle frequencies for its highly restricted cycle-frequency range!
Limiting the Cycle Frequency to Less Than $f_s/2$
The principal domain is important because it tells us the maximum possible non-conjugate cycle frequency is $f_s$. This isn't an academic point. In the application of CSP to communication signals, an important cycle frequency can easily exceed $f_s/2$ and be near $f_s$. An example (which we look at below) is a PSK or QAM signal that is only slightly oversampled (the sample rate is slightly greater than the occupied bandwidth). Unlike the restriction based on desiring very fast computation, this restriction is a must–Antoni's code will not run if the maximum cycle frequency is set equal to $f_s/2$ or higher.
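For concreteness, consider a hypothetical example (these numbers are illustrative, not the parameters of the signal in Figure 1): a BPSK signal with bit rate $f_{\mathrm{bit}} = 0.8 f_s$, that is, only $1.25$ samples per bit. Its principal non-conjugate cycle frequency is the bit rate itself,

$\alpha = f_{\mathrm{bit}} = 0.8\, f_s > \frac{f_s}{2},$

so the most important non-conjugate feature of the signal is simply out of reach for the FSC.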
This restriction complicates the application of the FSC to communication signals and RF scenes. And, to be fair, that is not Antoni’s interest–he is looking at the link between a fault in a mechanical system that involves rotating parts (for example, bearings) and the cyclostationarity of signals associated with the mechanical system. Imminent faults may be diagnosable earlier by using CSP compared to traditional fault-detection methods in that field.
A common situation for us in communication-signal processing (and that is the stated area of interest of the CSP Blog: Understanding and Using the Statistics of Communication Signals) is processing a complex time-series that arises from the detection of a signal, or signals, in some frequency band of interest using energy detection or spectrum estimation methods. For example, Figure 1 shows the PSD for a simulated BPSK signal in noise that has been extracted from a more wideband scene.
Figure 1. Power spectrum estimate for a simulated BPSK signal in noise, extracted from a more wideband scene.
Antoni's code won't be useful for data like that in Figure 1, because the unknown cycle frequency is much greater than half the sampling rate. However, we can easily apply the FAM, SSCA, FSM, or TSM to this data to find all its cycle frequencies. The elapsed time, mirroring the number of required multiplications, will vary considerably over these four basic spectral-correlation estimators. The results of applying the SSCA are shown in Figure 2. Processing this high-rate BPSK signal with a straightforward (no parallelization, algorithm hacks, etc.) non-conjugate SSCA takes about 0.6 seconds.
Figure 2. SSCA-based spectral correlation results for the BPSK signal of Figure 1.
But let’s get to the point. In the following, I provide timing results for the SSCA, FAM, FSM-in-a-loop, TSM-in-a-loop, and the FSC for Antoni’s test input. I also show some output results to give the reader some confidence that the elapsed times I’m presenting correspond to correct estimator outputs.
Timing Tests
To reset, we are going to use the signal generated by Antoni's sample FSC code, which has cycle frequencies of 10 and 20 Hz. It is real-valued data. It is not a communication signal. Here is the code for generating the signal:
% Synthesis of an elementary cyclostationary signal
% =================================================
Fs = 1e3; % sampling frequency (in Hz)
L = 131072; % signal length (number of samples)
f0 = .01*Fs; % cycle frequency (in Hz)
x = randn(L,1);
% filtration by a resonance
a = [1 -2*cos(2*pi*.2)*.9 .9^2];
x = filter(1,a,x);
% periodic amplitude modulation
x = x.*(1 + sin(2*pi*(0:L-1)'*f0/Fs));
% addition of white noise (SNR = 0dB)
x = x + std(x)*randn(L,1);
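Why 10 and 20 Hz? The code multiplies a (stationary) filtered-noise signal by $1 + \sin(2\pi f_0 n/F_s)$ with $f_0 = 10$ Hz, and the lag product of that multiplicative factor contains additive sine-wave components, as a function of time, at both $f_0$ and $2f_0$:

$\bigl(1 + \sin\phi_t\bigr)\bigl(1 + \sin\phi_{t+\tau}\bigr) = 1 + \sin\phi_t + \sin\phi_{t+\tau} + \tfrac{1}{2}\cos(2\pi f_0 \tau) - \tfrac{1}{2}\cos(\phi_t + \phi_{t+\tau}), \qquad \phi_t = 2\pi f_0 t,$

so the second-order cyclostationarity of the modulated signal shows up at cycle frequencies of 10 and 20 Hz.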
We are going to process 131072 samples and the sampling rate is 1 kHz. The spectral resolution is approximately the same for all of the estimators, and of course the cycle-frequency resolution is fixed at approximately 0.0076 Hz. The elapsed time includes the time required to compute the spectral correlation and the spectral coherence. For the SSCA and FAM, it also includes the time required to apply a coherence-based threshold and perform cycle-frequency binning to yield a minimal set of detected cycle frequencies.
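The elapsed times can be gathered with nothing more elaborate than tic/toc around each estimator call. The function names below are hypothetical placeholders, not the actual names in Antoni's posted code or in my implementations:

% Hypothetical timing harness -- fast_sc_estimate and ssca_estimate are
% placeholder names for Antoni's code and an SSCA implementation, respectively.
alpha_max = 40;                                       % maximum visited cycle frequency for the FSC (Hz)
tic;
[S_fsc, C_fsc] = fast_sc_estimate(x, Fs, alpha_max);  % spectral correlation + coherence
t_fsc = toc;
tic;
[S_ssca, C_ssca, cf_list] = ssca_estimate(x, Fs);     % SC + coherence + thresholding/binning
t_ssca = toc;
fprintf('FSC: %.2f s, SSCA: %.2f s\n', t_fsc, t_ssca);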
Algorithm | Maximum Cycle Frequency (Hz) | Author | Language | Elapsed Time (s) |
---|---|---|---|---|
FSC | 40 | Antoni | MATLAB | 0.54 |
FSC | 200 | Antoni | MATLAB | 26.55 |
SSCA | all | CSP Blog | C | 0.40 |
SSCA | all | CSP Blog | MATLAB | 1.74 |
FAM | all | CSP Blog | MATLAB | 9.16 (*) |
FSM | 40 | CSP Blog | C | 4.50 |
TSM | 40 | CSP Blog | C | 21.69 |
FSM | 200 | CSP Blog | C | 20.50 |
TSM | 200 | CSP Blog | C | 111.16 |

Table 1. Elapsed times for the spectral correlation estimators applied to Antoni's example signal.
The FSC is indeed faster than the FSM-in-a-loop and the TSM-in-a-loop provided that the maximum visited cycle frequency is a very small fraction of the sampling rate (here 4%). However, we can estimate all significant cycle frequencies in less time than the restricted-CF FSC by simply using the SSCA. Moreover, once the maximum cycle frequency is increased to a still-small fraction of the sampling rate equal to 20%, the FSC is no longer faster than the FSM-in-a-loop.
As promised, here are some results that confirm that the estimators are all working. First, let’s look at the power spectrum estimates produced by several of the estimators in Figure 3.
Figure 3. Power spectrum estimates produced by several of the estimators.
Now let's look at the cyclic-domain profiles (CDPs) produced by the FSC, TSM, and FSM for the case where the maximum visited cycle frequency is set to 40 Hz. The full CDPs are shown in Figure 4 and a zoomed-in version is shown in Figure 5.
Figure 4. Cyclic-domain profiles for the FSC, TSM, and FSM with maximum visited cycle frequency 40 Hz.
Figure 5. Zoomed-in version of the cyclic-domain profiles in Figure 4.
Finally, the CDPs for the three algorithms and a maximum visited cycle frequency of 200 Hz are shown in Figure 6.
Figure 6. Cyclic-domain profiles for the FSC, TSM, and FSM with maximum visited cycle frequency 200 Hz.
Figures 4-6 give us good confidence that the elapsed-time numbers in Table 1 are fair to the three algorithms.
Turning to the FAM and SSCA, Figure 7 shows the CDPs produced by the SSCA and FAM for Antoni’s signal.
Figure 7. Cyclic-domain profiles produced by the SSCA and FAM for Antoni's signal.
Finally, all these algorithms can achieve smaller elapsed times by employing some form of parallelism. And some can achieve smaller elapsed times by employing algorithmic modifications that do not significantly reduce the probability of cycle-frequency detection for many input signal types of interest. As a small hint of this kind of development, I employed some parallelism for the SSCA and also employed strip decimation or selection. This just means that only some of the strips are computed (see the SSCA post for more details about strips). The results are shown in Figure 8.
The number of parallel threads is indicated by an integer in Figure 8. Since the input data is real, the spectral correlation function over the first quadrant of the $(f, \alpha)$ plane determines the function over the other three quadrants–those three quadrants need not be considered in the selection of the strips to compute. Additionally, we can choose to visit only a regularly decimated subset of the strips ("strip decimation"). If the signal in the data is sufficiently wideband relative to the sampling rate, we have a low probability of missing its features if we decimate; a sketch of the idea follows.
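Here is a rough, unoptimized sketch of SSCA-style strip processing with strip decimation. It is for illustration only and is not my C or MATLAB implementation; the window choice, the absence of channelizer phase compensation, and the function name ssca_strips_sketch are all illustrative.

function [cdp, alpha] = ssca_strips_sketch(x, Fs, Np, D)
% Rough SSCA-style strip processing with strip decimation (illustration only).
% Normalization and channelizer-delay phase compensation are omitted, and only
% a crude cyclic-domain profile (max over each strip for each alpha) is kept.
x = x(:).';                                    % row vector
N = numel(x);
n = 0:N-1;
win = 0.54 - 0.46*cos(2*pi*(0:Np-1)/(Np-1));   % Hamming-type channelizer taper
cdp = zeros(1, N);                             % cyclic-domain-profile accumulator
alpha = (0:N-1)*Fs/N;                          % cycle-frequency grid on [0, Fs)
for k = 0:D:Np-1                               % visit only every Dth strip ("strip decimation")
    fk = (k - Np/2)*Fs/Np;                     % center frequency of this strip
    d = x .* exp(-1j*2*pi*fk*n/Fs);            % downconvert the strip to baseband
    XT = filter(win, 1, d);                    % sliding windowed DFT (complex demodulate)
    g = XT .* conj(x);                         % SSCA product sequence
    G = abs(fft(g))/N;                         % N-point transform over the time index
    % bin q of G corresponds to cycle frequency alpha = fk + q*Fs/N (mod Fs)
    G = circshift(G, [0, round(mod(fk, Fs)*N/Fs)]);
    cdp = max(cdp, G);
end
end

Calling this with D = 2 or D = 4 visits only half or a quarter of the strips, which is where the elapsed-time savings in Figure 8 come from.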
Figure 8 shows we can reduce the SSCA run time to 0.13 seconds using these tricks, which are highly applicable to Antoni’s signal.
Figure 8. SSCA elapsed times for various combinations of parallelism and strip decimation.
Conclusions
Antoni et al. have developed the fast spectral correlation (FSC) estimator in The Literature [R152]. This estimator resembles the Fourier transform of a sequence of cyclic periodograms. A major drawback of the FSC is that there is always a restriction on the largest cycle frequency that can be visited by the algorithm. In all cases, the maximum visited cycle frequency must be less than half the input-data sampling rate. However, for many applications of interest to the CSP Blog and its readers, signals exhibit significant cycle frequencies much larger than half the sampling rate. (Cycle frequencies can be as large as the sampling rate itself.) A related drawback is that the maximum visited cycle frequency must also be small relative to half the sampling rate for the FSC to be faster than obvious alternatives like the FSM and TSM. We find that if the maximum visited cycle frequency is 4% of the sampling rate, the FSC is significantly faster than the FSM, but when this maximum is increased to a still-modest 20% of the sampling rate, the FSM is faster.
A properly implemented strip spectral correlation analyzer (SSCA) algorithm is faster than any of the other algorithms considered here (FSC, FSM, TSM, FAM), and there is no restriction on the cycle frequency–all cycle frequencies will be found. That is, the estimation of the spectral coherence function over the entire $(f, \alpha)$ principal domain, application of a threshold, and cycle-frequency consolidation is faster than the FSC applied to a mere 4% of the possible cycle frequencies.
Hi Chad,
This entire post seems disingenuous. Stating that your (unreleased) code is faster than something available for public inspection and execution seems like exactly the sort of claims without substance that you rail against constantly in the deep learning academic literature.
Simultaneously, you complain that deep learning methods are braindead and that your methods are superior, without releasing code that might allow people who are experts at deep learning (but not cyclostationary mathematics) to make a significant contribution.
I mean, it’s your IP, and you’re free to do what you want, but maybe ease up on the repeated theme of “my method is superior, and no you can’t have it” while ripping into other people’s work.
Thanks for stopping by the CSP Blog, Steve!
disingenuous: “not candid or sincere, typically by pretending that one knows less about something than one really does.” What do you think I’m pretending about?
It is the algorithm (FAM, SSCA) that is fast. The algorithms are published in many places. You can even get a version of the SSCA from Mathworks. Is the unavailability of my implementation really stopping anyone from pursuing CSP? I think several hundred of the comments on the Blog argue against that idea.
My main criticism of ML applied to modulation recognition is that the various authors pay very little attention to the input data. It is often quite flawed. Criticizing a machine learner for crowing about performance numbers when the inputs used do not correspond to their stated labels doesn’t seem similar to saying my implementation of a publicly available algorithm is faster than another publicly available algorithm.
I did post a giant data set, with CSP MR results, to encourage a public comparison of ML and CSP methods of MR.
What I’ve been pushing back against is the loudly proclaimed high superiority of ML for MR based on highly flawed input data sets, and the related comparisons to “conventional methods” that are really just throwing some collection of features into a neural network. Look around. You’ll find that many people think MR is “solved” due to one paper that uses a data set that nobody cared to look into until my posts. You’ve got it reversed. I’m just pushing back against a lot of unevidenced claims.
Do you not believe in engineers and scientists publicly criticizing each others’ work? Peer review? You say ‘ripping’ like what I’m doing isn’t backed up with evidence. Where have I erred in my critical analyses? Do you have anything substantive to say? I’m genuinely curious.
tl;dr: Antoni’s SCF estimator is indeed fast and accurate. But only fast if you severely restrict the cycle frequencies you want answers for. If you can’t do that, you’re better off with published algorithms like the SSCA and FAM, which I did not invent. I believe this conclusion, backed up by credible evidence, is of some value to some people.
“…experts at deep learning (but not cyclostationary mathematics)…”
There’s a bit of irony here that I won’t try to elucidate, but it should go without saying that if you shy away from mathematics you limit your credibility as an engineer (or even a computer scientist).
Chad: I appreciate the analysis.
Thanks,
John
Hi,
A year later Antoni released a new paper implementing a significant improvement over the paper that you present. I think it would be only fair to refer to the new paper.
Gideon:
Thanks! Yes, that would be fair. I looked at the paper I did because several colleagues and friends sent that specific paper to me over the past two years. I guess we all missed the update.
Gideon is talking about "A faster algorithm for the calculation of the fast spectral correlation," by Borghesani and Antoni, which is The Literature [R172]. I've got a copy (thanks Gideon!) and I'll post an update when I can.
I look forward to reading this one, but I won’t trust myself to spot the gotchas and subtleties Chad will. Many thanks in advance to Chad for his analysis.
In the meantime, does anybody know if Antoni or Borghesani put out updated Matlab code? I just found some (link below) from Marco Buzzoni that cites Antoni’s new paper, but nothing popped up in Google from the authors themselves.
Matlab Code: https://www.mathworks.com/matlabcentral/fileexchange/71457-cyclic-modulation-spectrum
I’ve posted some thoughts on the paper you mentioned.