CSP Estimators: The FFT Accumulation Method

Let’s look at another spectral correlation function estimator: the FFT Accumulation Method (FAM). This estimator is in the time-smoothing category, is exhaustive in that it is designed to compute estimates of the spectral correlation function over its entire principal domain, and is efficient, so that it is a competitor to the Strip Spectral Correlation Analyzer (SSCA) method. I implemented my version of the FAM based on the paper by Roberts et al (The Literature [R4]). If you follow the equations closely, you can successfully implement the estimator from that paper. The tricky part, as with the SSCA, is correctly associating the outputs of the coded equations with their proper \displaystyle (f, \alpha) values.

We’ll also implement a coherence computation in our FAM, and use it to automatically detect the significant cycle frequencies, just as we like to do with the SSCA. Finally, we’ll compare outputs between the SSCA and FAM spectral-correlation and spectral-coherence estimation methods. The algorithms’ implementations are not without issues and mysteries, and we’ll point them out too. Please leave corrections, comments, and clarifications in the comment section.

Definition of the FFT Accumulation Method

The method produces a large number of point estimates of the cross spectral correlation function. In [R4], the point estimates are given by

\displaystyle S_{xy_T}^{\alpha_i + q\Delta\alpha} (rL, f_j)_{\Delta t} = \sum_{r} X_T(rL, f_k) Y_T^* (rL, f_l) g_c(n-r) e^{-i2\pi r q /P} \hfill (1)

where the complex demodulates are given by

\displaystyle X_T(n,f) = \sum_{r=-N^\prime/2}^{N^\prime/2} a(r) x(n-r) e^{-i 2 \pi f(n-r)T_s} \hfill (2)

Equation (2) here is Equation (2) in [R4]. I think the sum should run over N^\prime samples rather than the N^\prime + 1 samples implied by its limits, so I use

\displaystyle X_T(n,f) = \sum_{r=-N^\prime/2}^{N^\prime/2-1} a(r) x(n-r) e^{-i 2 \pi f(n-r)T_s} \hfill (3)

In (1), the function g_c(n) is a data-tapering window, which is commonly taken to be a unit-height rectangle (and therefore no actual multiplications are needed), and in (2), the function a(r) is another tapering window, which is often taken to be a Hamming window (can be generated using MATLAB’s hamming.m).

The sampling rate is \displaystyle f_s = 1/T_s. In (2) and (3), \displaystyle T = N^\prime T_s. The channelizer (short-time hopped) Fourier transforms’ tapering window \displaystyle a(r) has width \displaystyle T_s N^\prime, and the output (long-time) Fourier transforms’ tapering window \displaystyle g_c(r) has width NT_s, which is the length of the data-block that is processed.

So the FAM channelizes the input data using short Fourier transforms of length \displaystyle N^\prime, which are hopped in time by L samples. This results in a sequence of transforms that has length

\displaystyle P = N/L \hfill (4)

where here I am assuming that both N and L are powers of two, with L \ll N. Therefore, the length of the output Fourier transforms is P.

For (1), [R4] defines the cycle-frequency resolution as

\displaystyle \Delta\alpha = 1/N \hfill (5)

in normalized-frequency units. Finally, the point estimate is associated with the cycle frequency \displaystyle \alpha_i and the spectral frequency \displaystyle f_j, which are defined in terms of the spectral components involved in the output transform:

\displaystyle \alpha_i = f_k - f_l \hfill (6)


\displaystyle f_j = (f_k + f_l)/2 \hfill (7)

Basic Steps in Implementing the FAM in Software

Step 1: Find and Arrange the \displaystyle N^\prime-Point Data Subblocks

Thinking in terms of MATLAB coding, we’d like to perform as many vector or matrix operations as possible, and as few for-loop operations as possible. So when we extract our blocks of N^\prime samples, sliding along by L samples, we can place each one in a column of a matrix:


Note that to achieve the full set of P blocks, we’ll need to add a few zeroes to the end of the input x(t). So now we have a matrix with N^\prime rows and P columns.
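To make Step 1 concrete, here is a sketch in Python/NumPy (the post describes MATLAB, but the indexing logic is identical); the function name `subblock_matrix` and the ceil-based zero-padding are my illustrative choices:

```python
import numpy as np

def subblock_matrix(x, Np, L):
    """Arrange overlapping length-Np subblocks of x, hopped by L samples,
    as the columns of an (Np x P) matrix, zero-padding the tail of x so
    that the final block is complete (P = N/L when L divides N = len(x))."""
    P = int(np.ceil(len(x) / L))
    pad = (P - 1) * L + Np - len(x)
    xp = np.concatenate([x, np.zeros(pad, dtype=x.dtype)])
    # Column r holds samples x[r*L : r*L + Np]
    return np.stack([xp[r * L : r * L + Np] for r in range(P)], axis=1)
```

For example, with N = 40, N' = 8, and L = 2, this yields P = 20 columns, the last few of which include the appended zeros.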

Step 2: Apply Data-Tapering Window to Subblocks

We will be Fourier transforming each column of the data-block matrix from Step 1, but before that we’ll apply the channelizer data-tapering window called a(\cdot) above. Let’s pick the Hamming window, available in MATLAB as the m-file function hamming.m. Let’s denote this particular choice for a(\cdot) by h(\cdot). Each column of our data-block matrix needs to be multiplied by a Hamming window with length N^\prime:
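Here is a sketch of Step 2 in Python/NumPy (NumPy's `np.hamming` is the symmetric window, matching the default behavior of MATLAB's hamming.m); the function name is mine:

```python
import numpy as np

def apply_window(blocks):
    """Multiply each length-Np column of the subblock matrix by a
    symmetric Hamming window h(.), broadcast across the P columns."""
    Np = blocks.shape[0]
    return blocks * np.hamming(Np)[:, None]
```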


Step 3: Apply Fourier Transform to Windowed Subblocks

Next, apply the Fourier transform to each column. This is easy in MATLAB with fft.m, but there is a complication. The relative delay that exists between each of the blocks in the matrix is lost when fft.m is applied to each column. That is, (2) above is not exactly computed; the phase relationship between the transforms is modified through the use of fft.m, which we want to use for computational efficiency.


So after the FFT is applied to the data blocks, they need to be phase-shifted. The phase shift for a particular element depends on the frequency f_j and the time index rL. This is similar to what we do in the time-smoothing method of spectral correlation estimation; see this post for details.
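In Python/NumPy, the per-column FFT plus the compensating phase shift can be sketched as follows: the factor e^{-i2\pi k rL/N^\prime} restores the e^{-i 2\pi f n T_s} term of (2)-(3) that a bare FFT omits, using normalized channel frequencies f_k = k/N^\prime and block start time n = rL. The centered bin ordering via fftshift is my convention:

```python
import numpy as np

def channelize(blocks, L):
    """FFT each windowed column and restore the absolute-time phase.

    Column r starts at sample n = r*L, but fft() computes each transform
    as if its block started at time 0; multiplying bin k of column r by
    exp(-1j*2*pi*k*r*L/Np) recovers the complex demodulate X_T(rL, f_k)."""
    Np, P = blocks.shape
    XT = np.fft.fft(blocks, axis=0)            # bins k = 0..Np-1 down each column
    k = np.arange(Np)[:, None]
    r = np.arange(P)[None, :]
    XT = XT * np.exp(-2j * np.pi * k * r * L / Np)  # phase shift for start time rL
    # Center the frequency axis so the bin order runs -Np/2 .. Np/2-1
    return np.fft.fftshift(XT, axes=0)
```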

Do the same thing for the input y(t). If x(t) = y(t), then don’t bother repeating the computation; otherwise, do so:


Step 4: Multiply Channelized Subblocks Together and Fourier Transform

Looking back at (1), we now need to multiply (elementwise) one row from the X matrix by one row from the Y matrix, conjugating the latter. This will result in a vector of P complex values, which can then be transformed using the FFT. For example, below I’ve boxed the X values for f_k = f_1 and the Y values for f_l = f_2.
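One such channel-pair computation can be sketched in NumPy as below, assuming a unit-height rectangular g_c(\cdot) so that no output taper is applied; the function name is mine:

```python
import numpy as np

def fam_row_product(XT, YT, k, l):
    """One channel pair of Eq. (1): elementwise product of channel k of X
    with the conjugate of channel l of Y, then a length-P FFT over the
    block-time index r (rectangular g_c assumed, so no taper is applied)."""
    z = XT[k, :] * np.conj(YT[l, :])        # length-P sequence over block time rL
    return np.fft.fftshift(np.fft.fft(z))   # P fine cycle-frequency bins q, centered
```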



Step 5: Associate Each Fourier Transform Output with the Correct \displaystyle (f, \alpha)

According to (1), the P values that arise from the Fourier transform of the channelizer product vector correspond to the P cycle frequencies

\displaystyle \alpha_i + q\Delta\alpha \hfill (8)

where \displaystyle \Delta\alpha and \displaystyle \alpha_i are defined in (5) and (6), and q ranges over P integers. This association leads to values in the familiar diamond-shaped principal domain of the spectral correlation function; any values that do not lie in that region can be discarded. So at this point, we have a large number of spectral correlation function point estimates for frequencies in the (normalized) range \displaystyle [-0.5, 0.5) and cycle frequencies in the range \displaystyle [-1.0, 1.0).
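The (f, \alpha) bookkeeping of (5)-(8) can be sketched as follows, with centered channel indices k, l in [-N'/2, N'/2) and centered fine-bin index q in [-P/2, P/2); the index conventions are my assumptions:

```python
import numpy as np

def fam_freqs(k, l, q, Np, P, L):
    """Map centered channel indices (k, l) and fine bin q to normalized
    (f, alpha) per Eqs. (5)-(8), with N = P*L total processed samples."""
    fk = k / Np                   # channel center frequency f_k
    fl = l / Np                   # channel center frequency f_l
    N = P * L
    f = (fk + fl) / 2.0           # Eq. (7)
    alpha = (fk - fl) + q / N     # Eq. (6) plus fine offset q*dalpha, dalpha = 1/N
    return f, alpha
```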

Extension to the Conjugate Spectral Correlation Function

If \displaystyle x(t) = y(t), then \displaystyle Y_T(t,f) = X_T(t,f), and the estimate produced by (1) corresponds to the (auto) non-conjugate spectral correlation function. If \displaystyle y(t) = x^*(t), then the estimate corresponds to the conjugate spectral correlation function. Otherwise, it is a generic cross spectral correlation function. The extension to the conjugate spectral correlation function is that easy! It’s only a little more complicated to extend the conjugate spectral correlation function to the conjugate coherence than it is to extend the non-conjugate spectral correlation to the non-conjugate coherence.
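In code, the extension really is just the choice of the second input; a trivial sketch (the mode names are mine):

```python
import numpy as np

def scf_inputs(x, mode):
    """Select the second input y(t) that determines which spectral
    correlation function the cross estimate (1) produces."""
    if mode == "non-conjugate":
        return x, x               # yields S_x^alpha(f)
    elif mode == "conjugate":
        return x, np.conj(x)      # yields S_{x*}^alpha(f)
    raise ValueError(mode)
```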

Extension to Coherence

Recall that the spectral coherence function, or just coherence, is defined for the non-conjugate spectral correlation function by

\displaystyle C_x^\alpha(f) = \frac{S_x^\alpha(f)}{\left[ S_x^0(f+\alpha/2)S_x^0(f-\alpha/2) \right]^{1/2}} \hfill (9)

The conjugate coherence is given by

\displaystyle C_{x^*}^\alpha(f) = \frac{S_{x^*}^\alpha(f)}{\left[ S_x^0(f+\alpha/2) S_x^0(\alpha/2 - f) \right]^{1/2}} \hfill (10)

To compute estimates of the coherence, then, go through the spectral correlation estimates one by one, find the associated spectral frequency f and cycle frequency \alpha from Step 5, and then use a PSD estimate to find the corresponding two PSD values that form the normalization factor. I typically use a side estimate of the PSD that is highly oversampled so it is easy to find the required PSD values for any valid combination of spectral frequency f and cycle-frequency shift \pm\alpha/2. The frequency-smoothing method is a good choice for creating such PSD estimates.
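A sketch of that normalization for the non-conjugate coherence (9), assuming the side PSD estimate is available on a dense (oversampled) frequency grid and using nearest-bin lookup (a simplification of mine):

```python
import numpy as np

def coherence(S_alpha, f, alpha, psd_freqs, psd_vals):
    """Normalize one non-conjugate SCF point estimate S_alpha at (f, alpha)
    per Eq. (9), using nearest-bin lookups into an oversampled PSD estimate."""
    def psd_at(freq):
        # Nearest bin of the dense side PSD estimate
        return psd_vals[np.argmin(np.abs(psd_freqs - freq))]
    denom = np.sqrt(psd_at(f + alpha / 2.0) * psd_at(f - alpha / 2.0))
    return S_alpha / denom
```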

The coherence is especially useful for automatic detection of significant cycle frequencies in a way that is invariant to signal and noise power levels, as described in the comments of the SSCA post.


Let’s look at the output of the FAM I’ve implemented with an eye toward comparing to the strip spectral correlation analyzer and (of course!) to the known spectral correlation surface for our old friend the rectangular-pulse BPSK signal.

Rectangular-Pulse BPSK (Textbook Signal)

First, let’s review the theoretical spectral correlation function for a rectangular-pulse BPSK signal with independent and identically distributed bits, ten samples per bit, and a carrier offset of 0.05:


The signal exhibits non-conjugate cycle frequencies that are multiples of the bit rate, or \displaystyle \alpha = kf_{bit}, k = 0, \pm 1, \pm 2, \ldots, which for \displaystyle f_{bit} = 0.1 is the set \displaystyle \{0, \pm 0.1, \pm 0.2, \ldots\}. Due to symmetry considerations, we ignore the negative non-conjugate cycle frequencies in our plots.

It also exhibits the conjugate cycle frequencies that are the non-conjugate cycle frequencies plus the doubled carrier 2f_c = 2(0.05) = 0.1. The shape of the conjugate spectral correlation function for \alpha = 2f_c is the same as that for the non-conjugate spectral correlation function for \alpha = 0 (the PSD).

Let’s start the progression of FAM results for rectangular-pulse BPSK with the FAM power spectrum estimate together with a TSM-based PSD estimate for comparison:



Both PSD estimates look like what we expect for the signal, but you can see a small regular ripple in the FAM estimate, which is not in the TSM estimate, and which we know is not a true feature of the PSD for the signal. So that is a mystery I’ve not yet solved. We’ll see, though, that overall the FAM implementation I’ve created compares well to the SSCA outputs in terms of cycle frequencies, spectral correlation magnitudes, and spectral coherence magnitudes.

Next, I want to show the FAM-based non-conjugate and conjugate spectral correlation surfaces. Let’s first mention the processing parameters:

  • \displaystyle N = 32768
  • \displaystyle N^\prime = 32
  • \displaystyle L = N^\prime / 4
  • \displaystyle P_{FA} = 10^{-9}

The latter parameter is used, together with N and N^\prime, to compute a threshold for the coherence function. Only those point estimates that correspond to a coherence magnitude that exceeds the threshold are included in the following FAM spectral correlation surface plots:



So the FAM surfaces agree with the ideal surfaces in terms of the known cycle frequency values and the variation over spectral frequency f for each coherence-detected \alpha. In other words, it works.

In the following graphs, I show the cyclic domain profiles for the FAM and for the SSCA for comparison:


Finally, here are plots of only the coherence-threshold detected cycle frequencies, which is a typical desired output in CSP practice:


Only true cycle frequencies are detected by both algorithms. The average values for spectral correlation and coherence for all the other cycle frequencies are about the same between the FAM and the SSCA. Two anomalies are worth mentioning. The first is that the SSCA produces a coherence significantly greater than one for the doubled-carrier conjugate cycle frequency of 0.1. The second is that the FAM produces a few false peaks in the spectral correlation function (near 0.6, 0.7, and 0.8). These all have coherence magnitudes that do not exceed threshold, so they don’t end up getting detected and don’t appear in the later plots. I don’t yet know the origin of these spurious spectral correlation peaks, and if you have an idea about it, feel free to leave a comment below.

Captured DSSS BPSK

Let’s end this post by showing FAM and SSCA results for a non-textbook signal, a captured DSSS BPSK signal. Recall that DSSS BPSK has many cycle frequencies, both non-conjugate and conjugate, and that the number of cycle frequencies increases as the processing gain increases. See the DSSS post and the SCF Gallery post for more details and examples.

In this example, the sampling rate is arbitrarily set to 5 MHz, and the number of processed samples is N = 65536.



Comments

  1. Mirko von Leipzig says:

    I’m curious if choosing different window functions would help alleviate the differences between FAM and SSCA. I know that common practice is to use one window to determine spectral peak location, and a completely different one to accurately determine actual peak power.

    A Hann/Hamming window is a middle ground window AFAIK – a bit of both worlds.

    • That sounds right to me. I actually use a different window on the “channelizer” Fourier transforms in the SSCA than I do in the FAM. I should redo a couple of those FAM/SSCA comparison calculations using a couple different windows–and show a couple where the two algorithms use the same window. Thanks Mirko!
