Let’s look at another spectral correlation function estimator: the FFT Accumulation Method (FAM). This estimator is in the time-smoothing category; it is exhaustive, in that it is designed to compute estimates of the spectral correlation function over its entire principal domain; and it is computationally efficient, so that it is a competitor to the Strip Spectral Correlation Analyzer (SSCA) method. I implemented my version of the FAM by using the paper by Roberts *et al.* (The Literature [R4]). If you follow the equations closely, you can successfully implement the estimator from that paper. The tricky part, as with the SSCA, is correctly associating the outputs of the coded equations with their proper values.

We’ll also implement a coherence computation in our FAM, and use it to automatically detect the significant cycle frequencies, just as we like to do with the SSCA. Finally, we’ll compare outputs between the SSCA and FAM spectral-correlation and spectral-coherence estimation methods. The algorithms’ implementations are not without issues and mysteries, and we’ll point them out too. Please leave corrections, comments, and clarifications in the comment section.

### Definition of the FFT Accumulation Method

The method produces a large number of point estimates of the cross spectral correlation function. In [R4], the point estimates are given by

$$S_{xy}^{f_k - f_l + q\Delta\alpha}(nL, f_j)_{\Delta t} = \sum_{r} X_T(rL, f_k)\, Y_T^*(rL, f_l)\, g_c(n - r)\, e^{-i 2\pi r q / P}, \tag{1}$$

where the complex demodulates are given by

$$X_T(n, f_k) = \sum_{r = -N/2}^{N/2 - 1} a(r)\, x(n - r)\, e^{-i 2\pi f_k (n - r) T_s}. \tag{2}$$

Equation (2) here is Equation (2) in [R4]. I think it should have a sum over $N^\prime$ samples, rather than $N$,

$$X_T(n, f_k) = \sum_{r = -N^\prime/2}^{N^\prime/2 - 1} a(r)\, x(n - r)\, e^{-i 2\pi f_k (n - r) T_s}. \tag{3}$$

In (1), the function $g_c(\cdot)$ is a data-tapering window, which is commonly taken to be a unit-height rectangle (and therefore no actual multiplications are needed), and in (2), the function $a(\cdot)$ is another tapering window, which is often taken to be a Hamming window (can be generated using MATLAB’s hamming.m).

The sampling rate is $f_s = 1/T_s$. In (2) and (3), $X_T$ denotes a transform of duration $T = N^\prime T_s$. The channelizer (short-time hopped) Fourier transforms’ tapering window $a(\cdot)$ has width $N^\prime$ samples, and the output (long-time) Fourier transforms’ tapering window $g_c(\cdot)$ has width $N$ samples, which is the length of the data-block that is processed.

So the FAM channelizes the input data using short Fourier transforms of length $N^\prime$, which are hopped in time by $L$ samples. This results in a sequence of transforms that has length

$$P = \frac{N}{L},$$

where here I am assuming that both $N$ and $N^\prime$ are dyadic integers, with $L = N^\prime/4$. Therefore, the length of the output Fourier transforms is $P = N/L$.
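These sizing relationships are just integer arithmetic; here is a quick Python sketch with illustrative values of my own choosing (not parameters taken from the results later in the post):

```python
# Illustrative FAM sizing; the particular numbers are my own example.
N = 8192          # total data-block length (output transform window)
Np = 32           # channelizer FFT length (N' in the text)
L = Np // 4       # hop between channelizer blocks (a common choice)
P = N // L        # number of channelizer transforms = output FFT length
dalpha = 1 / N    # cycle-frequency resolution in normalized units
print(P)          # 1024
```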

For (1), [R4] defines the cycle-frequency resolution as

$$\Delta\alpha = \frac{1}{N} \tag{4}$$

in normalized-frequency units. Finally, the point estimate is associated with the cycle frequency $\alpha_i$ and the spectral frequency $f_j$, which are defined in terms of the spectral components involved in the output transform:

$$\alpha_i = f_k - f_l \tag{5}$$

and

$$f_j = \frac{f_k + f_l}{2}. \tag{6}$$

### Basic Steps in Implementing the FAM in Software

#### Step 1: Find and Arrange the $N^\prime$-Point Data Subblocks

Thinking in terms of MATLAB coding, we’d like to perform as many vector or matrix operations as possible, and as few for-loop operations as possible. So when we extract our blocks of $N^\prime$ samples, sliding along by $L$ samples, we can place each one in a column of a matrix:

$$\mathbf{X} = \begin{bmatrix} x(0) & x(L) & \cdots & x((P{-}1)L) \\ x(1) & x(L{+}1) & \cdots & x((P{-}1)L{+}1) \\ \vdots & \vdots & & \vdots \\ x(N^\prime{-}1) & x(L{+}N^\prime{-}1) & \cdots & x((P{-}1)L{+}N^\prime{-}1) \end{bmatrix}$$

Note that to achieve the full set of $P$ blocks, we’ll need to add a few zeroes to the end of the input $x(n)$. So now we have a matrix with $N^\prime$ rows and $P$ columns.
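Here is one way Step 1 might be coded, sketched in Python/NumPy rather than MATLAB (the function name and zero-padding details are my own choices):

```python
import numpy as np

# Step 1 sketch: slide along x in hops of L samples, take Np-point
# subblocks, and place each subblock in a column of a matrix.
def subblock_matrix(x, Np, L):
    P = int(np.ceil(len(x) / L))                            # number of hops (columns)
    pad = Np + P * L - len(x)                               # zeroes added to the tail
    x = np.concatenate([x, np.zeros(pad)])
    return np.stack([x[p * L : p * L + Np] for p in range(P)], axis=1)

x = np.arange(32, dtype=float)
M = subblock_matrix(x, Np=8, L=2)
print(M.shape)   # (8, 16)
```

Column p holds the subblock that starts at sample pL, so adjacent columns overlap by Np − L samples.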

#### Step 2: Apply Data-Tapering Window to Subblocks

We will be Fourier transforming each column of the data-block matrix from Step 1, but before that we’ll apply the channelizer data-tapering window called $a(\cdot)$ above. Let’s pick the Hamming window, available in MATLAB as the m-file function hamming.m. Let’s denote this particular choice for $a(\cdot)$ by $a_h(\cdot)$. Each column of our data-block matrix needs to be multiplied by a Hamming window with length $N^\prime$:

$$X_w(r, p) = a_h(r)\, X(r, p), \quad r = 0, 1, \ldots, N^\prime - 1. \tag{7}$$
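In Python/NumPy this is a one-line broadcast (np.hamming matches MATLAB’s symmetric hamming.m; the stand-in matrix here is my own example):

```python
import numpy as np

# Step 2 sketch: window each column of the subblock matrix.
Np, P = 8, 16
M = np.ones((Np, P))          # stand-in for the Step-1 subblock matrix
a_h = np.hamming(Np)          # the channelizer tapering window a(r)
Mw = M * a_h[:, None]         # multiply the window down each column
print(Mw.shape)               # (8, 16)
```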

#### Step 3: Apply Fourier Transform to Windowed Subblocks

Next, apply the Fourier transform to each column. This is easy in MATLAB with fft.m, but there is a complication. The relative delay that exists between each of the blocks in the matrix is lost when fft.m is applied to each column. That is, (2) above is not exactly computed; the phase relationship between the transforms is modified through the use of fft.m, which we want to use for computational efficiency.

So after the FFT is applied to the data blocks, they need to be phase-shifted. The phase shift for a particular element depends on the frequency $f_k$ and the time index $rL$. This is similar to what we do in the time-smoothing method of spectral correlation estimation; see this post for details.
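A Python/NumPy sketch of Step 3, including the phase compensation (sign conventions here are my own assumption and may need flipping to match a particular derivation):

```python
import numpy as np

# Step 3 sketch: FFT each column, then restore the inter-block phase
# relationship that a block-by-block fft() call discards.
def channelize(Mw, L):
    Np, P = Mw.shape
    XF = np.fft.fftshift(np.fft.fft(Mw, axis=0), axes=0)   # per-column FFT
    f = np.arange(-(Np // 2), Np // 2) / Np                # bin frequencies (normalized)
    t = L * np.arange(P)                                   # start time rL of each block
    return XF * np.exp(-2j * np.pi * np.outer(f, t))       # phase compensation
```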

Do the same thing for the input $y(t)$. If $y(t) = x(t)$, then don’t bother repeating the computation; otherwise, do so.

#### Step 4: Multiply Channelized Subblocks Together and Fourier Transform

Looking back at (1), we now need to multiply (elementwise) one row from the $X_T$ matrix by one row from the $Y_T$ matrix, conjugating the latter. This will result in a vector of $P$ complex values, which can then be transformed using the FFT. For example, below I’ve boxed the values for $f_k$ and the values for $f_l$.
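Step 4 can be sketched in Python/NumPy as follows (a plain double loop over all row pairs; a real implementation would vectorize this, and the 1/P scaling is my own normalization choice):

```python
import numpy as np

# Step 4 sketch: multiply row k of X elementwise with the conjugate of
# row l of Y, then Fourier transform the length-P product vector.
def fam_products(X, Y):
    Np, P = X.shape
    S = np.empty((Np, Np, P), dtype=complex)
    for k in range(Np):
        for l in range(Np):
            prod = X[k, :] * np.conj(Y[l, :])   # elementwise row product
            S[k, l, :] = np.fft.fft(prod) / P   # output transform over r
    return S
```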

#### Step 5: Associate Each Fourier Transform Output with the Correct $(f, \alpha)$ Pair

According to (1), the $P$ values that arise from the Fourier transform of the channelizer product vector correspond to the cycle frequencies

$$\alpha = f_k - f_l + q\Delta\alpha, \tag{8}$$

where $f_k$ and $f_l$ are the channelizer frequencies appearing in (5) and (6), and $q$ ranges over $P$ integers. This association leads to values in the familiar diamond-shaped principal domain of the spectral correlation function; any values that do not lie in that region can be discarded. So at this point, we have a large number of spectral correlation function point estimates for frequencies $f$ in the (normalized) range $[-0.5, 0.5)$ and cycle frequencies $\alpha$ in the range $[-1, 1]$.
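To make the bookkeeping concrete, here is a Python/NumPy sketch of the Step-5 mapping (indexing conventions are my own; the diamond test keeps only points with |f| + |alpha|/2 ≤ 1/2):

```python
import numpy as np

# Step 5 sketch: map each (k, l, q) output to a (spectral frequency,
# cycle frequency) pair via (5), (6), and (8), keeping only points in
# the diamond-shaped principal domain.
def fam_map(S, L):
    Np, _, P = S.shape
    N = P * L
    fbin = np.arange(-(Np // 2), Np // 2) / Np      # channelizer frequencies
    points = []
    for k in range(Np):
        for l in range(Np):
            f = (fbin[k] + fbin[l]) / 2             # Eq. (6)
            for q in range(-(P // 2), P // 2):
                alpha = fbin[k] - fbin[l] + q / N   # Eq. (8), dalpha = 1/N
                if abs(f) + abs(alpha) / 2 <= 0.5:  # inside the diamond
                    points.append((f, alpha, S[k, l, q % P]))
    return points
```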

### Extension to the Conjugate Spectral Correlation Function

If $y(t) = x(t)$, then $Y_T(rL, f) = X_T(rL, f)$, and the estimate produced by (1) corresponds to the (auto) non-conjugate spectral correlation function. If $y(t) = x^*(t)$, then the estimate corresponds to the conjugate spectral correlation function. Otherwise, it is a generic cross spectral correlation function. The extension to the conjugate spectral correlation function is that easy! It’s only a little more complicated to extend the conjugate spectral correlation function to the conjugate coherence than it is to extend the non-conjugate spectral correlation to the non-conjugate coherence.

### Extension to Coherence

Recall that the spectral coherence function, or just coherence, is defined for the non-conjugate spectral correlation function by

$$C_x^\alpha(f) = \frac{S_x^\alpha(f)}{\left[S_x^0(f + \alpha/2)\, S_x^0(f - \alpha/2)\right]^{1/2}}.$$

The conjugate coherence is given by

$$C_{x^*}^\alpha(f) = \frac{S_{x^*}^\alpha(f)}{\left[S_x^0(f + \alpha/2)\, S_x^0(\alpha/2 - f)\right]^{1/2}}.$$

To compute estimates of the coherence, then, go through the spectral correlation estimates one by one, find the associated spectral frequency and cycle frequency from Step 5, and then use a PSD estimate to find the corresponding two PSD values that form the normalization factor. I typically use a side estimate of the PSD that is highly oversampled so it is easy to find the required PSD values for any valid combination of spectral frequency $f$ and cycle-frequency shift $\pm\alpha/2$. The frequency-smoothing method is a good choice for creating such PSD estimates.
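Here is a minimal Python/NumPy sketch of that normalization for the non-conjugate coherence (the psd_f/psd_vals arrays stand in for the oversampled side PSD estimate; all names are mine):

```python
import numpy as np

# Coherence sketch: normalize an SCF point estimate by the geometric
# mean of PSD values at f + alpha/2 and f - alpha/2, with linear
# interpolation into the oversampled side PSD estimate.
def coherence(scf_val, f, alpha, psd_f, psd_vals):
    s1 = np.interp(f + alpha / 2, psd_f, psd_vals)
    s2 = np.interp(f - alpha / 2, psd_f, psd_vals)
    return scf_val / np.sqrt(s1 * s2)
```

For a flat unit-level PSD the normalization leaves the estimate unchanged, which is a handy sanity check.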

The coherence is especially useful for automatic detection of significant cycle frequencies in a way that is invariant to signal and noise power levels, as described in the comments of the SSCA post.

### Examples

Let’s look at the output of the FAM I’ve implemented with an eye toward comparing to the strip spectral correlation analyzer and (of course!) to the known spectral correlation surface for our old friend the rectangular-pulse BPSK signal.

#### Rectangular-Pulse BPSK (Textbook Signal)

First, let’s review the theoretical spectral correlation function for a rectangular-pulse BPSK signal with independent and identically distributed bits, ten samples per bit, and a carrier offset of $f_c = 0.05$:

The signal exhibits non-conjugate cycle frequencies that are multiples of the bit rate, or $\alpha = k/T_{\rm bit}$, which for a bit rate of $1/10$ is the set $\{0.1, 0.2, 0.3, \ldots\}$. Due to symmetry considerations, we ignore the negative non-conjugate cycle frequencies in our plots.

It also exhibits the conjugate cycle frequencies that are the non-conjugate cycle frequencies plus the doubled carrier $2f_c = 0.1$. The shape of the conjugate spectral correlation function for $\alpha = 2f_c$ is the same as that for the non-conjugate spectral correlation function for $\alpha = 0$ (the PSD).

Let’s start the progression of FAM results for rectangular-pulse BPSK with the FAM power spectrum estimate together with a TSM-based PSD estimate for comparison:

Both PSD estimates look like what we expect for the signal, but you can see a small regular ripple in the FAM estimate, which is not in the TSM estimate, and which we know is not a true feature of the PSD for the signal. So that is a mystery I’ve not yet solved. We’ll see, though, that overall the FAM implementation I’ve created compares well to the SSCA outputs in terms of cycle frequencies, spectral correlation magnitudes, and spectral coherence magnitudes.

Next, I want to show the FAM-based non-conjugate and conjugate spectral correlation surfaces. Let’s first mention the processing parameters:

The latter parameter is used, together with $N$ and $N^\prime$, to compute a threshold for the coherence function. Only those point estimates that correspond to a coherence magnitude that exceeds the threshold are included in the following FAM spectral correlation surface plots:

So the FAM surfaces agree with the ideal surfaces in terms of the known cycle frequency values and the variation over spectral frequency for each coherence-detected cycle frequency $\alpha$. In other words, it works.

In the following graphs, I show the cyclic domain profiles for the FAM and for the SSCA for comparison:

Finally, here are plots of only the coherence-threshold detected cycle frequencies, which is a typical desired output in CSP practice:

Only true cycle frequencies are detected by both algorithms. The average values for spectral correlation and coherence for all the other cycle frequencies are about the same between the FAM and the SSCA. Two anomalies are worth mentioning. The first is that the SSCA produces a coherence significantly greater than one for the doubled-carrier conjugate cycle frequency of $2f_c = 0.1$. The second is that the FAM produces a few false peaks in the spectral correlation function (near a couple of cycle frequencies). These all have coherence magnitudes that do not exceed threshold, so they don’t end up getting detected and don’t appear in the later plots. I don’t yet know the origin of these spurious spectral correlation peaks, and if you have an idea about it, feel free to leave a comment below.

#### Captured DSSS BPSK

Let’s end this post by showing FAM and SSCA results for a non-textbook signal, a captured DSSS BPSK signal. Recall that DSSS BPSK has many cycle frequencies, both non-conjugate and conjugate, and that the number of cycle frequencies increases as the processing gain increases. See the DSSS post and the SCF Gallery post for more details and examples.

In this example, the sampling rate is arbitrarily set to MHz, and the number of processed samples is .

I’m curious whether choosing different window functions would help alleviate the differences between the FAM and the SSCA. I know that common practice is to use one window to determine spectral peak location, and a completely different one to accurately determine actual peak power.

A Hann/Hamming window is a middle ground window AFAIK – a bit of both worlds.

That sounds right to me. I actually use a different window on the “channelizer” Fourier transforms in the SSCA than I do in the FAM. I should redo a couple of those FAM/SSCA comparison calculations using a couple of different windows, and show a couple where the two algorithms use the same window. Thanks Mirko!

Hi Mr. Spooner,

I couldn’t understand how to shift the phase of the data after the FFT in Step 3; it is also not clear to me in the time-smoothing post. I would appreciate it if you could explain it.

Best regards.

Looking at Eq (1) in the TSM post, we see that the right side is almost the DFT (FFT). The difference is in the argument of the data function $x(\cdot)$, which is $u + t$ rather than $t$. If $u = 0$, the right side is the DFT. So as we slide along in time, the increasing value of $u$ causes the appearance of a complex exponential factor multiplying a DFT. When we use the FFT to compute the DFT, this factor is ignored because each call to the FFT function processes a data vector that starts at time equal to zero.

In other words, take (1) and evaluate it (write the equation out) for $u = 0$ and for $u = L$. You will see that the latter possesses the same form as the former, but also has a complex exponential factor.

If we use the FFT to compute the transforms of successive blocks, we’ll lose the phase relationship between the frequency components in each block. So we have to compensate for that. The factor $X(f + \alpha/2)$ in the cyclic periodogram is shifted by $e^{-i2\pi(f+\alpha/2)u}$ and the factor $X^*(f - \alpha/2)$ is shifted by $e^{i2\pi(f-\alpha/2)u}$. These multiply to yield a factor of $e^{-i2\pi\alpha u}$. Since $u$ takes on the values $kL$, we need to compensate the $k$th cyclic periodogram by $e^{i2\pi\alpha kL}$.

I may have got some minus signs wrong, but that is the basic idea. The FFT is not a sliding transform.
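A quick numerical illustration in Python/NumPy (my own example, using a tone whose frequency is a bin frequency of the length-32 block):

```python
import numpy as np

# The FFT of a block taken u samples later is not the FFT of the first
# block: for a tone at frequency f0 it differs by the factor
# exp(2j*pi*f0*u), which a block-by-block FFT silently discards.
f0, u, Np = 0.125, 5, 32
x = np.exp(2j * np.pi * f0 * np.arange(64))
X0 = np.fft.fft(x[:Np])          # block starting at time 0
Xu = np.fft.fft(x[u:u + Np])     # block starting at time u
print(np.allclose(Xu, X0 * np.exp(2j * np.pi * f0 * u)))  # True
```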

Does that help?

Thanks, Mr. Spooner.

I have some questions again. What should the dimensions of the matrix resulting from Step 4 be? Actually, I can’t understand how to multiply X(rL, f) and its conjugate. You said elementwise multiplication, but I couldn’t catch how to do it. For instance, should I multiply the first rows of the two matrices (then the second rows, and so on)?

The mapping to cycle frequency and frequency in Step 5 is not clear to me. Can I do this mapping as in the SSCA method, or should I use a different mapping?

If you want to store the result of Step 4 in a matrix, it would have dimension $(N^\prime)^2 \times P$. Each of the two matrices has $N^\prime$ rows, and you want to multiply all possible pairs of rows. By ‘elementwise multiplication’ I just mean the following. Let $A = [a_1\ a_2\ a_3]$ and $B = [b_1\ b_2\ b_3]$. Then the elementwise multiplication of $A$ and $B$ is $[a_1 b_1\ a_2 b_2\ a_3 b_3]$.
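In NumPy terms (just an illustration; `*` on arrays is the elementwise product):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(a * b)   # [ 4 10 18]
```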

The mapping in Step 5 is given by Equations (5), (6), and (8). What is your specific question about that?

I understood the formulae, but I don’t understand how to implement them on my matrix in MATLAB. This part is not clear to me, in contrast to the first four steps. Could you explain this part again?

Also, I found some MATLAB code on this topic, and I saw that some people take the transpose of the matrix that results from Step 4. Is that right?

Best regards, Mr. Spooner

I am struggling to understand what you don’t understand. Not enough detail in your question?

Regarding your question about some found MATLAB code, I can’t comment on what I can’t see. There are lots of ways to implement an equation, typically, so I can’t say if what you found is correct or not. Does it produce the correct result?

Hello Mr. Spooner,

I couldn’t understand how to implement the shifted SCFs in MATLAB or another simulation environment. For example, let’s say that alpha and f are equal to 1 and 0.5, respectively. In this situation, f + alpha/2 becomes 1, which is not a value f can take. If f + (or −) alpha/2 exceeds the limited frequency region [-0.5, 0.5), what should I do?

Thanks for your valuable contributions to the cyclo-lover society.

Thanks Yorgo!

Well, the principal domain of the spectral correlation function is a diamond in the $(f, \alpha)$ plane, and the tips of the diamond are at $(f, \alpha) = (0, \pm 1)$. But as you note, if you shift the DFT to the left by $\alpha/2 = 0.5$ and to the right by $\alpha/2 = 0.5$, you will not have any overlap between the two DFTs when you go to multiply them together. So $\alpha = \pm 1$ is a boundary case of little practical importance.

In general, only use those pairs of frequencies for which each shifted frequency lies in $[-0.5, 0.5)$, and ignore the rest.

Thanks for your response. I have a new question: why do we expect the cycle frequencies at alpha = k*f_bit for integer values of k? I feel that if the PSD is multiplied by the PSD shifted by alpha, this multiplication gives information about the SCF. The point I’m wondering about is that the PSD is a continuous function for a BPSK signal, so why doesn’t the shift of this signal for any alpha value give a peak?

The spectral correlation function is not the result of correlating shifted PSDs. It is the correlation between two complex-valued narrowband downconverted components of the input signal. These two narrowband signal components are correlated only when the separation between their center frequencies (before downconversion of course) is equal to a cycle frequency, and the particular cycle frequencies depend on the details of the modulation type of the signal in the input data. Maybe take another look at the introductory post on spectral correlation?

Hello Mr. Spooner,

I have some questions about the FFT Accumulation Method. First, I implemented this method in MATLAB to investigate the cyclostationarity of some basic signals. However, I am stuck at one point. First of all, I understand that the parameter N' depends on us, or on the application. I tried my code for different N' values and I saw slightly different results. What is the reason for this, and what is the optimal way to choose N'?

Secondly, the sample size of the input data strongly affects the cyclic spectrum of signals. Is there any way to compensate for the effects of smaller sample sizes?

As you know, MATLAB is a very slow program, and the computational complexity of the algorithm is high.

Third, I investigate the cyclic spectrum of OFDM signals especially, and I read your paper called “On the Cyclostationarity of OFDM and SC-Linearly Digitally Modulated Signals in Time Dispersive Channels: Theoretical Developments and Applications”, but the algebra in the paper is a little complex and confusing for me. Can you suggest any resource for understanding the cyclostationarity of OFDM signals?

Lastly, I am thankful to you for this blog; it really helps me in my work.

Best regards.

Manuel

Manuel, thanks very much for checking out the CSP Blog. I appreciate readers like you.

Regarding $N^\prime$, in the FAM it controls the effective spectral resolution of the measurement. The total amount of processed data is $PL$ samples, which we often call $N$. Recall from the post on the resolution product that the variability in an SCF estimate is inversely proportional to the resolution product $\Delta t \Delta f$.

We know the estimate should get better the larger $N$ is, and for very small $N$ you won’t even be processing a single period of cyclostationarity (for BPSK, the symbol interval), so small $N$ must result in poor estimates. For $N$ larger than that, but still small, I know of little that can be done except through making $N$ larger.

MATLAB isn’t all that slow in my opinion. It really does not like to do ‘for loops’ though, and nested for loops can definitely slow it down. Try to restructure your MATLAB code so that you eliminate as many for loops as possible (use matrix operations). My FAM implementation takes about 10 seconds to compute the full non-conjugate SCF and coherence for my typical choices of $N$ and $N^\prime$. And that includes making several plots.

For the cyclostationarity of OFDM signals, I can only suggest that Google is your friend.

Hello Mr. Spooner,

Thanks for your reply. I also read your blog posts about higher-order cyclostationarity to calculate the cyclic cumulant of any kind of signal, but I couldn’t figure out how to write code to find cyclic cumulants in MATLAB. You said that the cyclic cumulant is a Fourier-series coefficient of the higher-order cumulant. I thought that if I can calculate the higher-order cumulant of a signal and then find the Fourier-series coefficients of that cumulant, I can get my cyclic-cumulant values. However, I can’t find any way to choose the values of the delay vector in the cumulant formula. I took the delay vector to be the zero vector, and in that case my cumulant result is a scalar, so I couldn’t find Fourier-series coefficients. If this idea is right, how should I choose the delay vector?

Secondly, I implemented the FAM method as I said in a previous comment. I thought that since I can find the cyclic autocorrelation, which is the second-order cyclic cumulant, using the FAM method, I could also find higher-order cyclic cumulants with this method. Is this idea correct? If so, can you give a detailed explanation, as in your post on the FFT Accumulation Method for the cyclic autocorrelation and spectral correlation function?

Thank you.

Best Regards.

Manuel

Do you mean the post on moment and cumulant estimators? I think that is a good place for you to start. There are several types of estimators described there, including one that actually produces the time-varying cyclic cumulant function (using synchronized averaging). The delays in the expressions for temporal moments and cumulants are just the delays applied to the input data block before creating a lag product (sometimes called a delay product) such as $x(t + \tau_1)\, x(t + \tau_2) \cdots x(t + \tau_n)$. So you can see that the lag product is a function of time $t$ (not a scalar), as well as a function of the delays (lags) $\tau_j$. If the delays are small relative to the block length (a common case), then you can use MATLAB’s circshift.m to implement them with some small loss of fidelity because it is a circular shift rather than a true delay.
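A sketch of the lag-product idea in Python/NumPy, with np.roll standing in for MATLAB’s circshift.m (function and argument names are mine):

```python
import numpy as np

# Form prod_j x(t + tau_j), optionally conjugating individual factors,
# using a circular shift as an approximate delay.
def lag_product(x, delays, conj_flags=None):
    out = np.ones(len(x), dtype=complex)
    for j, tau in enumerate(delays):
        term = np.roll(x, -tau)            # x(t + tau), circularly
        if conj_flags is not None and conj_flags[j]:
            term = np.conj(term)
        out = out * term
    return out
```

The result is a time series (one value per sample), so its Fourier-series (sine-wave) components can then be extracted; a zero delay vector still yields a function of time, not a scalar.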

The delay vector choice is driven by the form of the theoretical cyclic cumulant for the signal of interest as well as by the algorithm: what are you going to do with the cyclic cumulant estimates. For many signals, the delay consisting of all zeros corresponds to the cyclic cumulant with the largest magnitude over all possible delay vectors.

Well, the FAM produces an estimate of the spectral correlation function, which is not the second-order cyclic cumulant. I suppose you could try to inverse Fourier transform the FAM output to get the cyclic autocorrelation, which is indeed the second-order cyclic cumulant for signals that do not contain finite-strength additive sine-wave components. It is difficult to directly use the FAM or SSCA to find the higher order cyclic cumulants. Probably it is best to start in the time domain instead of trying to use the frequency-domain FAM.

I think the best thing for you to do is study the post on estimation of higher-order temporal moments and cumulants. Let me know what you think!

Hi Chad

Thank you for your very useful blog!

Here, I have a couple of questions on the implementation of FAM.

1. I understand that after the first N'-point FFT on the windowed signal (Step 3), each row denotes a frequency bin from -fs/2:fs/N':fs/2-fs/N'. After the second P-point FFT on the multiplication of channelized subblocks (Step 4), what does each column correspond to?

2. After Step 4, I have an array with P x N' x N' data. Shall I just choose the points that satisfy -fs <= alpha <= fs and -0.5*fs <= f <= 0.5*fs from the P x N' x N' data to generate the f-alpha plot?

3. In a thesis (www.dtic.mil/docs/citations/ADA311555), I saw that the author only used the P/4:3*P/4 column data (using fft and fftshift) after Step 4 to generate the f-alpha plot. Do you think that is just for data-filtering purposes?

Thanks for your interest and the comment Kevin; I really appreciate readers like you.

Do you mean other than Eq (8) in the post? I’m wondering if I’ve been clear in Step 5, which tries to explain the meaning of the various values coming out of the $P$-point transforms…

Yes, that’s exactly what I do to produce the various plots that I display in the post.

I’m not exactly sure why they chose to do it that way, but I suppose if you carefully do the step above (your question 2, my algorithm Step 5), and keep track of which parts of the $P$-point transforms you are saving, you might then translate that result into a more efficient step involving the matrix manipulations you mention. I admit that my MATLAB FAM implementation is not optimized for run time; I just wanted to make sure it was essentially correct. I generally use the SSCA for actual CSP work. Good question! Let me know through a comment here if you discover the exact reason why they do what they do in that thesis, and if you agree with it in the end.

Hi Chad

Thank you for your reply!

On your following comment: I believe your description of Step 5 is clear to me except for the range of the $q$ value in Equation (8). Is it from $-P/2$ to $P/2 - 1$ or from $0$ to $P - 1$?

Another question I have: when my signal is composed of two different signals, e.g., one sine and one cosine function, is the SCF the sum of the SCFs of the sine and the cosine?

Thanks!

Kevin

Well, did you try each way and see whether one gives the expected answer for an input for which you know the correct set of cycle frequencies and spectral correlation function magnitudes? I start from 0, but this question is probably easily answered by you if you’ve got the basic FAM working.

This isn’t a completely clear question to me, but it lies in a subtle area of CSP. When you have the sum of statistically independent zero-mean signals, the SCF of the sum is the sum of the SCFs for each summand in the sum. But every word there is important, and “zero-mean” refers to a mean of zero in the fraction-of-time probability framework. That is, a sine wave is not a zero-mean signal in the FOT framework. But if by “sin and cos” you really mean two modulated sine waves in quadrature (such as in QPSK), then, yes, you can add the SCFs for each of the quadrature components provided the modulating signals are themselves statistically independent (which is generally true for QPSK).

For discussions of these kinds of subtle issues, I suggest my two-part HOCS paper (My Papers [5,6]) or my doctoral dissertation.

Hi Dear Spooner,

I am working on an FFT algorithm for acquisition and tracking of weak and high-dynamic signals in deep space. Can you give me some ideas?

Taymaz:

Thanks for visiting the CSP Blog.

Can you elaborate on your request? For example, I can’t tell if your problem involves CSP or not. The signal that you might be tracking could be a simple time-varying sine wave, in which case CSP doesn’t have much to offer over Fourier analysis.

Hi Chad

On the extension to the conjugate spectral correlation function, you explained it based on the auto spectral correlation function. For the cross spectral correlation function, where x(t) != y(t), is the conjugate spectral correlation function also calculated by inputting x(t) and conj(y(t)) to the FAM?

Thanks!

K

Yes. Suppose we have two time-series $x(t)$ and $y(t)$, and denote the two FAM inputs by $u(t)$ and $v(t)$ (the second input is the one conjugated in (1)). Then:

If $u(t) = x(t)$ and $v(t) = x(t)$, the FAM produces the non-conjugate auto SCF for $x(t)$,

if $u(t) = x(t)$ and $v(t) = x^*(t)$, the FAM produces the conjugate auto SCF for $x(t)$,

if $u(t) = x(t)$ and $v(t) = y(t)$, the FAM produces the non-conjugate cross SCF,

if $u(t) = x(t)$ and $v(t) = y^*(t)$, the FAM produces the conjugate cross SCF.

There are two cross versions for each choice of conjugation. That is, you can have $S_{xy}^\alpha$ or $S_{yx}^\alpha$.