In this post, I introduce the *cyclic autocorrelation function* (CAF). The easiest way to do this is to first review the conventional autocorrelation function. Suppose we have a complex-valued signal $x(t)$ defined on a suitable probability space. Then the mean value of $x(t)$ is given by

$\displaystyle M_x(t) = E\left[ x(t) \right]. \qquad (1)$
For stationary signals, and many cyclostationary signals, this mean value is independent of the time parameter $t$, so that

$\displaystyle M_x(t) = M_x. \qquad (2)$
The autocorrelation function is the correlation between the random variables corresponding to two time instants of the random signal, or

$\displaystyle R_x(t_1, t_2) = E\left[ x(t_1)\, x^*(t_2) \right]. \qquad (3)$
To see how the autocorrelation varies with some particular central time $t$, we can use a more convenient parameterization of the two time instants $t_1$ and $t_2$, such as

$\displaystyle R_x(t, \tau) = E\left[ x(t+\tau/2)\, x^*(t-\tau/2) \right], \qquad (4)$
where

$t_1 = t + \tau/2$

and

$t_2 = t - \tau/2.$
So time $t$ represents the center point of the two time instants and $\tau$ is their separation. If the autocorrelation depends only on the separation between the two time instants, and not on their center point, the signal is *stationary of order two*, or just *stationary*, and we have

$\displaystyle R_x(t, \tau) = R_x(\tau). \qquad (5)$
For nonstationary signals, on the other hand, the autocorrelation does depend on time $t$. For the special case of nonstationary signals called *cyclostationary signals*, the autocorrelation is either a periodic function of time or an almost periodic function of time. In either case, it can be represented by a Fourier series

$\displaystyle R_x(t, \tau) = \sum_{\alpha} R_x^\alpha(\tau)\, e^{i 2 \pi \alpha t}, \qquad (6)$
where $R_x^\alpha(\tau)$ is a Fourier-series coefficient called the *cyclic autocorrelation function*. The Fourier frequencies $\alpha$ are called *cycle frequencies* (CFs). The CAFs are obtained in the usual way for Fourier coefficients,

$\displaystyle R_x^\alpha(\tau) = \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} R_x(t, \tau)\, e^{-i 2 \pi \alpha t} \, dt. \qquad (7)$
If the signal is a cycloergodic signal (or we are using fraction-of-time probability), then the CAFs can be obtained directly from a sample path (the signal itself),

$\displaystyle R_x^\alpha(\tau) = \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t+\tau/2)\, x^*(t-\tau/2)\, e^{-i 2 \pi \alpha t} \, dt. \qquad (8)$
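To make the sample-path formula concrete, here is a minimal sketch (not from the post; the signal, symbol rate, noise level, and lag are all invented for illustration) that approximates the limiting time average with a finite discrete-time sum. It uses a noisy rectangular-pulse BPSK signal, whose symbol rate is a cycle frequency, and compares the estimate at that cycle frequency against an arbitrary non-cycle frequency.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example parameters: rectangular-pulse BPSK with 10 samples
# per symbol, unit sampling rate, plus complex white Gaussian noise.
sps = 10                          # samples per symbol
n_sym = 4000                      # number of symbols
N = sps * n_sym                   # total samples
bits = rng.integers(0, 2, n_sym) * 2 - 1
x = np.repeat(bits.astype(complex), sps)          # rectangular pulses
x += 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

def caf_estimate(x, alpha, tau):
    """Estimate the cyclic autocorrelation at cycle frequency alpha
    (cycles/sample) and integer lag tau by a finite time average of the
    (asymmetric) lag product x[n+tau] x*[n] weighted by e^{-i 2 pi alpha n}."""
    n = np.arange(len(x) - abs(tau))
    lag_product = x[n + abs(tau)] * np.conj(x[n])
    return np.mean(lag_product * np.exp(-2j * np.pi * alpha * n))

# The symbol rate 1/sps is a cycle frequency for rectangular-pulse BPSK.
on_cycle = abs(caf_estimate(x, 1.0 / sps, tau=5))
off_cycle = abs(caf_estimate(x, 0.0237, tau=5))   # arbitrary non-cycle frequency
print(on_cycle, off_cycle)   # on-cycle magnitude is much larger
```

The asymmetric lag product $x(t+\tau)x^*(t)$ differs from the symmetric one only by a constant phase factor, so the magnitudes of the two estimates agree.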
For many cyclostationary signals, the *conjugate autocorrelation function* is also non-zero (and also useful). This function is defined by

$\displaystyle R_{x^*}(t, \tau) = E\left[ x(t+\tau/2)\, x(t-\tau/2) \right], \qquad (9)$

and is represented by its own Fourier series

$\displaystyle R_{x^*}(t, \tau) = \sum_{\alpha} R_{x^*}^\alpha(\tau)\, e^{i 2 \pi \alpha t}. \qquad (10)$
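A simple way to see why the conjugate function carries different information is the complex sine wave: for $x(t) = e^{i 2 \pi f_0 t}$, the conjugate lag product $x(t+\tau)\,x(t)$ equals a constant phase times $e^{i 2 \pi (2 f_0) t}$, so the conjugate CAF has a cycle frequency at $2 f_0$, whereas the ordinary lag product $x(t+\tau)\,x^*(t)$ is constant in time (cycle frequency zero only). The sketch below (numbers invented for illustration) checks this numerically.

```python
import numpy as np

# Hypothetical parameters: complex sine wave at f0 cycles/sample.
f0, N, tau = 0.05, 4096, 3
n = np.arange(N)
x = np.exp(2j * np.pi * f0 * n)

def cyclic_avg(lag_product, alpha, n):
    """Finite-time cyclic average of a lag product at cycle frequency alpha."""
    return np.mean(lag_product * np.exp(-2j * np.pi * alpha * n))

m = np.arange(N - tau)
lp_conj = x[m + tau] * x[m]               # conjugate lag product: tone at 2*f0
lp_nonconj = x[m + tau] * np.conj(x[m])   # ordinary lag product: constant

conj_peak = abs(cyclic_avg(lp_conj, 2 * f0, m))      # ~1: conjugate CF at 2*f0
nonconj_peak = abs(cyclic_avg(lp_nonconj, 2 * f0, m))  # ~0: no ordinary CF there
print(conj_peak, nonconj_peak)
```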
I explain in detail why we need two autocorrelation functions in this post. The problem is worse when we look at higher-order moments and cumulants, where we need multiple such functions (one for each distinct pattern of conjugated factors in the lag product) to properly characterize a signal at order $n$.

And that is it! These are the basic definitions for the (second-order) probabilistic parameters of cyclostationary signals in the time-domain. In later posts, I’ll have much to say about their utility, their estimation, and their connection to the frequency-domain parameters.

Hello dear,

Would you share your code for the cyclic autocorrelation function?

Have a nice day.


Hi Dr. Spooner,

Thank you for making this blog. Are there certain qualities of a signal that are more noticeable using the Cyclic Autocorrelation Function (CAF) versus the Spectral Correlation Function (SCF)? In other words, are there situations where the usage of the CAF is favorable over the SCF?

Thank you,

Abdul


I think you asked this in a slightly different way in the comments to this post; see that answer. Thanks!


Haha, my apologies. I initially expected my comment to be viewable immediately. I don’t normally use WordPress and didn’t realize that my comment was under moderation until the second time I tried to comment.


Hi Dr. Spooner,

Thank you for this blog. Are there situations where it would be advantageous to utilize the Cyclic Autocorrelation Function over the Spectral Correlation Function?

Thanks


I’ve found the CAF to be useful when doing CSP for OFDM signals (see the work of Octavia Dobre).

I think the general answer to your question is that the advantages of the two functions depend on how much prior information you have about your signals. When you don’t know anything, and you want to do RFSA, the SCF is quite useful because of FFT-based estimators like the SSCA and FAM. But if you know quite a lot, such as a cycle frequency and some CAF lags of interest, then just estimating the CAF over a limited range of lags can be quite inexpensive.


Thank you for your response. I’ll take a look at that article and see how they used the CAF for OFDM signal detection. I’ve been meaning to learn about OFDM signals, so this paper will be a good motivator.

So in general, if we know certain characteristics of our signal (e.g., cycle frequencies, lags of interest), then the CAF would be a computationally inexpensive means of calculating second-order cyclostationary features. Makes sense, thanks for the clarification!


Hi Dr. Spooner. Great blog. I know that for ergodic signals the time average equals the ensemble mean value. So shouldn’t there be two integral symbols in equation (8)? Why did you delete one?

Thank you.


Thanks Liang. We refer to “cycloergodicity” here, which means that ensemble averages in the stochastic framework equal the output of the sine-wave extraction operator (the expected value in the fraction-of-time probability framework):

$\displaystyle E\left[ F[X(t)] \right] = \hat{E}^{\{\alpha\}}\left[ F[x(t)] \right],$

where $X(t)$ is the random process and $x(t)$ is a sample path thereof. When cycloergodicity holds, we can obtain the cyclic autocorrelation function directly from the second-order lag product for almost all sample paths (almost all == with probability one).

Agree?


What does $F$ mean? And $\alpha$? Can you give me some reference papers that prove that equation? I know little about cycloergodicity. Thanks.


$F[\cdot]$ is just some functional of the signal, such as the second-order lag product. $\alpha$ denotes a cycle frequency. Or, more generally, the frequency of a finite-strength additive sine-wave component in $F[x(t)]$. For information on cycloergodicity and the fraction-of-time probability framework, see The Literature [R8, R67, R68].


So for equation (8), can I calculate the CAF with fft[x*conj(x)]? Because I find it has the same form as a DFT operation.


Well, you can estimate the cyclic autocorrelation by picking out one element of the FFT of the lag product. But notice that the cyclic autocorrelation function is a limiting version of a time average. Because there is no guarantee that the cycle frequency is exactly equal to a Fourier frequency in the DFT, you’ll get better results by just computing the single DFT sum directly. That works fine if you know the cycle frequency in advance. If you don’t, you have to search for the cycle frequencies, and that is best done using the spectral correlation function and the strip spectral correlation analyzer.
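The off-grid effect described in this reply is easy to demonstrate. In the sketch below (all parameters invented for illustration), a rectangular-pulse BPSK signal’s length is chosen so that the symbol-rate cycle frequency falls between FFT bins of the lag product; the nearest-bin FFT value then suffers scalloping loss relative to the single-frequency DFT sum evaluated at the exact cycle frequency.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: rectangular-pulse BPSK, 10 samples per symbol, so the
# symbol-rate cycle frequency is alpha = 0.1 cycles/sample.
sps, n_sym, tau, alpha = 10, 410, 4, 0.1
x = np.repeat(rng.integers(0, 2, n_sym) * 2.0 - 1.0, sps).astype(complex)
x = x[:4099]                      # length chosen so alpha falls between FFT bins

m = np.arange(len(x) - tau)
lag_product = x[m + tau] * np.conj(x[m])

# (a) Single-frequency DFT sum at the exact cycle frequency:
direct = abs(np.mean(lag_product * np.exp(-2j * np.pi * alpha * m)))

# (b) Nearest FFT bin of the lag product (alpha is NOT a Fourier frequency here):
LP = np.fft.fft(lag_product) / len(lag_product)
nearest = abs(LP[round(alpha * len(lag_product))])

print(direct, nearest)   # direct exceeds the off-grid FFT-bin estimate
```

When the cycle frequency does land exactly on a Fourier frequency, the two estimates coincide; the direct sum is simply robust to the general case.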

Hello Sir,

Thank you so much for your valuable information. I just wanted to ask how to cite this tutorial?

Thanks in advance


Hey Sara. Do you mean to reference the CSP Blog, or a post within it, in the reference list of a published paper?

I’ve referenced it in a journal paper and a couple of conference papers. I just use the URL. So if you want to reference the CSP Blog in general, you could include an entry in your reference list that looks like this:

[1] C. M. Spooner, The Cyclostationary Signal Processing Blog, https://cyclostationary.blog.

or even

[1] https://cyclostationary.blog.

If you wanted to reference a particular post within the CSP Blog, you could get the URL from your browser. For the post you’ve commented on, it would be

[2] https://cyclostationary.blog/2015/09/28/the-cyclic-autocorrelation.

or

[2] C. M. Spooner, The Cyclostationary Signal Processing Blog, https://cyclostationary.blog/2015/09/28/the-cyclic-autocorrelation.

Does that answer your question?

I would appreciate very much this kind of citation. It will help spread the word on the Blog, which I consider a valid alternative to published papers in conventional journals and conference proceedings.
