The Cyclic Autocorrelation

In this post, I introduce the cyclic autocorrelation function (CAF). The easiest way to do this is to first review the conventional autocorrelation function. Suppose we have a complex-valued signal x(t) defined on a suitable probability space. Then the mean value of x(t) is given by

M_x(t, \tau) = E[x(t + \tau)]. \hfill (1)

For stationary signals, and many cyclostationary signals, this mean value is independent of the lag parameter \tau, so that

\displaystyle M_x(t, \tau_1) = M_x(t, \tau_2) = M_x(t, 0) = M_x(t). \hfill (2)

The autocorrelation function is the correlation between the random variables corresponding to two time instants of the random signal, or

\displaystyle R_x(t_1, t_2) = E[x(t_1)x^*(t_2)]. \hfill (3)

To see how the autocorrelation varies with some particular central time t, we can use a more convenient parameterization of the two time instants t_1 and t_2, such as

\displaystyle R_x(t, \tau) = E[x(t+\tau/2)x^*(t-\tau/2)], \hfill (4)

where

\displaystyle t_1 = t+\tau/2

\displaystyle t_2 = t-\tau/2

and

\displaystyle t = (t_1 + t_2)/2

\displaystyle \tau = t_1 - t_2

So time t represents the center point of the two time instants and \tau is their separation. If the autocorrelation depends only on the separation \tau between the two time instants, and not on their center point t, the signal is stationary of order two, or just stationary, and we have

\displaystyle R_x(t_1, \tau) = R_x(t_2, \tau) = R_x(0, \tau) = R_x(\tau). \hfill (5)

For nonstationary signals, on the other hand, the autocorrelation does depend on central time t. For the special case of nonstationary signals called cyclostationary signals, the autocorrelation is either a periodic function or an almost periodic function. In either case, it can be represented by a Fourier series

\displaystyle R_x(t, \tau) =  \sum_\alpha R_x^\alpha (\tau) e^{i 2 \pi \alpha t}, \hfill (6)

where R_x^\alpha(\tau) is a Fourier-series coefficient called the cyclic autocorrelation function. The Fourier frequencies \alpha are called cycle frequencies (CFs). The CAFs are obtained in the usual way for Fourier coefficients,

\displaystyle  R_x^\alpha(\tau) =  \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} R_x(t,\tau) e^{-i 2 \pi \alpha t}\,dt. \hfill (7)
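To make (7) concrete, here is a small numerical sketch in Python. The periodic autocorrelation below, and all of its parameters, are assumptions chosen purely for illustration: when R_x(t, \tau) is periodic in t, the limit in (7) simply picks off a Fourier coefficient.

```python
import numpy as np

# Synthetic R_x(t, tau) at one fixed tau, periodic in t with period 1/alpha0.
# Its Fourier coefficients are 1 at alpha = 0 and 1/2 at alpha = +/- alpha0.
alpha0 = 0.02
t = np.arange(10000)
R_t = 1.0 + np.cos(2 * np.pi * alpha0 * t)

def coeff(alpha):
    # finite-T version of the limiting time average in (7)
    return np.mean(R_t * np.exp(-1j * 2 * np.pi * alpha * t))

print(abs(coeff(0.0)))      # ~1.0
print(abs(coeff(alpha0)))   # ~0.5
print(abs(coeff(0.013)))    # ~0 (not a cycle frequency of R_t)
```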

If the signal is a cycloergodic signal (or we are using fraction-of-time probability), then the CAFs can be obtained directly from a sample path (the signal itself),

\displaystyle R_x^\alpha(\tau) = \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t+\tau/2) x^* (t-\tau/2) e^{-i 2 \pi \alpha t} \, dt. \hfill (8)
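As a sanity check on (8), here is a minimal discrete-time estimator sketch in Python. The test signal (a complex sine wave) and all parameters are my own illustrative assumptions; for such a signal, the non-conjugate CAF should be non-negligible only at \alpha = 0.

```python
import numpy as np

N, f0 = 4096, 0.05                    # record length and tone frequency (assumptions)
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * f0 * n)   # complex sine wave

def caf_est(x, alpha, tau):
    """Estimate R_x^alpha(tau) as in (8); tau must be a non-negative even
    integer so that the symmetric delays +/- tau/2 land on integer samples."""
    h = tau // 2
    m = np.arange(h, len(x) - h)
    lag_prod = x[m + h] * np.conj(x[m - h])
    return np.mean(lag_prod * np.exp(-1j * 2 * np.pi * alpha * m))

print(abs(caf_est(x, 0.0, 10)))    # ~1 (alpha = 0 is a cycle frequency)
print(abs(caf_est(x, 0.09, 10)))   # ~0 (alpha = 0.09 is not)
```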

For many cyclostationary signals, such as BPSK, the conjugate autocorrelation function is also non-zero (and also useful). This function is defined by

\displaystyle R_{x^*}(t, \tau) = E[x(t+\tau/2)x(t-\tau/2)], \hfill (9)

and is represented by its own Fourier series

\displaystyle R_{x^*}(t, \tau) = \sum_\alpha R_{x^*}^\alpha (\tau) e^{i 2 \pi \alpha t}. \hfill (10)

I explain in detail why we need two autocorrelation functions in the post on conjugation configurations. The problem is worse when we look at higher-order moments and cumulants, where we need n/2 + 1 functions to properly characterize a signal at order n.

Symmetric versus Asymmetric Cyclic Autocorrelation Functions

The conventional autocorrelation and cyclic autocorrelation functions are symmetric in the delay variable \tau in that it appears as +\tau/2 in one factor of the delay product and as -\tau/2 in the other:

\displaystyle x(t+\tau/2)x^*(t-\tau/2).

When attempting to estimate the cyclic autocorrelation function using discrete-time data, the lag variable \tau and the time-index variable t take on integer values, so that \tau/2 does not correspond to a known value of x(t) when \tau is odd. To work around this problem, we can operate in the frequency domain, directly estimating the spectral correlation function and inverse transforming it to obtain the cyclic autocorrelation. But we can also employ the asymmetric cyclic autocorrelation, which is friendly to discrete-time data, and then scale it appropriately to obtain the symmetric version. Let’s go through this analysis here.

The asymmetric cyclic autocorrelation is

\displaystyle \tilde{R}_x^\alpha(\tau) =  \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t) x^*(t-\tau) e^{-i2\pi \alpha t} \, dt, \hfill (11)

but it could be defined in other ways, including

\displaystyle \tilde{R}_x^\alpha(\tau) =  \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t+\tau) x^*(t) e^{-i2\pi \alpha t} \, dt. \hfill (12)

Staying with definition (11), and using the change of variables t_0 = t - \tau/2, we have

\displaystyle \tilde{R}_x^\alpha(\tau) =  \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2-\tau/2}^{T/2 - \tau/2} x(t_0+\tau/2) x^*(t_0-\tau/2) e^{-i2\pi \alpha (t_0 + \tau/2)} \, dt_0, \hfill (13)

or

\displaystyle \tilde{R}_x^\alpha(\tau) = e^{-i\pi\alpha\tau} R_x^\alpha(\tau) \hfill (14)

so that we can compute the asymmetric version and then scale it by a simple complex exponential to obtain the symmetric version.
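A short Python sketch can verify (14) numerically. The random test signal and the particular \alpha and \tau below are arbitrary assumptions; with matched summation ranges, the asymmetric estimate of (11) and the symmetric estimate of (8) agree exactly up to the complex exponential factor.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2048
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

alpha, tau = 0.123, 10   # tau even so that tau/2 is an integer delay
h = tau // 2

# Asymmetric estimate, as in (11): x(t) x*(t - tau)
t = np.arange(tau, N)
R_asym = np.mean(x[t] * np.conj(x[t - tau]) * np.exp(-1j * 2 * np.pi * alpha * t))

# Symmetric estimate, as in (8): x(m + tau/2) x*(m - tau/2)
m = np.arange(h, N - h)
R_sym = np.mean(x[m + h] * np.conj(x[m - h]) * np.exp(-1j * 2 * np.pi * alpha * m))

# Relationship (14): R_asym = exp(-i pi alpha tau) * R_sym
print(abs(R_asym - np.exp(-1j * np.pi * alpha * tau) * R_sym))  # ~0
```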

And that is it! These are the basic definitions for the (second-order) probabilistic parameters of cyclostationary signals in the time domain. In later posts, I have much to say about their utility, their estimation, their connection to the frequency-domain parameters, and their generalization to higher-order parameters.

 

Support the CSP Blog here.

 

Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.

38 thoughts on “The Cyclic Autocorrelation”

  1. Hi Dr. Spooner,

    Thank you for making this blog. Are there certain qualities of a signal that are more noticeable using the Cyclic Autocorrelation Function (CAF) versus the Spectral Correlation Function (SCF)? In other words, are there situations where the usage of the CAF is favorable over the SCF?

    Thank you,
    Abdul

      1. Haha, my apologies. I initially expected my comment to be viewable immediately. I don’t normally use wordpress and didn’t realize that my comment was under moderation until the second time I tried to comment.

  2. Hi Dr. Spooner,

    Thank you for this blog. Are there situations where it would be advantageous to utilize the Cyclic Autocorrelation Function over the Spectral Correlation Function?

    Thanks

    1. I’ve found the CAF to be useful when doing CSP for OFDM signals (see the work of Octavia Dobre).

      I think the general answer to your question is that the advantages of the two functions depend on how much prior information you have about your signals. When you don’t know anything, and you want to do RFSA, the SCF is quite useful because of FFT-based estimators like the SSCA and FAM. But if you know quite a lot, such as a cycle frequency and some CAF lags of interest, then just estimating the CAF over a limited range of lags can be quite inexpensive.

      1. Thank you for your response. I’ll take a look at that article and see how they used the CAF for OFDM signal detection. I’ve been meaning to learn about OFDM signals, so this paper will be a good motivator.

        So in general, if we know certain characteristics of our signal (e.g. cycle frequencies, lags of interest, etc.), then the CAF would be a computationally inexpensive means of calculating Second Order Cyclostationary Features. Makes sense, thanks for the clarification!

  3. Hi Dr. Spooner. Great blog. I know for ergodic signals, time average equals to mean value. So there should be two integral symbols in equation 8. Why do you delete one?
    Thank you.

    1. Thanks Liang. We refer to “cycloergodicity” here, which means that ensemble averages in the stochastic framework equal the output of the sine-wave extraction operator (the expected value in the fraction-of-time probability framework):

      E[F(X(t))] = E^{\{\alpha\}}[F(x(t))]

      where X(t) is the random process and x(t) is a sample path thereof. When cycloergodicity holds, we can obtain the cyclic autocorrelation function directly from the second-order lag product for almost all sample paths (almost all == with probability one).

      Agree?

      1. What does F mean? And alpha? Can you give me some reference papers for the proved equation. I know little about cycloergodicity. Thanks.

        1. F is just some functional, like F(x(t)) = x(t+\tau/2)x^*(t-\tau/2). \alpha denotes a cycle frequency. Or, more generally, the frequency of a finite-strength additive sine-wave component in F(x(t)). For information on cycloergodicity and the fraction-of-time probability framework, see The Literature [R8, R67, R68].

        1. Well, you can estimate the cyclic autocorrelation by picking out one element of the FFT of the lag product. But notice that the cyclic autocorrelation function is a limiting version of a time average. Because there is no guarantee that the cycle frequency \alpha is exactly equal to a Fourier frequency in the DFT, you’ll get better results by just computing the single DFT directly. That works fine if you know the cycle frequency \alpha in advance. If you don’t, you have to search for the cycle frequencies, and that is best done using the spectral correlation function and the strip spectral correlation analyzer.

  4. Hello Sir,
    Thank you so much for your valuable information. I just wanted to ask how to cite this tutorial?
    Thanks in advance

    1. Hey Sara. Do you mean to reference the CSP Blog, or a post within it, in the reference list of a published paper?

      I’ve referenced it in a journal paper and a couple of conference papers. I just use the URL. So if you want to reference the CSP Blog in general, you could include an entry in your reference list that looks like this:

      [1] C. M. Spooner, The Cyclostationary Signal Processing Blog, https://cyclostationary.blog.

      or even

      [1] https://cyclostationary.blog.

      If you wanted to reference a particular post within the CSP Blog, you could get the URL from your browser. For the post you’ve commented on, it would be

      [2] https://cyclostationary.blog/2015/09/28/the-cyclic-autocorrelation.

      or

      [2] C. M. Spooner, The Cyclostationary Signal Processing Blog, https://cyclostationary.blog/2015/09/28/the-cyclic-autocorrelation.

      Does that answer your question?

      I would appreciate very much this kind of citation. It will help spread the word on the Blog, which I consider a valid alternative to published papers in conventional journals and conference proceedings.

  5. Dear Dr. Spooner,

    Many thanks for such an extensive blog on cyclostationary.

    I am new to this cyclostationary analysis, though I have been reading through many your posts. In particular the one on cyclic autocorrelation.

    You wrote in one of your replies that for OFDM signal CAF be used especially when the properties of the signal are known.

    I also read several publications from Octavia Dobre, as you suggested. One of her recent publications (see the link below) is about identification of GSM and LTE signals.

    https://ieeexplore.ieee.org/document/7151426

    I have downloaded the sample LTE signal from your “Data Sets” and used it for testing the algorithm from the paper above. The algorithm seemed to be straightforward, however I didn’t obtain the expected results. Basically I computed the CCF at CF alpha. The CF for LTE is known, i.e. 2 kHz (corresponds to 0.5 ms time slot).

    I would like to know if the LTE sample file in your data set is based on OTA measurement or a simulated signal?

    Do you think I should better use SCF instead of CAF for this purpose?

    Thanks in advance.

    Best regards,
    Johann

  6. Hi Dr. Spooner,

    Thank you for your prompt reply. I see, I will stick with the CAF then.

    I computed the CAF with a zero delay only, as described in the paper. Maybe I need to include a range of delay values, as you suggested?

    So far I haven’t validated my CAF estimator with a simpler simulated signal. I guess I should do that first.

    I will keep you updated. Thanks again for your help.

    Best regards,
    Johann

    1. Thank you Dr. Spooner for such an informative and guiding blog. My question is to Mr. Johann: can he share his code with me? I am also trying to work on LTE signals.

      1. Rinkoo:

        I’ve approved your comment, and good luck with Johann. However, my advice is to write your own code. You will be able to understand it much better, and apply it properly.

  7. Dear Dr. Spooner,

    I managed to get similar results to the paper using your LTE and GSM data sets. There was a bug in my original code.

    The corresponding CAF plots can be found on the links below. It’s my first time using imgur, so I don’t know for how long the links will be available.

    https://imgur.com/a/gs8a0wi

    https://imgur.com/a/9Anibdx

    For your information, I am still using a zero delay. The CFs for LTE and GSM are equal to the multiple of 2 kHz and 1733 Hz, respectively.

    Could you elaborate why you think using a range of delay values might be better?
    Thanks in advance.

    Best regards,
    Johann

  8. Hi Dr. Spooner,

    Many thanks for your reply and the nice plots.

    I will try to replicate your plots.

    Best regards,
    Johann

  9. Hi sir, nice blog for beginners to study more about cyclostationary processing. I am new to this cyclostationary processing. I tried to write a matlab code to find cyclic auto correlation function, Here are my observations please help me that i understood this concept or not.

    I generated one sine wave of 1000 samples with freq 10 and sampling freq 1000 (f/fs = 0.01)
    alpha = cyclic frequency , Tau = time shift , N = 1000

    x = sin(2*pi*f/fs*(0:N-1))

    for alpha = 0:10
    carrier = exp(-1i*2*pi*alpha*(1/fs)*(0:N-1)) ;

    for tau = 1:980
    M = N-tau;
    Rx(alpha+1,tau) = 1/M*(sum( x(1:N-tau).*x(tau+1:N).*carrier(1:M) )) ;
    end

    plot(abs(fft(Rx(alpha+1,:)))) ;
    figure

    end

    I have written the above matlab code , For each alpha i am getting the peaks , i am expecting only one peak at alpha = 0 but at different alpha’s also i am getting peaks with less amplitude values compare to alpha = 0 .

    So please suggest me am i doing correct or not?
    any way thanks a lot for ur nice blog

    1. Thanks for stopping by the CSP Blog and leaving a comment Hari. I really appreciate it.

      You should expect the cyclic autocorrelation to be non-zero even when the cycle frequency you use does not correspond to a cycle frequency of the signal.

      A couple things stand out in your code. The first is the use of a relatively short data-record length N. Try increasing it and you’ll see that most of the peaks get very small. Second, you are not quite implementing the cyclic autocorrelation as I define it in the CSP Blog (which is the most common definition) in that you are shifting up and down by \tau, not by \tau/2. When you do shift up and down by \tau, you are then dividing by N-\tau and not N-2\tau. Finally, you are not considering the actual cycle frequencies for your sinusoidal signal! Real-valued sine waves have cycle frequencies equal to zero, 2f_c and -2f_c, where f_c here is your f. So your doubled-carrier cycle frequency would be 20, but you go up to 10 only.

      But why bother with a sine-wave input? I suggest you use my rectangular-pulse BPSK signal and my second-order verification guide. That way, you know exactly what to expect.

      1. Thanks a lot for ur quick reply sir. From ur above suggestion , i am not getting this statement

        Finally, you are not considering the actual cycle frequencies for your sinusoidal signal! Real-valued sine waves have cycle frequencies equal to zero, 2f_c and -2f_c, where f_c here is your f. So your doubled-carrier cycle frequency would be 20, but you go up to 10 only.

        could u please elaborate it ?

        1. If you have a sine wave x(t) = e^{i 2 \pi f_c t} and you form x(t+\tau/2)x^*(t-\tau/2) you get a factor of e^{i 2 \pi (2 f_c) t}. This indicates that the signal has a cycle frequency of \alpha = 2f_c. Now do the same thing for a real-valued sine wave \sin(2 \pi f_c t) using Euler’s formula and you’ll see that you get components with frequencies 2f_c and -2f_c. Sine waves are trivially cyclostationary in that they are non-random, but a quadratic functional yields sine-wave components with frequencies not found in the sine wave itself. For a more detailed explanation of all this, see the post on cyclic cumulants.
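Here is a quick numerical check of that claim (a sketch with arbitrary illustrative parameters): demodulating the \tau = 0 lag product of a real sine wave shows strength at \alpha = 0 and \alpha = 2f_0, but not at \alpha = f_0 itself.

```python
import numpy as np

f0, N = 0.05, 8192               # made-up tone frequency (cycles/sample) and length
n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n)   # real-valued sine wave

def cyclic_avg(alpha):
    # time-averaged tau = 0 lag product, demodulated at cycle frequency alpha
    return np.mean(x * x * np.exp(-1j * 2 * np.pi * alpha * n))

print(abs(cyclic_avg(0.0)))      # ~0.5: alpha = 0 is a cycle frequency
print(abs(cyclic_avg(2 * f0)))   # ~0.25: the doubled-carrier cycle frequency
print(abs(cyclic_avg(f0)))       # ~0: f0 itself is NOT a cycle frequency
```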

          1. Many thanks for ur reply sir, After analyzing ur above suggestions ,i have some doubts ,please calrify it

            A couple things stand out in your code. The first is the use of a relatively short data-record length N. Try increasing it and you’ll see that most of the peaks get very small. Second, you are not quite implementing the cyclic autocorrelation as I define it in the CSP Blog (which is the most common definition) in that you are shifting up and down by \tau, not by \tau/2. When you do shift up and down by \tau, you are then dividing by N-\tau and not N-2\tau.

            1) After increasing the data record length N , I got very small peaks ,thanks a lot for that suggestion

            2)As u mentioned that i am not implemented cyclic autocorrelation as defined in CSP blog in that \tau shifting up and down.
            Is any wrong with this? if so where will it get affected if i am not using \tau/2.
            As per my knowledge with respect to central time (t) i am using the tau delayed sample to perform correlation.
            To perform \tau/2 based cyclic auto correlation in matlab ,it throws error since \tau/2 is fraction(if tau=1).

            3)Regarding the division factor ( N – \tau). since i am using N- tau elements for that operation ,so i am dividing with N-\tau. but u suggested to divide with (N – 2tau) .Is any reason behind that?

            So please give ur time to clarify this doubts

          2. i am not implemented cyclic autocorrelation as defined in CSP blog in that \tau shifting up and down.
            Is any wrong with this? if so where will it get affected if i am not using \tau/2.

            Well, by using that particular definition of a cyclic autocorrelation, your results will no longer be consistent with the literature and with the CSP Blog. In particular, if you Fourier transform your cyclic autocorrelation, you will not get the spectral correlation function as commonly defined. By shifting up and down by \tau (instead of the usual definition of shifting up and down by \tau/2), you effectively stretch the function in the lag variable–you are scaling time.

            To perform \tau/2 based cyclic auto correlation in matlab ,it throws error since \tau/2 is fraction(if tau=1).

            Yes, that is true. One way around this is to use the asymmetric definition that uses the following delay product:

            x(t+\tau)x^*(t)

            or

            x(t) x^*(t-\tau)

            The relationship between the conventional (symmetric in \tau) definition and the asymmetric definition is that they are the same function except for a complex exponential scaling factor related to \alpha and \tau/2. See the comments on the cyclic autocorrelation posts here and here and here for further hints or, better, derive it yourself.

            Regarding the division factor ( N – \tau). since i am using N- tau elements for that operation ,so i am dividing with N-\tau. but u suggested to divide with (N – 2tau) .Is any reason behind that?

            OK, maybe I am wrong, but if you shift one x(t) up by \tau and the other by -\tau, the overlapping region has width N-2\tau.

            Actually, I just divide the sum by N no matter what \tau is because the range of \tau for which the function is significant is usually much much smaller than N.

  10. Hi Chad,

    I’ve tried to implement the CAF in python following pseudo-code in Professor Napolitano’s newest book page 209. I am using a BPSK like you recommend. Below is an excerpt of the CAF computation loop. The non-conjugate CAF this produces is comparable to the one presented in the BPSK CAF post. But the conjugate CAF is not. I did notice in the pseudo-code in the book uses exp(i*2*pi*alpha*N) but the formula in the CAF post uses exp(-1*i*2*pi*alpha*N). Meaning it is up shifting the frequency rather than down shifting like the formula suggests. Could this be an error? Or maybe my implementation is wrong. Any help or comment on this will be much appreciated. Thanks.

    # x_of_t is the BPSK time series

    N = np.arange(0, x_of_t.shape[0])

    ## Cyclic Autocorrelation function

    fs = 1 # Sampling Freq
    Ts = 1/fs # Define sampling period (s)

    alphaMax = 1.0 #Maximum cyclic frequency
    dAlpha = 0.1 #step in cyclic frequency

    alpha = np.arange(0, alphaMax, dAlpha) # Define alpha

    # Preallocation
    RxTauAlpha = np.zeros((alpha.shape[0], x_of_t.shape[0]), dtype=np.complex128)

    RxTauAlpha_conj = np.zeros((alpha.shape[0], x_of_t.shape[0]), dtype=np.complex128)

    cnt = 0

    for alpa in alpha:
        RxTauAlpha[cnt, :] = np.correlate(x_of_t, x_of_t * np.exp(1j * 2 * np.pi * alpa * N), mode='same')
        RxTauAlpha_conj[cnt, :] = np.correlate(x_of_t, np.conj(x_of_t) * np.exp(1j * 2 * np.pi * alpa * N), mode='same')
        cnt += 1

    1. Thanks for the comment Joel. Maybe the following responses will help; let me know.

      I am using a BPSK like you recommend.

      Are you using my BPSK signal? Or your own? I ask because I want to know the carrier frequency.

      But the conjugate CAF is not.

      In what way does your conjugate CAF not match mine?

      I did notice in the pseudo-code in the book uses exp(i*2*pi*alpha*N) but the formula in the CAF post uses exp(-1*i*2*pi*alpha*N). Meaning it is up shifting the frequency rather than down shifting like the formula suggests. Could this be an error?

      I don’t think so. He is multiplying by e^{i 2 \pi \alpha t} instead of e^{-i2\pi\alpha t} because he is using MATLAB’s xcorr.m, which, like your np.correlate, conjugates its second argument. So at that point the sign of the exponent for e is reversed. My formulas are more straightforward in that they do not hide any details behind a function call, so you see the correlation itself in detail, which requires the exponent to be negative.
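A tiny Python sketch (with arbitrary made-up parameters) illustrates the sign-flip point: because np.correlate conjugates its second argument, multiplying that argument by e^{+i 2 \pi \alpha t} produces the desired e^{-i 2 \pi \alpha t} inside the sum.

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha = 256, 0.07             # made-up record length and cycle frequency
n = np.arange(N)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# np.correlate conjugates its second argument, so put e^{+i 2 pi alpha n} there...
via_correlate = np.correlate(x, x * np.exp(1j * 2 * np.pi * alpha * n), mode='valid')[0]

# ...and you recover the straightforward sum with e^{-i 2 pi alpha n}
direct = np.sum(x * np.conj(x) * np.exp(-1j * 2 * np.pi * alpha * n))

print(abs(via_correlate - direct))  # ~0
```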

      alphaMax = 1.0 #Maximum cyclic frequency
      dAlpha = 0.1 #step in cyclic frequency

      alpha = np.arange(0, alphaMax, dAlpha) # Define alpha

      You are estimating the non-conjugate CAF and the conjugate CAF for the same set of cycle frequencies, which is equal to \{0, f_{bit}, 2f_{bit}, \ldots\}. This is sufficient for the non-conjugate CAF, but leaves out cycle frequencies for the conjugate CAF. Their symmetries are different, and in general one wants to compute the conjugate CAF (or conjugate SCF) for cycle frequencies stretching from -1 to 1.
      That’s why I asked about your carrier frequency at the top of this reply.

      1. Hi Chad,

        Yes, I am using your BPSK signal. Thanks for clarifying that it was not an error. Below is the google colab link where you can view, run, comment and edit the code I’ve used. Sticking to Prof. Napolitano’s method for now, hope that does not annoy you too much.
        I am saying the conjugate CAF is wrong because the non-conjugate CAF at alpha=0 (figure 1 in symmetry post) and conjugate CAF at alpha=0.1 are the same. I am trying to get my head around what is wrong. Thanks.

        https://colab.research.google.com/drive/1jdVJsyzSMlwEzKnm7KGftxDsStiIp3SE

        1. I don’t think you are doing anything wrong, per se, and I hope I didn’t come across as annoyed in my previous reply.

          Mostly we just have to interpret your results correctly, especially if/when comparing to my results in the Symmetries post.

          In particular, you are using numpy’s correlate function np.correlate, which is an asymmetrical correlation (fine!), but is a slightly different asymmetric correlation than the one I consider in this post on the cyclic autocorrelation. If you want to get to the symmetric cyclic autocorrelation, you’ll have to do something similar to what I explain above in the section called “Symmetric versus Asymmetric Cyclic Autocorrelation Functions.” I tried to make the modifications to your Colab worksheet, but I’m not great at python. Check out my additions to your code and the section above and adjust as needed. You know you are on the right track when the magnitudes of the functions match mine. The particular correlation you use and the asymmetric compensation will determine whether the real and imag parts of your estimates match mine or not.

      1. Happens to me all the time! At work the other day, your website was mentioned by another engineer as the definitive resource for learning cyclostationary analysis. You probably don’t hear much about the word of mouth traffic you get to your blog but people are talking about it.

        1. Very nice to hear! I get a lot of “thanks for the blog,” which I very much appreciate, but of course I don’t hear what people are saying to each other.

          I haven’t forgotten your suggestion about mutual guest-authoring, by the way… Just busy.

          1. I always like to say, “better busy than bored”. You do a lot of good work so I’m not surprised you don’t have much time. Anytime you want to do something like that, I’m all ears.

Leave a Comment, Ask a Question, or Point out an Error