Introduction to Higher-Order Cyclostationarity

We’ve seen how to define second-order cyclostationarity in the time- and frequency-domains, and we’ve looked at ideal and estimated spectral correlation functions for a synthetic rectangular-pulse BPSK signal. In future posts, we’ll look at how to create simple spectral correlation estimators, but in this post I want to introduce the topic of higher-order cyclostationarity (HOCS).  This post is more conceptual in nature; for mathematical details about HOCS, see the post on cyclic cumulants.

To contrast with HOCS, we’ll refer to second-order parameters such as the cyclic autocorrelation and the spectral correlation function as parameters of second-order cyclostationarity (SOCS).

The first question we might ask is: Why do we care about HOCS? One answer is that SOCS does not provide all of the statistical information about a signal that we might need to perform some signal-processing task. There are two main limitations of SOCS that drive us to HOCS.

SOCS Limitation One: Distinct Signal Types Possess Identical SOCS

Cyclostationarity can be used to automatically recognize the modulation type of a signal present in some given sampled-data set. Ideally, the entire probability structure of the signal would be compared to the probability structure of a catalog of signal types to determine the best match. But since most signals possess an infinite number of non-zero moments, we must restrict our use of the probability structure in some manner.

Many signal types possess unique SOCS, such as MSK and DSSS BPSK. However, there are some classes of signals for which the parameters of SOCS are equal up to a single scale factor. That is, their spectral correlation functions are identical except for a scale factor that applies to the entire function. Unless the transmitted power and propagation channel are accurately known in advance, there is no way to unambiguously recognize the different signals in such a class. However, since the signal types are in fact distinct, there are some probabilistic parameters that must differ between the different signals.

An important class of signals with identical SOCS is the class of digital QAM signals, such as QPSK, 8QAM, 16QAM, 256QAM, etc. To illustrate this idea, I’ve simulated four such signals and estimated their SOCS and HOCS parameters. First, here are the PSDs for the four (noisy) signals:

[Figure: dqam_psds — PSD estimates for the four noisy digital QAM signals]

These four signals have independent and identically distributed symbols and all employ the same transmitter pulse-shaping filter (SRRC, roll-off of 1.0). So, their PSDs are identical, as is evident from the figure.

The four signals possess three non-conjugate cycle frequencies and no conjugate cycle frequencies. The non-conjugate cycle frequencies are \alpha = -f_{sym}, 0, f_{sym}, where f_{sym} = 0.1 is the symbol rate. A symbol rate of 0.1 just means that there are 10 samples in a symbol interval. So, their cycle-frequency patterns are identical, and cannot be used to distinguish among them (compare this to the cycle-frequency pattern we’ve seen for BPSK). Moreover, their spectral correlation functions are identical:

[Figure: dqam_scf — spectral correlation function estimates for the four digital QAM signals]
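
To make the simulation setup concrete, here is a minimal Python/NumPy sketch of how signals like these can be generated and how the second-order claims above can be checked numerically. This is not the code used to make the figures; the pulse span, SNR, and seed are arbitrary illustrative choices.

    import numpy as np

    def srrc(beta, sps, span):
        """Unit-energy square-root raised-cosine pulse spanning 'span' symbols."""
        t = np.arange(-span * sps // 2, span * sps // 2 + 1) / sps   # time in symbol units
        h = np.zeros_like(t)
        zero = np.isclose(t, 0.0)
        sing = np.isclose(np.abs(4.0 * beta * t), 1.0)               # singular points of the formula
        reg = ~(zero | sing)
        tr = t[reg]
        num = np.sin(np.pi * tr * (1 - beta)) + 4 * beta * tr * np.cos(np.pi * tr * (1 + beta))
        den = np.pi * tr * (1 - (4 * beta * tr) ** 2)
        h[reg] = num / den
        h[zero] = 1 - beta + 4 * beta / np.pi
        if beta > 0:
            h[sing] = (beta / np.sqrt(2)) * ((1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                                             + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        return h / np.sqrt(np.sum(h ** 2))

    def qam_signal(constellation, n_syms, sps=10, beta=1.0, snr_db=10.0, seed=None):
        """I.I.D. unit-power symbols, SRRC pulse shaping (sps samples/symbol), plus white noise."""
        rng = np.random.default_rng(seed)
        syms = constellation[rng.integers(0, len(constellation), n_syms)]
        x = np.zeros(n_syms * sps, dtype=complex)
        x[::sps] = syms                                               # symbol impulse train
        x = np.convolve(x, srrc(beta, sps, span=12), mode='same')
        x /= np.sqrt(np.mean(np.abs(x) ** 2))                         # normalize to unit signal power
        noise = rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size)
        return x + noise * np.sqrt(0.5 * 10.0 ** (-snr_db / 10))

    # two of the unit-power constellations
    qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
    qam16 = np.add.outer(np.arange(-3, 4, 2), 1j * np.arange(-3, 4, 2)).ravel() / np.sqrt(10)

    for name, c in (("QPSK", qpsk), ("16QAM", qam16)):
        x = qam_signal(c, n_syms=50000, seed=1)
        # Fourier coefficients of the zero-delay lag product x(t)x*(t) reveal the
        # non-conjugate cycle frequencies; expect strong lines only at 0 and +/- 0.1
        coeffs = np.abs(np.fft.fft(x * np.conj(x))) / x.size
        freqs = np.fft.fftfreq(x.size)
        print(name, np.round(np.sort(freqs[np.argsort(coeffs)[-3:]]), 3))

Estimated PSDs of signals built this way (scipy.signal.welch works fine) are visually identical for the two constellations, the only strong lines in the FFT of x(t)x^*(t) fall at 0 and +/- 0.1 for both, and the FFT of x(t)x(t) shows no significant lines at all, matching the cycle-frequency pattern just described.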

But we can show mathematically that their higher-order moments and cumulants do differ significantly. The following plots show the magnitudes of the nth-order cyclic cumulant functions, which we will define and discuss in later posts:

[Figures: cc_qpsk, cc_dqpsk, cc_16qam, cc_8psk — cyclic-cumulant magnitude matrices for the four signals]

The columns of these matrices correspond to different higher-order cumulants (think moments for now), with higher orders proceeding to the right. The rows correspond to harmonic numbers of the symbol-rate cycle frequency. The first column corresponds to first-order cyclostationarity (additive sine-wave components, which these signals do not possess), the next three columns capture the SOCS, and the remaining columns capture the fourth- and sixth-order statistics.

Although the patterns for QPSK and 16QAM are the same, the particular values of the cumulants differ. So these four signals are indistinguishable using SOCS, but are clearly distinguishable using HOCS.
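
To see the “values differ” point concretely, here is a small Python check of the fourth-order, zero-conjugate cumulant of the unit-power symbol constellations themselves (this is only the constellation’s contribution, not the full cyclic cumulant of the pulse-shaped signal):

    import numpy as np

    qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))                      # unit power
    qam16 = np.add.outer(np.arange(-3, 4, 2), 1j * np.arange(-3, 4, 2)).ravel() / np.sqrt(10)

    for name, a in (("QPSK", qpsk), ("16QAM", qam16)):
        # zero-mean constellations with E[a^2] = 0, so cum(a,a,a,a) = E[a^4] - 3 E[a^2]^2 = E[a^4]
        c40 = np.mean(a ** 4) - 3 * np.mean(a ** 2) ** 2
        print(name, np.round(c40.real, 3))   # QPSK: -1.0, 16QAM: -0.68

The cycle-frequency pattern is the same for the two constellations, but the differing cumulant values (-1 versus -0.68 here) propagate into differing cyclic-cumulant magnitudes for the full signals.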

SOCS Limitation Two: Some Cyclostationary Signals Have No SOCS

Some signals do not possess any second-order cycle frequencies besides the non-conjugate cycle frequency of zero (all finite-power signals possess this cycle frequency), but do possess higher-order cycle frequencies. An example is duobinary signaling, which is a PSK signal type that uses a transmit pulse-shaping filter with a roll-off of zero (the excess bandwidth is zero or, viewed a bit differently, the occupied bandwidth is equal to the symbol rate). The duobinary signal has no SOCS, but does have HOCS.

How to Generalize SOCS to HOCS?

We’ve seen some motivation for generalizing SOCS to HOCS, but how exactly do we go about doing it? For a more mathematical treatment, see My Papers [5,6] and the post on cyclic cumulants. Here, we provide the flavor of the approach, and we’ll give more detail in future posts.

The theory of SOCS usually starts out by describing the periodic components of the autocorrelation function. The autocorrelation is a second-order moment for the signal under study, so it is natural to extend SOCS by looking at third-order, fourth-order, and higher-order moments. For example, we might compute the third-order temporal moment,

R_x(t,\boldsymbol{\tau}) = E[x(t+\tau_1)x(t+\tau_2)x(t+\tau_3)]. \hfill (1)

It turns out that for almost all communication signals, this moment is zero. What about the fourth-order temporal moment? For example,

R_x(t, \boldsymbol{\tau}) = E[x(t+\tau_1)x(t+\tau_2)x^*(t+\tau_3)x^*(t+\tau_4)], \hfill (2)

which is a function of the delay vector \boldsymbol{\tau} = [\tau_1\ \tau_2\ \tau_3\ \tau_4]. But here is the problem: If the signal does have SOCS, then there are additive sine-wave components in products like x(t+\tau_1)x^*(t+\tau_3) and x(t+\tau_2)x^*(t+\tau_4), and therefore there must be additive sine-wave components in the fourth-order product. But what is new in the fourth-order product that isn’t simply a result of products of second-order sine-wave components? This is the fundamental question that drives the mathematical development of higher-order cyclic cumulants in My Papers [5,6].
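
To sketch where the answer goes (this is just the standard cumulant construction for a zero-mean signal, stated informally): subtract from the fourth-order moment every product of second-order moments formed by pairing up the four factors,

C_x(t, \boldsymbol{\tau}) = E[x_1 x_2 x_3^* x_4^*] - E[x_1 x_2]E[x_3^* x_4^*] - E[x_1 x_3^*]E[x_2 x_4^*] - E[x_1 x_4^*]E[x_2 x_3^*],

where x_j denotes x(t + \tau_j). Whatever sine-wave components remain in C_x(t, \boldsymbol{\tau}) are genuinely fourth-order, and their Fourier coefficients are the cyclic cumulants taken up in later posts.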

23 thoughts on “Introduction to Higher-Order Cyclostationarity”

  1. aapocketz says:

    “Although the patterns for QPSK and 16QAM are the same, the particular values of the cumulants differ. So these four signals are indistinguishable using SOCS, but are clearly distinguishable using HOCS.”

    Could you elaborate on this concept? I have been looking into higher-order statistics to classify higher-order PSK. When I compute a reduced-dimension 4th-order temporal moment function for QPSK, 8PSK, and 16PSK, they all appear to result in similar surfaces.

    • I advocate using the cyclic cumulants instead of the cyclic moments. The moments for PSK tend to be similar, for the cycle frequencies that the signals have in common.

      What I meant by the quote is that if you look at the cyclic-cumulant magnitudes for a set of cycle frequencies that include various orders n and numbers of conjugations m, you’ll get distinct features. This tends to work fairly well for digital QAM. If you restrict your attention to PSK, the situation gets a little more difficult.

      First, the pattern of cycle frequencies is radically different between BPSK, QPSK, and 8PSK. So these are quite easy to distinguish just based on observing which cycle frequencies are present. These topics are covered in some detail (perhaps not enough, I’m not sure) in my three Asilomar papers (My Papers [25] [26] [28]).

      The general form of the cycle frequencies for digital QAM/PSK can be found there (and in some of my other papers, and in the work of Octavia Dobre):

      \alpha = (n - 2m) f_c + k/T_0

      where f_c is the carrier offset frequency in the complex-baseband model and 1/T_0 is the symbol rate. This formula covers all possible cycle frequencies for IID QAM/PSK, but the actual cycle frequencies exhibited by a particular signal depend on the cumulants of the symbol constellation. This is what leads to different cycle-frequency patterns for BPSK, QPSK, and 8PSK when you consider cumulants up through order n=4.
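
      For instance (a toy snippet, with made-up values f_c = 0.05 and a symbol period of T_0 = 10 samples; which of these candidates actually appear still depends on the constellation cumulants, as noted above):

        fc, T0 = 0.05, 10.0   # made-up carrier offset and symbol period, for illustration only
        for n, m in [(2, 1), (2, 0), (4, 0), (4, 2)]:
            alphas = [(n - 2 * m) * fc + k / T0 for k in range(-2, 3)]
            print((n, m), [round(a, 3) for a in alphas])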

      However, 8PSK and MPSK, M>8, have identical cycle-frequency patterns up through order n=6. 8PSK differs from MPSK M>8 once you get to order n=8. So distinguishing 8PSK from 16PSK is not easily done using cyclic cumulants—you have to be willing to apply 8th-order nonlinearities to your data.

      To repeat, though, distinguishing BPSK, QPSK, and 8PSK from each other is relatively easy, and only requires cumulants up through order four.

      Back to the quoted statement, QPSK and 16QAM share the exact same set of cycle frequencies, but their cyclic-cumulant magnitudes differ, so that even though their patterns are the same, the sets of cumulants can be distinguished.

      Here are the cumulants of some common constellations (taken from one of the Asilomar papers):

      [Table: cumulants of some common constellations, from one of the Asilomar papers]

      I suppose I need to do a post just on the cumulants of PSK/QAM…

      • aapocketz says:

        Thanks for the great info. Question: do 8PSK and 4PSK have different cyclic frequencies, or the same frequencies with different magnitudes?

        I am working through generating a 4th-order cumulant, (n,m) = (4,2), for a given signal. I want to do reduced dimensionality, as you have described, so I have fixed tau1 and tau2 while allowing tau3 and tau4 to vary.

        My general CTCF algorithm is to iterate through tau3 and tau4, shift the data, perform multiplies and adds based on the partitioning, and take an FFT to get the Fourier coefficients. For the moment-to-cumulant relationship I have it simplified to this (x1 is the input delayed by tau1):

        fft(x1 * x2 * conj(x3 * x4) - abs(x1 * x2)^2 - 2*(x1*conj(x2))^2)

        Not 100% sure that is correct, I find the partition summation formula a bit confusing.

        Looking at this, I generally see a big DC spike and a spike at the symbol rate Rs = Fs/T0. For M-PSK, M = 2, 4, 8, basically the only difference that is noticeable is a magnitude change along the frequency plane. Should I see different pure cyclic frequencies pop up as I increase the M order? Should I be looking at different conjugate values, or varying tau differently?

        Thanks again!

          • Question: do 8PSK and 4PSK have different cyclic frequencies, or the same frequencies with different magnitudes?

          Different cycle frequencies. They have identical cycle frequencies and cumulant magnitudes for n=2, but they have different cycle frequencies starting with n=4. For (n,m) = (4,0), QPSK has cycle frequencies 4fc, 4fc+1/T0, and 4fc-1/T0. But 8PSK has none of those.

          For the moment to cumulant relationship I have it simplified to this (x1 is the input delayed by tau1):

          fft(x1 * x2 * conj(x3 * x4) - abs(x1 * x2)^2 - 2*(x1*conj(x2))^2)

          Not 100% sure that is correct, I find the partition summation formula a bit confusing.

          If you want to look at the FFT of the fourth-order product, but with the lower-order moments subtracted off, in order to find the cumulant cycle frequencies, you should subtract products of MOMENTS from the fourth-order lag product.

          fft(x1*x2*conj(x3)*conj(x4) - Rx(t, [t1, t2]; 2, 0)*Rx(t, [t3, t4]; 2, 2) - Rx(t, [t1, t3]; 2, 1)*Rx(t, [t2, t4]; 2, 1) - Rx(t, [t1, t4]; 2, 1)*Rx(t, [t2, t3]; 2, 1))

          which follows from the partitions definitions and the cumulant-in-terms-of-moments formula.

          Should I see different pure cyclic frequencies pop up as I increase the M order? Should I be looking at different conjugate values, or varying tau differently?

          Not if you maintain n = 2m. That is, you’ll see the same cycle frequencies for the three signals for each of (n,m) = (2,1), (4,2), and (6,3). But try (n,m) = (4,0).
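
          A quick numerical check of that last suggestion (a Python sketch with arbitrary parameters, not a CTCF estimator; with rectangular pulses only the 4*fc line appears, while a shaped pulse such as SRRC produces the 4*fc +/- 1/T0 lines as well):

            import numpy as np

            rng = np.random.default_rng(1)
            n_syms, sps, fc = 8192, 10, 0.05              # illustrative choices
            t = np.arange(n_syms * sps)

            def rect_pulse_psk(m):
                """Rectangular-pulse M-PSK at complex baseband with carrier offset fc."""
                symbols = np.exp(2j * np.pi * rng.integers(0, m, n_syms) / m)
                return np.repeat(symbols, sps) * np.exp(2j * np.pi * fc * t)

            for m in (4, 8):
                x = rect_pulse_psk(m)
                spec = np.abs(np.fft.fft(x ** 4)) / x.size   # tones in the (4, 0) lag product
                freqs = np.fft.fftfreq(x.size)
                k = np.argmax(spec)
                print(f"{m}-PSK: strongest tone at alpha = {freqs[k]:+.3f}, magnitude {spec[k]:.3f}")

          The QPSK run should show a unit-magnitude tone at 4*fc = 0.2, while the 8PSK run should show nothing comparable above the noise floor.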

  2. Bessy says:

    Hi, Professor Spooner,
    When I compute C40 of BPSK and 16QAM using the FFT, I find that there are nonzero values at 2*fc and 2*fc +/- Rs/2 for BPSK. But your papers say that the peaks should only appear at (n-2m)*fc + k/T. In my opinion, peaks should also appear at 2*fc. Besides, I can’t understand why the C40 of 16QAM is 0.68 at 4*fc, because the C40 of 16QAM is about 44 and the C40 of BPSK is about 1.26 in my simulation. Therefore, can you give me some advice?
    Thanks.
    Looking forward to your reply.

    Bessy

    • Hi Bessy.

      When I compute C40 of BPSK and 16QAM using the FFT,

      Do you mean R40? You need to do quite a bit more than use an FFT to compute the fourth-order cumulant. Also, we typically use the notation “C40” to mean the temporal cumulant.

      I find that there are nonzero values at 2*fc and 2*fc +/- Rs/2 for BPSK

      Well, if you Fourier transform x^4(t), and the signal in x(t) is BPSK, you should see a large peak at 4 times the carrier and two other peaks offset from that peak by +/- Rsym. The (4, 1) moment (R41) does have the doubled carrier though.

      I can’t understand why the C40 of 16QAM is 0.68 at 4*fc, because the C40 of 16QAM is about 44 and the C40 of BPSK is about 1.26 in my simulation

      I think you saw the 0.68 number in a table in the QAM/PSK post. Those values were the cumulants for the symbol variable, not the whole QAM signal. The full cumulant also has a contribution from the integral of the product of the shifted pulse functions. For example, for 16QAM with root-raised-cosine pulses having roll-off of 0.35, the C(4,0,0) value for all delays equal to zero is 0.57. It varies with the roll-off.

      I think you’re just not quite computing the temporal cumulant yet. Are you subtracting the appropriate products of lower-order (second order) cyclic moments?

  3. Bessy says:

    Thank you for your reply. I’m sorry that my description was not very clear. My purpose is to compute C(4,0,0). I had subtracted the products of the second-order cyclic moments R20 from the fourth-order cyclic moment R40. That is, C(4,0,0) = F[x*x*x*x] - 3*[F(x*x)]*[F(x*x)], where F is the Fourier transform matrix and x is the input BPSK signal. What I meant is that I also found a large peak at 2 times the carrier and two other peaks offset from that peak by +/- 0.5*Rsym, which resulted from the second term of my equation. So I don’t understand why the paper says that the peaks should appear only at (n-2*m)*fc + k*Rsym.
    Thanks.
    Looking forward to your reply.
    Bessy

    • My purpose is to compute C(4,0,0). I had subtracted the products of the second-order cyclic moments R20 from the fourth-order cyclic moment R40. That is, C(4,0,0) = F[x*x*x*x] - 3*[F(x*x)]*[F(x*x)], where F is the Fourier transform matrix and x is the input BPSK signal.

      I assume that your ‘F’ is MATLAB’s dftmtx.m. Let me know if that is incorrect.

      I think your problem is that you are not computing a cyclic cumulant. You need to compute cyclic moments first, and combine them properly to compute the cyclic cumulant. It is not simply the product of some Fourier transforms. For C(4,0,0), which means the fourth-order cumulant with 0 conjugated terms and a cycle frequency of 4fc + 0*fsym, you need to combine lower-order cyclic moments whose cycle frequencies sum to 4fc.
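
      Schematically (suppressing the delay dependence and assuming a zero-mean signal), the combination for (n, m) = (4, 0) looks like

      C_x^{\beta}(\boldsymbol{\tau}; 4, 0) = R_x^{\beta}(\boldsymbol{\tau}; 4, 0) - \sum_{\beta_1 + \beta_2 = \beta} \left[ R_x^{\beta_1}(\tau_1, \tau_2)\, R_x^{\beta_2}(\tau_3, \tau_4) + R_x^{\beta_1}(\tau_1, \tau_3)\, R_x^{\beta_2}(\tau_2, \tau_4) + R_x^{\beta_1}(\tau_1, \tau_4)\, R_x^{\beta_2}(\tau_2, \tau_3) \right],

      where each second-order factor is a (2, 0) cyclic moment (no conjugated terms) and the sum runs over all pairs of second-order cycle frequencies that add up to \beta = 4fc. For signals with no conjugate cyclostationarity, such as QPSK and 16QAM, those (2, 0) cyclic moments are all zero and the subtraction changes nothing; for BPSK they are not zero, which is why the lower-order terms matter there.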

  4. manifest7 says:

    Dr. Spooner,

    I just wanted to thank you for this article. This really cleared up a lot of misconceptions I had about HOCS features, especially the generalization of SOCS to HOCS. Thanks again!

    • You’re welcome. It would help me, and perhaps some of my readers, if you might give us an idea of what the misconceptions were and maybe how you came to them. Just if you care to, and have the time. Thanks…

      • manifest7 says:

        Hi Dr. Spooner,

        Sure, here are a few things that stood out to me the most in your article:

        1. The need for HOCS features/statistics didn’t click until you mentioned how many communication schemes exhibit similar (or no) SOCS features. As a result, in order to get a clearer picture and properly classify a detected signal, we need more information about it, which is why we need to delve into these HOCS cyclic-cumulant calculations.

        2. As we calculate HOCS features of a detected signal, we will tend to see these sinusoidal/tonal components show up. These components appear at a particular fundamental cyclic frequency, which becomes evident as we compute higher-order cumulants at different degrees of conjugation. The plots shown for the digital QAM-like signals’ HOCS statistics show the magnitudes of these sinusoidal components (color of box) for a particular harmonic of the fundamental cyclic frequency (vertical axis) at an nth-order cyclic cumulant with a particular degree of conjugation (horizontal axis). The overall shape of the cyclic cumulant is not what’s being recorded in your plots; what’s being recorded are those tones that you managed to observe and their related magnitudes.

        3. The idea of calculating cyclic cumulants is analogous to calculating a random process’s moments or even taking the derivative of a random process’s characteristic function.

        These points sum up the major things I misunderstood before I read this page. The realization and recording of the sinusoidal tones is what’s important when estimating HOCS statistics.

  5. Bessy says:

    Hi, Dr. Spooner,
    I want to take advantage of the cyclic cumulant functions to classify communication signals that overlap in the time domain. There are some questions I cannot understand completely.
    First, does the sine-waves operator E{alpha} mean that we should take the Fourier transform of the signal, extract the discrete impulses by filtering in the frequency domain, and take the inverse Fourier transform? If it is true, I think it is not necessary to apply that operator to a periodic signal. However, for an aperiodic signal it seems more complicated. Therefore, I use an expectation operation to replace it, that is, dividing the signal into several segments, adding them up, and then dividing by the number of segments. I am not sure this processing is correct. Can you give me some advice about how to deal with this operator in practice? In addition, does it mean that after the sine-waves operator E{alpha}, the signal retains only the sine waves?
    Second, this is also about the sine-waves operator. In your paper, you derive in detail the second-order TMF, R(t,tau), of a signal that consists of an AM signal and a sine wave, that is, x(t) = a(t)*cos(w1*t) + cos(w2*t), where a(t) is stationary. When computing R(t,tau) for it, the result includes the autocorrelation function of a(t). Does that mean that when processing a stationary signal we can replace E{alpha} by the expectation operation? However, in the beginning, you say that the E{alpha} operator just needs one realization of the signal.
    Third, the paper says that the TCF has signal selectivity and the TMF does not obey this very useful cumulative relation. When I simulate the Fourier transform of the 4th-order TMF and TCF of a signal consisting of a BPSK and a 16QAM signal, I find that the CTMF is identical to the CTCF. So how can I determine which function is the one I need?
    Looking forward to your reply.
    Thanks.

    • First, does the sine-waves operator E{alpha} mean that we
      should take the Fourier transform of the signal, extract
      the discrete impulses by filtering in the frequency domain,
      and take the inverse Fourier transform?

      Yes, except you are a little vague about the word “signal.” You can
      use the Fourier transform to find the additive sine-wave components provided
      they are sufficiently strong. That is, E^{\{\alpha\}} (y(t)) can be
      approximated by Fourier transforming y(t) and isolating the strong
      peaks, then inverse Fourier transforming.
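
      A minimal sketch of that approximation (Python/NumPy; the threshold is an arbitrary illustrative choice, not a recommended detector):

        import numpy as np

        def sine_wave_component(y, threshold_factor=10.0):
            """Crude approximation to E^{alpha}{y(t)}: keep only the strong spectral lines."""
            Y = np.fft.fft(y) / y.size                    # Fourier coefficients of the data block
            mask = np.abs(Y) > threshold_factor * np.median(np.abs(Y))
            return np.fft.ifft(Y * mask) * y.size         # sum of the retained sine waves

      Applied to a lag product such as x(t)x^*(t+\tau), the retained components approximate the sine waves whose amplitudes are the cyclic-autocorrelation values.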

      If it is true, I think it is not necessary to apply that
      operator to a periodic signal. However, for an aperiodic
      signal it seems more complicated. Therefore, I use an
      expectation operation to replace it, that is, dividing the
      signal into several segments, adding them up, and then
      dividing by the number of segments. I am not sure this
      processing is correct.

      Right, if y(t) is periodic, then E^{\{\alpha\}} (y(t)) = y(t).
      I don’t understand your recipe here. Dividing the signal into several
      segments and averaging them will not reveal the periodic component
      unless the segment length matches the period, or a multiple of it.

      Can you give me some advice about how to deal with this
      operator in practice?

      I generally don’t use it in signal-processing practice. I view it as
      more of a theoretical tool.

      In addition, does it mean that after the sine-waves operator
      E{alpha}, the signal retains only the sine waves?

      Yes. The output signal from the operator is the periodic or almost-periodic
      component of its input, which by the Fourier series is nothing more than a sum
      (possibly infinite) of weighted sine waves.

      Second, this is also about the sine-waves operator.
      In your paper, you derive in detail the second-order TMF,
      R(t,tau), of a signal that consists of an AM signal and a
      sine wave, that is, x(t) = a(t)*cos(w1*t) + cos(w2*t),
      where a(t) is stationary. When computing R(t,tau) for it,
      the result includes the autocorrelation function of a(t).
      Does that mean that when processing a stationary signal we
      can replace E{alpha} by the expectation operation?

      Yes, if y(t) is stationary, then the operator will simply reduce
      to the operator that extracts the time-invariant component, which is the
      average value. That result will correspond to the stochastic expected value
      provided y(t) possesses sufficient ergodic properties.

      However, in the beginning, you say that the E{alpha}
      operator just needs one realization of the signal.

      Yes. There is always the tension between the stochastic process theory,
      which involves the hypothetical ensemble of sample functions and the
      fraction-of-time (frequentist) theory, which involves the hypothetical
      single infinite-duration sample path. To bridge them requires understanding
      the ergodic (cyclo-ergodic) properties of the process. See The Literature
      [R67], [R68].

      Third, the paper says that the TCF has signal selectivity
      and the TMF does not obey this very useful cumulative
      relation. When I simulate the Fourier transform of the
      4th-order TMF and TCF of a signal consisting of a BPSK and
      a 16QAM signal, I find that the CTMF is identical to the
      CTCF. So how can I determine which function is the one
      I need?

      I don’t quite follow the set-up to the question. Are you saying that
      you have a signal x(t) that is the sum of a BPSK and a 16QAM
      signal? And for that signal, the TMF and TCF are identical? Do they
      share any cycle frequencies?

      I also don’t understand

      Fourier transform of the 4th-order TMF and TCF

      Are you saying you’ve successfully computed the TMF and TCF and are
      now looking at their Fourier transforms? How do you know that you
      have the correct TMF and TCF?

      I suggest sticking with a single signal in a small amount of noise
      until you are sure that you understand all the parameters, and that
      your code produces estimates that are consistent with correct understanding.
      Then start looking at combinations.

  6. Bessy says:

    I am sorry that I forgot to ask one last question. That is, there are two ways to compute the CTCF: one way is taking the FT of the TCF, and the other is combining the related CTMFs. About the second way, I do not understand the relation between the CTMF and the CTCF, especially between the new cycle frequency and the old ones. Can you give me some suggestions?
    Thanks again.

    • That is, there are two ways to compute the CTCF: one way is
      taking the FT of the TCF, and the other is combining the
      related CTMFs. About the second way, I do not understand the
      relation between the CTMF and the CTCF, especially between the
      new cycle frequency and the old ones. Can you give me some
      suggestions?

      Both methods require the use of the basic moment-to-cumulant
      formula, which shows how a cumulant can be expressed as a
      particular sum of weighted products of lower-order moments.

      In the case of the TCF, you have to combine various TMFs to
      find the TCF, and in the case of the CTCF, you have to combine
      various CTMFs.

      The relevant formulas are in the post on the CTCF (cyclic cumulants).

      The core idea is that to find the nth-order cyclic cumulant
      (CTCF) with cycle frequency \beta, you have to properly combine
      all lower-order cyclic moments (CTMFs) whose orders sum to n
      and whose cycle frequencies sum to \beta. The presence of arbitrary
      conjugated factors in the set of variables causes significant notational
      and conceptual confusions, though.
