Introduction to Higher-Order Cyclostationarity

Why do we need or care about higher-order cyclostationarity? Because second-order cyclostationarity is insufficient for our signal-processing needs in some important cases.

We’ve seen how to define second-order cyclostationarity in the time and frequency domains, and we’ve looked at ideal and estimated spectral correlation functions for a synthetic rectangular-pulse BPSK signal. In future posts, we’ll look at how to create simple spectral correlation estimators, but in this post I want to introduce the topic of higher-order cyclostationarity (HOCS). This post is more conceptual in nature; for mathematical details about HOCS, see the posts on cyclic cumulants and cyclic polyspectra. Estimators of higher-order parameters, such as cyclic cumulants and cyclic moments, are discussed in a separate post.

To contrast with HOCS, we’ll refer to second-order parameters such as the cyclic autocorrelation and the spectral correlation function as parameters of second-order cyclostationarity (SOCS).

The first question we might ask is: Why do we care about HOCS? One answer is that SOCS does not provide all the statistical information about a signal that we might need to perform some signal-processing task. There are two main limitations of SOCS that drive us to HOCS.

SOCS Limitation One: Distinct Signal Types Possess Identical SOCS

Cyclostationarity can be used to automatically recognize the modulation type of a signal present in some given sampled-data set. Ideally, the entire probability structure of the signal would be compared to the probability structure of a catalog of signal types to determine the best match. By ‘probability structure,’ I mean the set of all possible probability density functions, or the set of all possible probability distribution functions, or the set of all possible temporal moment functions, or the set of all possible temporal cumulant functions. But since most signals possess an infinite number of non-zero moments (and cumulants), we must restrict our use of the probability structure in some manner.

Many signal types possess unique SOCS, such as MSK and DSSS BPSK. However, there are some classes of signals for which the parameters of SOCS are equal up to a single scale factor. That is, their spectral correlation functions are identical except for a scale factor that applies to the entire function. Unless the transmitted power and propagation channel are accurately known in advance, there is no way to unambiguously recognize the different signals in such a class. However, since the signal types are in fact distinct (more precisely, the different random processes that define the signal types are distinct), there must be some probabilistic parameters that differ between the different signals.

An important class of signals with identical SOCS comprises the digital QAM signals, such as QPSK, 8PSK, \pi/4-DQPSK, 8QAM, 16QAM, 256QAM, etc. To illustrate this idea, I’ve simulated four digital QAM signals and estimated their SOCS and HOCS. First, here are the PSDs for the four (noisy) signals:

Figure 1. High-fidelity power spectrum estimates for four distinct kinds of digital signal: QPSK, \pi/4-DQPSK, 8PSK, and rectangular-constellation 16QAM. Each signal possesses the same symbol rate (0.1), carrier offset frequency (0.2), and pulse-shaping function.

These four signals have independent and identically distributed symbols and all employ the same transmitter pulse-shaping filter (SRRC, roll-off of 1.0). So, their theoretical PSDs are identical, as is verified by the estimates in Figure 1.

The four signals each possess three non-conjugate cycle frequencies and no conjugate cycle frequencies. The non-conjugate cycle frequencies are \alpha = -f_{sym}, 0, f_{sym}, where f_{sym} = 0.1 is the symbol rate. A symbol rate of 0.1 just means that there are 10 samples in a symbol interval. So, the cycle-frequency patterns are identical for the four signals, and cannot be used to distinguish among them (compare this to the cycle-frequency pattern we’ve seen for BPSK). Moreover, their spectral correlation functions are identical:

Figure 2. Estimated (high-fidelity) non-conjugate spectral correlation functions for the four digital signals in Figure 1. Here the cycle frequency used to form the spectral correlation estimates is the common symbol rate of 0.1.

But we can show mathematically that their higher-order moments and cumulants do differ significantly. The following plots show the magnitudes of the nth-order cyclic cumulant functions, which we define and discuss in other CSP-Blog posts:

Figure 3. Estimated cyclic-cumulant magnitudes for QPSK using various orders n, numbers of conjugated factors m, and harmonic numbers k. The delay vector is the n-element zero vector for all cyclic-cumulant estimates shown here.
Figure 4. Estimated cyclic-cumulant magnitudes for 16QAM.
Figure 5. Estimated cyclic-cumulant magnitudes for 8PSK.
Figure 6. Estimated cyclic-cumulant magnitudes for \pi/4-DQPSK.

The columns of these matrices correspond to different higher-order cumulants (think moments for now), with higher orders proceeding to the right. The rows correspond to harmonic numbers of the symbol-rate cycle frequencies. The first column corresponds to first-order cyclostationarity (simply additive sine waves in the signal, which do not exist here), the next three columns capture the SOCS, and the remaining columns to the right capture the fourth- and sixth-order statistics.

Although the patterns for QPSK and 16QAM are the same, the particular values of the cumulants differ. So these four signals are indistinguishable using SOCS, but are clearly distinguishable using HOCS.

(The multi-colored plots of higher-order cyclic-cumulant magnitudes above [and in the banner for the CSP Blog website] correspond to the use of a single delay vector \boldsymbol{\tau} for each considered cumulant order n: the all-zero delay vector \boldsymbol{\tau} = [\tau_1\ \tau_2\ \ldots\ \tau_n] = [0\ 0\ \ldots\ 0]. See below for how the delay vector is used with moments and the post on cyclic temporal cumulants for the general treatment.)

SOCS Limitation Two: Some Cyclostationary Signals Have No SOCS

Some signals do not possess any second-order cycle frequencies besides the non-conjugate cycle frequency of zero (all finite-power signals possess this cycle frequency), but do possess higher-order cycle frequencies. An example is duobinary signaling, which is a PSK signal type that uses a transmit pulse-shaping filter with a roll-off of zero (the excess bandwidth is zero or, viewed a bit differently, the occupied bandwidth is equal to the symbol rate). The duobinary signal has no SOCS, but does have HOCS.
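This limitation is easy to see numerically. Below is a hypothetical Python sketch (my own code, not part of the original post; the function names and the frequency-domain square-root raised-cosine pulse construction are my own) that builds a BPSK-like baseband signal and compares the strength of the symbol-rate tone in the lag product |x(t)|^2 for roll-off one versus roll-off zero:

```python
import numpy as np

rng = np.random.default_rng(1)

def srrc_signal(beta, n_sym=4096, sps=10):
    """BPSK-like baseband signal with a square-root raised-cosine pulse.

    The pulse is built in the frequency domain: |P(f)|^2 is a raised
    cosine with roll-off beta, so P(f) = sqrt(RC(f)).
    """
    n = n_sym * sps
    f = np.fft.fftfreq(n)              # normalized frequencies
    fsym = 1.0 / sps                   # symbol rate (0.1 here)
    rc = np.zeros(n)                   # raised-cosine |P(f)|^2
    f1 = (1 - beta) * fsym / 2
    f2 = (1 + beta) * fsym / 2
    af = np.abs(f)
    rc[af <= f1] = 1.0
    mid = (af > f1) & (af < f2)
    if beta > 0:
        rc[mid] = 0.5 * (1 + np.cos(np.pi * (af[mid] - f1) / (beta * fsym)))
    # Impulse train of +/-1 symbols, filtered by sqrt(RC) in frequency.
    x_up = np.zeros(n)
    x_up[::sps] = rng.choice([-1.0, 1.0], n_sym)
    x = np.fft.ifft(np.fft.fft(x_up) * np.sqrt(rc))
    return x, fsym

def symbol_rate_tone(x, fsym):
    """Magnitude of the FFT of |x(t)|^2 at the symbol-rate bin."""
    X = np.fft.fft(np.abs(x) ** 2)
    return np.abs(X[int(round(fsym * len(x)))])

x1, fsym = srrc_signal(beta=1.0)   # large excess bandwidth
x0, _ = srrc_signal(beta=0.0)      # zero excess bandwidth (sinc pulses)
ratio = symbol_rate_tone(x1, fsym) / symbol_rate_tone(x0, fsym)
```

For roll-off zero, the shifted spectra P(f) and P(f - f_{sym}) do not overlap, so the symbol-rate feature is zero in theory; the measured symbol-rate tone for the roll-off-zero signal is only estimation noise, and the ratio grows with record length.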

How to Generalize SOCS to HOCS?

We’ve seen some motivation for generalizing SOCS to HOCS, but how exactly do we go about doing it? For a more mathematical treatment, see My Papers [5,6] and the posts on cyclic cumulants and cyclic polyspectra. Here, we provide the flavor of the approach.

The theory of SOCS usually starts out by describing the periodic components of the autocorrelation function. The autocorrelation is a second-order moment for the signal under study, so it is natural to extend SOCS by looking at third-order, fourth-order, and higher-order moments. For example, we might compute the third-order temporal moment,

R_x(t,\boldsymbol{\tau}) = E[x(t+\tau_1)x(t+\tau_2)x(t+\tau_3)]. \hfill (1)

It turns out that for almost all communication signals, this moment is zero. What about the fourth-order temporal moment? For example,

R_x(t, \boldsymbol{\tau}) = E[x(t+\tau_1)x(t+\tau_2)x^*(t+\tau_3)x^*(t+\tau_4)], \hfill (2)

which is a function of the delay vector \boldsymbol{\tau} = [\tau_1\ \tau_2\ \tau_3\ \tau_4]. This moment is almost always non-zero for communication signals, for some values of \boldsymbol{\tau} near the origin. But here is the problem: If the signal does have SOCS, then there are additive sine-wave components in products like x(t+\tau_1)x^*(t+\tau_3) and x(t+\tau_2)x^*(t+\tau_4). This just means that the lag products can be represented by expressions of the form

\displaystyle x(t+\tau_1)x^*(t+\tau_2) = \sum_{\alpha_2} A(\alpha_2, \tau_1, \tau_2) e^{i 2 \pi \alpha_2 t} + r(t, \tau_1, \tau_2) \hfill (3)

where

\displaystyle \lim_{T\rightarrow\infty} \frac{1}{T} \int_{-T/2}^{T/2} r(t, \tau_1, \tau_2) e^{-i 2 \pi \alpha t} \, dt = 0 \hfill (4)

for all real numbers \alpha.
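As a concrete illustration of Equation (3), here is a short hypothetical Python sketch (my own code, not from the post) that forms a second-order lag product of a rectangular-pulse BPSK signal and Fourier-transforms it. A strong spectral line appears at the symbol-rate cycle frequency, while the residual r(t, \tau_1, \tau_2) contributes only a low broadband floor:

```python
import numpy as np

rng = np.random.default_rng(0)

sps, n_sym, fc = 10, 2000, 0.2          # 10 samples/symbol -> symbol rate 0.1
a = np.repeat(rng.choice([-1.0, 1.0], n_sym), sps)   # rectangular pulses
t = np.arange(n_sym * sps)
x = a * np.exp(2j * np.pi * fc * t)     # complex-baseband BPSK

# Lag product x(t + tau_1) x^*(t + tau_2) with tau_1 = 0, tau_2 = 5 samples.
tau = 5
lp = x[:-tau] * np.conj(x[tau:])
lp = lp[: (len(lp) // sps) * sps]       # keep a whole number of symbol periods

# Fourier coefficients of the lag product: impulses at the cycle
# frequencies (here 0 and odd multiples of 0.1) ride on a residual floor.
LP = np.abs(np.fft.fft(lp)) / len(lp)
k_sym = len(lp) // sps                  # bin at alpha = 0.1 exactly
peak, floor = LP[k_sym], np.median(LP)
```

The peak at \alpha = 0.1 corresponds to the A(\alpha_2, \tau_1, \tau_2) term in Equation (3); the median level across bins estimates the residual, which averages to zero by Equation (4).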

So if the signal exhibits SOCS, there are sine waves present in the second-order lag products, and therefore there must be additive sine-wave components in the fourth-order product. But what is new in the fourth-order product that isn’t simply a result of a product of second-order sine-wave components, if anything? This is the fundamental question that drives the mathematical development of higher-order cyclic cumulants in My Papers [5,6].

Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.

25 thoughts on “Introduction to Higher-Order Cyclostationarity”

  1. “Although the patterns for QPSK and 16QAM are the same, the particular values of the cumulants differ. So these four signals are indistinguishable using SOCS, but are clearly distinguishable using HOCS.”

    Could you elaborate on this concept? I have been looking into higher-order statistics to classify higher-order PSK. When I compute a reduced-dimension 4th-order temporal moment function for QPSK, 8PSK, and 16PSK, they all appear to result in similar surfaces.

    1. I advocate using the cyclic cumulants instead of the cyclic moments. The moments for PSK tend to be similar, for the cycle frequencies that the signals have in common.

      What I meant by the quote is that if you look at the cyclic-cumulant magnitudes for a set of cycle frequencies that include various orders n and numbers of conjugations m, you’ll get distinct features. This tends to work fairly well for digital QAM. If you restrict your attention to PSK, the situation gets a little more restrictive.

      First, the pattern of cycle frequencies is radically different between BPSK, QPSK, and 8PSK. So these are quite easy to distinguish just based on observing which cycle frequencies are present. These topics are covered in some detail (perhaps not enough, I’m not sure) in my three Asilomar papers (My Papers [25] [26] [28]).

      The general form of the cycle frequencies for digital QAM/PSK can be found there (and in some of my other papers, and in the work of Octavia Dobre):

      A = (n-2m)fc + k/T0

      where fc is the carrier offset frequency in the complex-baseband model and 1/T0 is the symbol rate. This formula covers all possible cycle frequencies for IID QAM/PSK, but the actual cycle frequencies exhibited by a particular signal depend on the cumulants of the symbol constellation. This is what leads to different cycle-frequency patterns for BPSK, QPSK, and 8PSK when you consider cumulants up through order n=4.
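      As a quick sanity check on that formula, here is a tiny hypothetical Python helper (my own naming, not from the papers) that enumerates the candidate cycle frequencies alpha = (n-2m)*fc + k/T0:

```python
def cycle_freqs(fc, T0, n, m, k_range):
    """Candidate nth-order cycle frequencies for an IID QAM/PSK signal
    with m conjugated factors: alpha = (n - 2m)*fc + k/T0."""
    return [(n - 2 * m) * fc + k / T0 for k in k_range]

# Example: carrier offset fc = 0.05, symbol period T0 = 10 samples.
alphas_40 = cycle_freqs(0.05, 10.0, n=4, m=0, k_range=range(-1, 2))
# -> approximately [0.1, 0.2, 0.3], i.e., 4*fc and 4*fc -/+ 1/T0
alphas_21 = cycle_freqs(0.05, 10.0, n=2, m=1, k_range=range(0, 2))
# -> approximately [0.0, 0.1]: the familiar non-conjugate cycle frequencies
```

      Which of these candidates are actually exhibited depends on the constellation cumulants, as noted above.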

      However, 8PSK and MPSK, M>8, have identical cycle-frequency patterns up through order n=6. 8PSK differs from MPSK M>8 once you get to order n=8. So distinguishing 8PSK from 16PSK is not easily done using cyclic cumulants—you have to be willing to apply 8th-order nonlinearities to your data.

      To repeat, though, distinguishing BPSK, QPSK, and 8PSK from each other is relatively easy, and only requires cumulants up through order four.

      Back to the quoted statement, QPSK and 16QAM share the exact same set of cycle frequencies, but their cyclic-cumulant magnitudes differ, so that even though their patterns are the same, the sets of cumulants can be distinguished.

      Here are the cumulants of some common constellations (taken from one of the Asilomar papers):


      I suppose I need to do a post just on the cumulants of PSK/QAM…
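      Until such a post exists, a quick hypothetical numerical check (my own code) of the fourth-order, zero-conjugate constellation cumulant C(4,0) = E[s^4] - 3(E[s^2])^2 reproduces the kinds of values in that table: -1 for unit-power QPSK, 0 for 8PSK, and -0.68 for unit-power 16QAM:

```python
import numpy as np

def c40(s):
    """Fourth-order, zero-conjugate cumulant of a zero-mean
    constellation: C(4,0) = E[s^4] - 3 (E[s^2])^2."""
    return np.mean(s ** 4) - 3 * np.mean(s ** 2) ** 2

# Unit-power constellations.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
psk8 = np.exp(2j * np.pi * np.arange(8) / 8)
pts = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
qam16 = pts / np.sqrt(np.mean(np.abs(pts) ** 2))   # normalize to unit power

vals = {name: c40(s) for name, s in
        [("QPSK", qpsk), ("8PSK", psk8), ("16QAM", qam16)]}
```

      The vanishing 8PSK value is why 8PSK exhibits no (4,0) cycle frequencies while QPSK does.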

      1. Thanks for the great info. Question: 8PSK and 4PSK have different cyclic frequencies, or the same frequencies with different magnitudes?

        I am working through generating a 4th order cumulant n,m = 4,2 for a given signal. I want to do reduced dimensionality, as you have described, so I have fixed tau1 and tau2, while allowing tau3 and tau4 to vary.

        My general CTCF algorithm is to iterate through tau3 and tau4, shifting the data, perform multiplies and adds based on the partitioning, and an FFT for the Fourier coefficients. For the moment-to-cumulant relationship I have it simplified to this (x1 is the input delayed by tau1):

        fft(x1 * x2 * conj(x3 * x4) – abs(x1 * x2)^2 – 2*(x1*conj(x2))^2)

        Not 100% sure that is correct, I find the partition summation formula a bit confusing.

        Looking at this, I generally see a big DC spike, and a spike at the symbol rate Rs=Fs/To. For M-PSK, M=2,4,8, basically the only difference that is noticeable is a magnitude change along the frequency plane. Should I see different pure cyclic frequencies pop up as I increase the M order? Should I be looking at different conjugate values, or varying tau differently?

        Thanks again!

        1. Question: 8PSK and 4PSK have different cyclic frequencies, or the same frequencies with different magnitudes?

          Different cycle frequencies. They have identical cycle frequencies and cumulant magnitudes for n=2, but they have different cycle frequencies starting with n=4. For (n,m) = (4,0), QPSK has cycle frequencies 4fc, 4fc+1/T0, and 4fc-1/T0. But 8PSK has none of those.

          For the moment to cumulant relationship I have it simplified to this (x1 is the input delayed by tau1):

          fft(x1 * x2 * conj(x3 * x4) – abs(x1 * x2)^2 – 2*(x1*conj(x2))^2)

          Not 100% sure that is correct, I find the partition summation formula a bit confusing.

          If you want to look at the FFT of the fourth-order product, but with the lower-order moments subtracted off, in order to find the cumulant cycle frequencies, you should subtract products of MOMENTS from the fourth-order lag product.

          fft(x1*x2*conj(x3)*conj(x4) - Rx(t,[t1,t2];2,0)*Rx(t,[t3,t4];2,2) - Rx(t,[t1,t3];2,1)*Rx(t,[t2,t4];2,1) - Rx(t,[t1,t4];2,1)*Rx(t,[t2,t3];2,1))

          which follows from the partitions definitions and the cumulant-in-terms-of-moments formula.
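          A quick way to validate this kind of moment-to-cumulant bookkeeping (hypothetical code of mine, at the single-variable, zero-delay level rather than the full time-varying version) is to check two cases with known answers: a circular complex Gaussian, whose fourth-order cumulant is zero, and unit-power QPSK symbols, whose (4,2) cumulant is -1:

```python
import numpy as np

rng = np.random.default_rng(7)

def c42(s):
    """cum(s, s, s*, s*) for a zero-mean variable via the
    moment-to-cumulant formula. The three pair partitions give
    E[ss]E[s*s*] once and E[ss*]E[ss*] twice, mirroring the three
    subtracted moment products above."""
    m4 = np.mean(np.abs(s) ** 4)     # E[s s s* s*]
    m20 = np.mean(s ** 2)            # E[s s]
    m21 = np.mean(np.abs(s) ** 2)    # E[s s*]
    return m4 - np.abs(m20) ** 2 - 2.0 * m21 ** 2

n = 200_000
g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
q = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))

c_gauss = c42(g)   # should be near 0: Gaussians have no higher-order cumulants
c_qpsk = c42(q)    # should be near -1 for unit-power QPSK
```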

          Should I see different pure cyclic frequencies pop up as I increase the M order? Should I be looking at different conjugate values, or varying tau differently?

          Not if you maintain n = 2m. That is, you’ll see the same cycle frequencies for the three signals for each of (n,m) = (2, 1), (4,2), and (6,3). But try (n,m) = (4, 0).

  2. Hi, Professor Spooner,
    When I compute the C40 of BPSK and 16QAM using the FFT, I find that there are nonzero values at 2*fc and 2*fc +/- Rs/2 for BPSK. But your papers say that the peaks should only appear at (n-2m)*fc + k/T. In my opinion, peaks should also appear at 2*fc. Besides, I can’t understand why the C40 of 16QAM is 0.68 at 4*fc, because in my simulation the C40 of 16QAM is about 44 and the C40 of BPSK is about 1.26. Can you give me some advice?
    Thanks.
    Looking forward to your reply.

    Bessy

    1. Hi Bessy.

      When I compute the C40 of BPSK and 16QAM using the FFT,

      Do you mean R40? You need to do quite a bit more than use an FFT to compute the fourth-order cumulant. Also, we typically use the notation “C40” to mean the temporal cumulant.

      I find that there are nonzero values at 2*fc and 2*fc +/- Rs/2 for BPSK

      Well, if you Fourier transform x^4(t), and the signal in x(t) is BPSK, you should see a large peak at 4 times the carrier and two other peaks offset from that peak by +/- Rsym. The (4, 1) moment (R41) does have the doubled carrier though.

      I can’t understand why the C40 of 16QAM is 0.68 at 4*fc, because in my simulation the C40 of 16QAM is about 44 and the C40 of BPSK is about 1.26

      I think you saw the 0.68 number in a table in the QAM/PSK post. Those values were the cumulants for the symbol variable, not the whole QAM signal. The full cumulant also has a contribution from the integral of the product of the shifted pulse functions. For example, for 16QAM with root-raised-cosine pulses having roll-off of 0.35, the C(4,0,0) value for all delays equal to zero is 0.57. It varies with the roll-off.

      I think you’re just not quite computing the temporal cumulant yet. Are you subtracting the appropriate products of lower-order (second order) cyclic moments?

  3. Thank you for your reply. I’m sorry that my description was not very clear. My purpose is to compute C(4,0,0). I subtracted the products of second-order cyclic moments R20 from the fourth-order cyclic moment R40. That is, C(4,0,0) = F[x*x*x*x] - 3*[F(x*x)]*[F(x*x)], where F is the Fourier transform matrix and x is the input signal BPSK. I meant that I also found a large peak at 2 times the carrier and two other peaks offset from it by +/- 0.5*Rsym, which resulted from the second term of my equation. So I don’t understand why the paper says that the peaks should appear only at (n-2*m)*fc + k*Rsym.
    Thanks.
    Looking forward to your reply.
    Bessy

    1. My purpose is to compute C(4,0,0). I subtracted the products of second-order cyclic moments R20 from the fourth-order cyclic moment R40. That is, C(4,0,0) = F[x*x*x*x] - 3*[F(x*x)]*[F(x*x)], where F is the Fourier transform matrix and x is the input signal BPSK.

      I assume that your ‘F’ is MATLAB’s dftmtx.m. Let me know if that is incorrect.

      I think your problem is that you are not computing a cyclic cumulant. You need to compute cyclic moments first, and combine them properly to compute the cyclic cumulant. It is not simply the product of some Fourier transforms. For C(4,0,0), which means the fourth-order cumulant with 0 conjugated terms and a cycle frequency of 4fc + 0*fsym, you need to combine lower-order cyclic moments whose cycle frequencies sum to 4fc.
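      Here is a hypothetical numerical sketch (my own code) of that combination for a rectangular-pulse BPSK signal at all-zero delays: the (4,0) cyclic moment at 4fc, combined with the pair partitions (three copies of the squared (2,0) cyclic moment at 2fc, whose cycle frequencies 2fc + 2fc sum to the target 4fc), yields the known BPSK cumulant value of -2:

```python
import numpy as np

rng = np.random.default_rng(3)

sps, n_sym, fc = 10, 1000, 0.2
a = np.repeat(rng.choice([-1.0, 1.0], n_sym), sps)   # rectangular-pulse BPSK
t = np.arange(n_sym * sps)
x = a * np.exp(2j * np.pi * fc * t)

def cyclic_moment(y, alpha, t):
    """Cyclic-moment estimate at all-zero delays: the time average
    of the lag product y(t) against exp(-i 2 pi alpha t)."""
    return np.mean(y * np.exp(-2j * np.pi * alpha * t))

r20 = cyclic_moment(x ** 2, 2 * fc, t)   # (2,0) cyclic moment at 2*fc
r40 = cyclic_moment(x ** 4, 4 * fc, t)   # (4,0) cyclic moment at 4*fc

# The three pair partitions each contribute a product of two (2,0)
# cyclic moments at 2*fc; for this rectangular-pulse, zero-delay case
# the other lower-order cyclic moments vanish.
c40 = r40 - 3.0 * r20 ** 2               # close to -2, the BPSK C(4,0) value
```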

  4. Dr. Spooner,

    I just wanted to thank you for this article. This really cleared up a lot of misconceptions I had about HOCS features, especially the generalization of SOCS to HOCS. Thanks again!

    1. You’re welcome. It would help me, and perhaps some of my readers, if you might give us an idea of what the misconceptions were and maybe how you came to them. Just if you care to, and have the time. Thanks…

      1. Hi Dr. Spooner,

        Sure, here are a few things that stood out to me the most in your article:

        1. The need for HOCS features/statistics didn’t click until you mentioned how many communication schemes exhibit similar (or no) SOCS features. As a result, in order to get a clearer picture and properly classify a detected signal, we need more information about it, which is why we need to delve into these HOCS cyclic-cumulant calculations.

        2. As we calculate HOCS features of a detected signal, we will tend to see these sinusoidal/tonal components show up. These components appear at a particular fundamental cyclic frequency, which becomes evident as we compute higher-order cumulants at different degrees of conjugation. The plots shown for the digital QAM-like signals’ HOCS statistics show the magnitudes of these sinusoidal components (color of box) for a particular harmonic of the fundamental cyclic frequency (vertical axis) at an nth-order cyclic cumulant with a particular degree of conjugation (horizontal axis). The overall shape of the cyclic cumulant is not what’s being recorded in your plots; what’s being recorded are those tones that you managed to observe and their related magnitudes.

        3. The idea of calculating cyclic cumulants is analogous to calculating a random process’s moments or even taking the derivative of a random process’s characteristic function.

        These points sum up the major things I misunderstood before I read this page. The realization and recording of the sinusoidal tones is what’s important when estimating HOCS statistics.

  5. Hi, Dr. Spooner,
    I want to take advantage of the cyclic cumulant functions to classify communication signals that overlap in the time domain. There are some questions I cannot totally understand.
    First, does the sine-waves operator E{alpha} mean that we should take the Fourier transform of the signal, extract the discrete impulses by filtering in the frequency domain, and take the inverse Fourier transform? If that is true, I think it is not necessary to apply that operator to a periodic signal. However, for an aperiodic signal, it seems more complicated. Therefore, I use an expectation operation to replace it; that is, I divide the signal into several segments, add them up, and then divide by the number of segments. I am not sure this processing is correct. Can you give me some advice about how to deal with this operator in practice? In addition, does it mean that after the sine-waves operator E{alpha}, only the sine waves of the signal remain?
    Second, this is also about the sine-waves operator. In your paper, you derive in detail the second-order TMF (R(t,tau)) of a signal that consists of an AM signal and a sine wave, that is, x(t) = a(t)*cos(w1*t) + cos(w2*t), where a(t) is stationary. When computing its R(t,tau), the result includes the autocorrelation function of a(t). Does that mean that when processing a stationary signal, we can replace E{alpha} with the expectation operation? However, in the beginning, you say that the E{alpha} operator needs just one realization of the signal.
    Third, the paper says that the TCF has signal selectivity and that the TMF does not obey this very useful cumulative relation. When I simulate the Fourier transforms of the fourth-order TMF and TCF of a signal consisting of a BPSK and a 16QAM signal, I find that the CTMF is identical to the CTCF. So how can I determine which function is the one I need?
    Looking forward to your reply.
    Thanks.

    1. First, does the sine waves operator E{alpha} mean that we
      should make the Fourier transform of the signal, extract
      the discrete impulse by filtering in the frequency domain
      and make the inverse Fourier transform of it?

      Yes, except you are a little vague about the word “signal.” You can
      use the Fourier transform to find the additive sine-wave components provided
      they are sufficiently strong. That is, E^{\{\alpha\}} (y(t)) can be
      approximated by Fourier transforming y(t) and isolating the strong
      peaks, then inverse Fourier transforming.

      If it is true, I think it is not necessary for the periodical
      signal to execute that operator. However, for the aperiodic
      signal, it seems some complicated. Therefore, I use the
      expectation operation to replace it, that is, dividing the
      signal into several segments, adding them up and then
      dividing the numbers of segments. I am not sure the processing
      is correct.

      Right, if y(t) is periodic, then E^{\{\alpha\}} (y(t)) = y(t).
      I don’t understand your recipe here. Dividing the signal into several
      segments and averaging them will not reveal the periodic component
      unless the segment length matches the period, or a multiple of it.

      Can you give me some advice about how deal with this
      operator in practice?

      I generally don’t use it in signal-processing practice. I view it as
      more of a theoretical tool.

      In addition, does it mean that after the sine waves operator
      E{alpha}, the signal retain the sine waves only?

      Yes. The output signal from the operator is the periodic or almost-periodic
      component of its input, which by the Fourier series is nothing more than a sum
      (possibly infinite) of weighted sine waves.

      Second, it is also about the sine waves operator.
      In your paper, you deduce the 2 order TMF (R(t,tau)2) of the
      signal in detail that consists of an AM signal and a sine waves,
      that is, x(t) = a(t)*cos(w1*t)+cos(w2*t), a(t) is stationary.
      When computing the R(t,tau)2 of it, the result includes the
      autocorrelation function of the a(t). Does that mean that when
      processing the stationary signal, we can replace the
      E{alpha} by expectation operation?

      Yes, if y(t) is stationary, then the operator will simply reduce
      to the operator that extracts the time-invariant component, which is the
      average value. That result will correspond to the stochastic expected value
      provided y(t) possesses sufficient ergodic properties.

      However, in the beginning, you say that the E{alpha}
      operator just needs one realization of the signal.

      Yes. There is always the tension between the stochastic process theory,
      which involves the hypothetical ensemble of sample functions and the
      fraction-of-time (frequentist) theory, which involves the hypothetical
      single infinite-duration sample path. To bridge them requires understanding
      the ergodic (cyclo-ergodic) properties of the process. See The Literature
      [R67], [R68].

      Third, the paper says that TCF have signal selectivity
      and the TMF does not obey this very useful cumulative
      relation. When I simulate the Fourier transform of the
      4th-order TMF and TCF of a signal consisting of a BPSK and
      a 16QAM signal, I find that the CTMF is identical with
      the CTCF. So how can I determine which function is what
      I need?

      I don’t quite follow the set-up to the question. Are you saying that
      you have a signal x(t) that is the sum of a BPSK and a 16QAM
      signal? And for that signal, the TMF and TCF are identical? Do they
      share any cycle frequencies?

      I also don’t understand

      Fourier transform of the 4-oder TMF and TCF

      Are you saying you’ve successfully computed the TMF and TCF and are
      now looking at their Fourier transforms? How do you know that you
      have the correct TMF and TCF?

      I suggest sticking with a single signal in a small amount of noise
      until you are sure that you understand all the parameters, and that
      your code produces estimates that are consistent with correct understanding.
      Then start looking at combinations.

  6. I am sorry that I forgot to ask my last question earlier. That is, there are two ways to compute the CTCF: one way is taking the FT of the TCF, another is combining the related CTMFs. About the second way, I do not understand the relation between the CTMF and the CTCF, especially the new cycle frequencies and the old. Can you give me some suggestions?
    Thanks again.

    1. That is, there are two ways to compute the CTCF, one
      way is taking the FT of TCF, another is combining the
      related CTMF. About the second way, I do not understand the
      relation between CTMF and CTCF, especially the new cycle
      frequency and the old. Can you give me some suggestions?

      Both methods require the use of the basic moment-to-cumulant
      formula, which shows how a cumulant can be expressed as a
      particular sum of weighted products of lower-order moments.

      In the case of the TCF, you have to combine various TMFs to
      find the TCF, and in the case of the CTCF, you have to combine
      various CTMFs.

      The relevant formulas are in the post on the CTCF (cyclic cumulants).

      The core idea is that to find the nth-order cyclic cumulant
      (CTCF) with cycle frequency \beta, you have to properly combine
      all lower-order cyclic moments (CTMFs) whose orders sum to n
      and whose cycle frequencies sum to \beta. The presence of arbitrary
      conjugated factors in the set of variables causes significant notational
      and conceptual confusions, though.

  7. Hi, Professor Spooner,
    I have trouble doing the estimation of fourth-order cyclic cumulants. So, can you please give us the code?

  8. Greetings again Chad; you have one comment here about HOCS that has me intrigued (the duobinary comment); do you see much effect of pulse shape filtering on the statistical significance of any of the SOCS or HOCS peaks? With some (simplified) processes I’m running I seem to see the peaks weaken as the SRRC coefficient approaches 0.25 or 0.2 (compared with a more typical 0.35; though there is a trend towards lower values as bandwidth efficiency becomes more of a premium). Particularly with 8PSK, as taking things to the 8th power of course is the most noise sensitive. Intuitively this makes some sense as I know that clock recovery in the modems (which is similar to this processing) gets harder as the roll off gets sharper. Any thoughts?

    1. Yes, I do see a strong effect on cycle-frequency detection due to decreasing roll-off values in square-root raised-cosine PSK/QAM. When the roll-off R goes to zero, the non-conjugate spectral correlation function for the symbol-rate cycle frequency goes to zero (as a function of frequency f). So, for the same signal and noise power levels, the feature to be detected gets smaller and smaller relative to noise-induced and self-induced variance in the spectral correlation function estimate.

      Some of this is captured mathematically and quantitatively in my post on square-root raised-cosine PSK/QAM, which can be found here.

  9. Hi Chad,

    Great post! I was wondering how you made the plots for the cyclic cumulant functions for the different signals. As I understand it, the cumulants are also a function of the delay variable tau, which is not shown on the plots. I would appreciate your clarification on this.

    Thanks

    1. MS: Thank you for your interest and the comment.

      I’ve updated the post to explain a little bit about the delay vector \boldsymbol{\tau}. Yes, the cyclic moments and cyclic cumulants are functions of a delay vector \boldsymbol{\tau} = [\tau_1\ \tau_2\ \ldots \ \tau_n]. For the cyclic-cumulant plots in the post, I used the all-zero delay vector for each order n. It turns out that the cyclic-cumulant magnitudes are almost always maximum for that choice of delay vector. Exceptions include PSK/QAM with rectangular pulses.

  10. Hi Chad

    Given this statement (copied from above): “which is a function of the delay vector \boldsymbol{\tau} = [\tau_1\ \tau_2\ \tau_3\ \tau_4]. But here is the problem: If the signal does have SOCS, then there are additive sine-wave components in products like x(t+\tau_1)x^*(t+\tau_3) and x(t+\tau_2)x^*(t+\tau_4)”, and so there must be additive sine-wave components in the fourth-order product.

    What does it mean to have additive sine-wave components? If we multiply signals in the time domain, this is equivalent to convolution in the frequency domain. Multiplying four signals is equivalent to four convolutions in the frequency domain. Where do the additive sine-wave components from the lower, second-order multiplies remain after the fourth-order multiplies? Thanks

    Joe Scanlan

    1. Joe:

      I updated the post to explain a little bit more about what I mean by “finite-strength additive sine-wave components.” These remarks about lower-order and higher-order lag products and their sine-wave components are remarks about time-domain behavior using infinite-time averages. Moreover, the signals involved are power signals, not energy signals, and so the convolution theorem (“multiplication in the time domain is convolution in the frequency domain”) is difficult to apply: cyclostationary signals do not possess, in general, Fourier transforms. That’s why we have to work so much with infinite-time averages or the stochastic expectation operator.

      Let me know if my clarifying comments are sufficient. Also, remember that the “Introduction to Higher-Order Cyclostationarity” post is more qualitative in nature. The “Cyclic Temporal Cumulants” post is more mathematical and might very well help you with your question here.

  11. Hi Chad,

    I have implemented a ‘tetris’ plot generator function that produces HOC plots similar to the ones you have above. I would share photos but I’m not savvy enough with this interface to figure out how to upload them. 🙂

    My tetris plots for QPSK, 8PSK, and 16QAM look very much like your theoretical plots (except for what looks like low-power noise in some bins), but my piBy4-DQPSK looks nothing like yours. The results I’m getting look much more like the 16QAM. Is it correct to assume you are using SRRC pulse shaping for all of the signals that you plotted?

    FYSA, I have generated these plots using the moment-to-cumulant formula (equation 35 in the Cyclic Temporal Cumulants post), the direct method (equation 32 in the Cyclic Temporal Cumulants post), and using the DQAM pmf (equation 4 in Cyclostationarity of Digital QAM and PSK). None of these methods produce the correct tetris plot for piBy4-DQPSK. I have also tried passing your “scene_dqpsk” binary signal from your signal gallery through each of the functions, but this produces nothing significantly different.

    Could you please explain how you produced your piBy4-DQPSK plot, or at least point me in the right direction?

    Edit by the CSP Blog Management: WordPress does not allow commenters to insert images. Here are the images Laura wished to insert into this comment:

    Constellation for \pi/4-DQPSK:


    “Tetris” plot for QPSK:


    “Tetris” plot for \pi/4-DQPSK:


    1. Hey Laura! Thanks for stopping by the CSP Blog and the great question.

      Your constellation looks right to me, as does the “tetris” plot (cyclic-cumulant magnitudes for delay vectors that are all zeros) for QPSK. The reason that your \pi/4-DQPSK tetris plot differs from mine is likely that you are not using the correct cycle frequencies for \pi/4-DQPSK. It does not exhibit the same \alpha(n, m, k) cycle frequencies as MPSK, MQAM, or anything else, really. Have you mathematically analyzed the signal to determine its true cycle frequencies?
