In this post let’s consider the difference between modeling a communication signal as stationary or as cyclostationary.
There are two contexts for this kind of issue. The first is when someone recognizes that a particular signal model is cyclostationary, and then takes some action to render it stationary (sometimes called ‘stationarizing the signal’). They then proceed with their analysis or algorithm development using the stationary signal model. The second context is when someone applies stationary-signal processing to a cyclostationary signal model, either without knowing that the signal is cyclostationary, or perhaps knowing but not caring.
At the center of this topic is the difference between the mathematical object known as a random process (or stochastic process) and the mathematical object that is a single infinite-time function (or signal or time-series).
A related paper is The Literature [R68], which discusses the pitfalls of applying tools meant for stationary signals to the samples of cyclostationary signals.
Stationarizing a Random Process Using Phase Randomization
Suppose we have our old friend the textbook pulse-amplitude modulated (PAM) signal given by

$$x(t) = \sum_{k=-\infty}^{\infty} a_k\, p(t - kT_0), \qquad (1)$$

where $p(t)$ is the pulse function (for example, rectangular or square-root raised-cosine), $k$ is the symbol index, $T_0$ is the symbol interval, and $\{a_k\}$ is the symbol sequence, typically drawn from a finite alphabet such as $\{-1, +1\}$. Textbook signals employ independent and identically distributed symbols. Figure 1 shows some examples of pulse functions that we'll encounter throughout this post.
[Figure 1: The pulse functions considered in this post (rectangular, half-cosine, Manchester, square-root raised-cosine, and raised-cosine).]
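If you want to experiment alongside the post, here is a minimal Python sketch of the PAM model (1). The symbol interval of ten samples and the particular pulse shapes are my own illustrative choices, not the parameters behind the figures.

```python
import numpy as np

rng = np.random.default_rng(1)

T0 = 10            # symbol interval in samples (hypothetical choice)
n_sym = 1000       # number of symbols

# i.i.d. binary symbols from {-1, +1}: the BPSK version of (1)
a = rng.choice([-1.0, 1.0], size=n_sym)

# Two example pulse functions, each supported on [0, T0)
p_rect = np.ones(T0)                              # rectangular
p_halfcos = np.sin(np.pi * np.arange(T0) / T0)    # half-cosine

def pam(symbols, pulse, T0):
    """Construct x(t) = sum_k a_k p(t - k*T0) by upsampling then filtering."""
    up = np.zeros(len(symbols) * T0)
    up[::T0] = symbols
    return np.convolve(up, pulse)[: len(symbols) * T0]

x_rect = pam(a, p_rect, T0)
x_halfcos = pam(a, p_halfcos, T0)
```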
The signal (time-series) in (1) is cyclostationary, by our lights, as we've seen in many posts at the CSP Blog. For example, for the binary alphabet $\{-1, +1\}$, $x(t)$ is the complex envelope of an RF BPSK signal. When the alphabet is the set of four phases $\{e^{i\pi/4}, e^{i3\pi/4}, e^{i5\pi/4}, e^{i7\pi/4}\}$, the signal is QPSK.
In conventional stochastic process theory, $x(t)$ is conceived of as an ensemble of signals in some probabilistic sample space. For each possible sequence $\{a_k\}$, there is a corresponding $x(t)$, called a sample path (or just 'signal'). When you calculate probabilistic parameters, such as the mean, moments, and cumulants, you do so by averaging the sample paths–which is called averaging over the ensemble. This is done by applying the expectation operator $E[\cdot]$. For $x(t)$ in (1) above, averaging over the ensemble produces periodically time-varying results (cyclostationarity). Let's look at the autocorrelation for the BPSK signal version, for example,

$$R_x(t, \tau) = E\left[x(t+\tau/2)\, x^*(t-\tau/2)\right] = \sum_k \sum_{k'} E\left[a_k a_{k'}^*\right] p(t+\tau/2-kT_0)\, p^*(t-\tau/2-k'T_0) = \sum_k p(t+\tau/2-kT_0)\, p^*(t-\tau/2-kT_0),$$

which follows because $E[a_k a_{k'}^*] = 1$ if $k = k'$ and is zero otherwise for BPSK, which itself is a consequence of our assumption that the symbols are independent and identically distributed random variables. Is $R_x(t, \tau)$ periodic? Yes, with period $T_0$. Replace $t$ with $t + T_0$ and rename the summation index, and you have the same quantity. If the autocorrelation were instead independent of time, and the mean were too, the process would be referred to as stationary.
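We can see this periodicity numerically by approximating the ensemble average with a Monte Carlo average over many independently drawn symbol sequences. Here's a sketch that uses the simpler asymmetric lag product $x(t)x^*(t+\tau)$ and my own parameter choices:

```python
import numpy as np

rng = np.random.default_rng(2)
T0, n_sym, n_trials, tau = 10, 100, 2000, 3
N = T0 * n_sym
p = np.sin(np.pi * np.arange(T0) / T0)   # half-cosine pulse

acc = np.zeros(N - tau)
for _ in range(n_trials):
    a = rng.choice([-1.0, 1.0], size=n_sym)
    up = np.zeros(N)
    up[::T0] = a
    x = np.convolve(up, p)[:N]
    acc += x[: N - tau] * np.conj(x[tau:])
R = acc / n_trials                        # estimate of E[x(t) x*(t + tau)]

# The estimate repeats (to within estimation error) every T0 samples:
mid = R[40 * T0 : 60 * T0]
print(np.max(np.abs(mid[:T0] - mid[T0 : 2 * T0])))   # small residual
```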
People have lots of tools to apply when they have a stationary process, so in the past many looked for ways to render a non-stationary process stationary ('stationarize') to enable the fruitful use of those tools. We can stationarize our PAM signal by adding a random variable to the pulse train. Let's call it $t_0$ (because it relates to time). The new signal is given by

$$y(t) = \sum_{k=-\infty}^{\infty} a_k\, p(t + t_0 - kT_0),$$

where the random variable $t_0$ has a uniform distribution on the interval $[0, T_0]$, for example. Is this signal stationary? To check, let's compute the first- and second-order moments both with and without the phase-randomizing variable $t_0$.
Stationarizing the Mean
We start with the mean, or first-order moment, of $x(t)$. Using the original signal, we have

$$E[x(t)] = E\left[\sum_k a_k\, p(t - kT_0)\right] = \sum_k E[a_k]\, p(t - kT_0) = \bar{a} \sum_k p(t - kT_0). \qquad (8)$$

So this mean value depends on time and is also periodic for all $p(t)$ and nontrivially periodic for some choices for $p(t)$, assuming that $\bar{a} = E[a_k] \neq 0$. For most pulse-amplitude-modulated signals, it is indeed true that $\bar{a} = 0$, but not for some, such as on-off keying (OOK), where the symbols are drawn from the set $\{0, 1\}$. In any case, what do I mean by nontrivially periodic?
Let's think about several different pulse functions $p(t)$, as shown in Figure 1.
First, consider the highly textbook-ish case of rectangular $p(t)$. Here $p(t)$ is equal to one for $t \in [0, T_0)$, and is zero otherwise. In such a case

$$\sum_k p(t - kT_0) = 1$$

for all $t$, and therefore the mean value is independent of time. But for a more realistic pulse like the half-cosine pulse, the sum of shifted pulses in (8) leads to a periodic function that is not simply a constant: it is nontrivially periodic. For a fixed nonzero symbol mean $\bar{a}$ and symbol interval $T_0$, we have the mean values (8) for our considered pulses in Figures 2-6.
[Figures 2-6: The mean value (8), the sum of shifted pulses, for each of the considered pulse functions.]
For the practical square-root raised-cosine pulse, and the related raised-cosine (‘Nyquist’) pulse, it turns out that the sum of shifted pulses is very close to a constant, but for the half-cosine and Manchester pulses, the sum is clearly nontrivially periodic. They are all periodic, but some are either exactly a constant or very close to a constant, and those are trivially periodic.
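A quick numerical way to see the trivial-versus-nontrivial distinction is to evaluate the sum of shifted pulses in (8) directly. A sketch (parameters are mine; the rectangular sum is exactly constant, the half-cosine sum is not):

```python
import numpy as np

T0 = 10

def shifted_pulse_sum(pulse_fn, n_shifts=50):
    """Evaluate sum_k p(t - k*T0) on one period t in [0, T0)."""
    t = np.arange(T0, dtype=float)
    total = np.zeros(T0)
    for k in range(-n_shifts, n_shifts + 1):
        total += pulse_fn(t - k * T0)
    return total

rect = lambda t: ((t >= 0) & (t < T0)).astype(float)
halfcos = lambda t: np.where((t >= 0) & (t < T0), np.sin(np.pi * t / T0), 0.0)

print(np.ptp(shifted_pulse_sum(rect)))      # 0.0: trivially periodic (constant)
print(np.ptp(shifted_pulse_sum(halfcos)))   # clearly nonzero: nontrivially periodic
```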
The periodicities give rise to spectral lines, which we can visualize by taking the Fourier transform of the mean (8) and plotting its magnitude. For the rectangular and half-cosine pulses, we obtain the plots in Figures 7 and 8.
[Figures 7-8: Magnitude of the Fourier transform of the mean (8) for the rectangular and half-cosine pulses, showing the spectral lines.]
We can force the probabilistic mean to be independent of time for any pulse using the phase-randomizing variable $t_0$. Let's go through the math.
Our random process that includes a phase-randomizing random variable is now given by

$$y(t) = \sum_{k=-\infty}^{\infty} a_k\, p(t + t_0 - kT_0), \qquad (9)$$

where the probability density function for $t_0$ is $f_{t_0}(u)$, which we will make specific later. The goal is to find the expected value of $y(t)$, and then see how the distribution of $t_0$ (that is, $f_{t_0}(u)$) affects that expected value–for which distributions is the expected value of $y(t)$ a constant function of time? Applying the expectation operator to the signal starts us off,

$$E[y(t)] = E\left[\sum_k a_k\, p(t + t_0 - kT_0)\right] = \bar{a}\, E\left[\sum_k p(t + t_0 - kT_0)\right], \qquad (12)$$

which follows by statistical independence between $a_k$ and $t_0$ and the linearity of the expected value operator.
For any value of $t_0$, the function within the brackets in (12) is periodic in $t$ with period $T_0$, so it can be exactly represented as a Fourier series

$$\sum_k p(t + t_0 - kT_0) = \sum_{n=-\infty}^{\infty} c_n\, e^{i2\pi n t/T_0}$$

with Fourier coefficients

$$c_n = \frac{1}{T_0}\int_{T_0} \left[\sum_k p(t + t_0 - kT_0)\right] e^{-i2\pi n t/T_0}\, dt,$$

where $\int_{T_0}$ represents integration over some interval of length $T_0$, say $[0, T_0]$. Continuing with the Fourier coefficient,

$$c_n = \frac{1}{T_0} \sum_k \int_0^{T_0} p(t + t_0 - kT_0)\, e^{-i2\pi n t/T_0}\, dt.$$

Using the substitution $u = t - kT_0$ and then renaming $u$ as $t$, we obtain the following expression for the Fourier coefficient

$$c_n = \frac{1}{T_0} \sum_k \int_{-kT_0}^{-(k-1)T_0} p(t + t_0)\, e^{-i2\pi n t/T_0}\, dt = \frac{1}{T_0}\int_{-\infty}^{\infty} p(t + t_0)\, e^{-i2\pi n t/T_0}\, dt,$$

because the shifted integration intervals are contiguous and together cover the whole real line. Next, make the substitution for $t$ given by $v = t + t_0$, then rename $v$ by $t$ again, yielding

$$c_n = \left[\frac{1}{T_0}\int_{-\infty}^{\infty} p(t)\, e^{-i2\pi n t/T_0}\, dt\right] e^{i2\pi n t_0/T_0}.$$

Let's define a new symbol for the integral of the sine-wave-weighted pulse function,

$$P_n = \frac{1}{T_0}\int_{-\infty}^{\infty} p(t)\, e^{-i2\pi n t/T_0}\, dt,$$

which means we have an expression for the Fourier series representation of the pulse train,

$$\sum_k p(t + t_0 - kT_0) = \sum_{n=-\infty}^{\infty} P_n\, e^{i2\pi n (t + t_0)/T_0}.$$

With this expression we can return to the expected value of the signal,

$$E[y(t)] = \bar{a} \sum_{n=-\infty}^{\infty} P_n\, e^{i2\pi n t/T_0}\, E\left[e^{i2\pi n t_0/T_0}\right]. \qquad (26)$$

So, what is the expected value $E[e^{i2\pi n t_0/T_0}]$? Let the random variable $t_0$ have a uniform distribution on the interval $[0, T_0]$. Then we compute the expected value easily as

$$E\left[e^{i2\pi n t_0/T_0}\right] = \frac{1}{T_0}\int_0^{T_0} e^{i2\pi n u/T_0}\, du = \frac{e^{i2\pi n} - 1}{i2\pi n} = 0, \quad n \neq 0.$$

For $n = 0$,

$$E\left[e^{i2\pi n t_0/T_0}\right] = E[1] = 1.$$

At long last, then, we have

$$E[y(t)] = \bar{a}\, P_0 = \frac{\bar{a}}{T_0}\int_{-\infty}^{\infty} p(t)\, dt. \qquad (30)$$

So, if the distribution of the phase-randomizing random variable is uniform on an interval with length equal to the symbol interval $T_0$, then the resulting random process has a mean value that is independent of time for all pulses $p(t)$.
Let's check this result before proceeding (and there is a lot more to proceed with). Keep it simple: Does this result make sense for a rectangular pulse $p(t)$? Such a pulse is equal to one on the interval $[0, T_0]$, so we can evaluate (30) by inspection: $E[y(t)] = \bar{a}\, T_0/T_0 = \bar{a}$. So the mean of the signal is the average value of the symbols, which makes good sense.
Let's now look at the case in which the distribution of the phase-randomizing random variable is uniform on the interval $[0, \Delta]$ for $\Delta > 0$ and $\Delta \leq T_0$. So we go back to the general formula (26) and evaluate the expectation with this new distribution for $t_0$.
We want to evaluate $E[e^{i2\pi n t_0/T_0}]$ for the more general uniform distribution. For $n = 0$, the expected value is one, as before, because $E[c] = c$ for any non-random constant $c$. For $n \neq 0$, we have

$$E\left[e^{i2\pi n t_0/T_0}\right] = \frac{1}{\Delta}\int_0^{\Delta} e^{i2\pi n u/T_0}\, du = \frac{T_0\left(e^{i2\pi n\Delta/T_0} - 1\right)}{i2\pi n\Delta}$$

(which checks because it is zero for $\Delta = T_0$).
Putting it all together, we have the mean value

$$E[y(t)] = \bar{a}\, P_0 + \bar{a} \sum_{n \neq 0} P_n\, \frac{T_0\left(e^{i2\pi n\Delta/T_0} - 1\right)}{i2\pi n\Delta}\, e^{i2\pi n t/T_0}. \qquad (35)$$
A remaining check on the validity of (35) is that it matches our previous result when $\Delta \rightarrow 0$ (no phase randomization). I'll leave that to you.
In conclusion, the mean is a non-constant function of time, generally, when the phase-randomizing random variable is uniform but on a smaller interval than the symbol interval $T_0$, and is a constant function of time if the phase-randomizing random variable is uniform on $[0, T_0]$. Figure 9 shows an example featuring OOK (symbol alphabet $\{0, 1\}$) and half-cosine pulses. As $\Delta$ increases to $T_0$, the time-variation of the mean vanishes. The final constant value (for $\Delta = T_0$) is equal to $\bar{a}\, P_0$, consistent with (30). The lack of symmetry is due to the lack of symmetry in the density function $f_{t_0}(u)$.
[Figure 9: The mean value of the OOK signal with half-cosine pulses for several widths $\Delta$ of the phase-randomizing distribution.]
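Here is a rough Monte Carlo sketch of the effect in Figure 9. I approximate the continuous uniform $t_0$ with an integer-sample shift, which is coarse but serviceable, and all parameter values are my own choices:

```python
import numpy as np

rng = np.random.default_rng(3)
T0, n_sym, n_trials = 10, 100, 5000
N = T0 * n_sym
p = np.sin(np.pi * np.arange(T0) / T0)           # half-cosine pulse

def mean_estimate(delta):
    """Monte Carlo estimate of E[y(t)] with t0 uniform on {0, ..., delta-1}."""
    acc = np.zeros(N)
    for _ in range(n_trials):
        a = rng.choice([0.0, 1.0], size=n_sym)   # OOK symbols
        up = np.zeros(N)
        up[::T0] = a
        x = np.convolve(up, p)[:N]
        acc += np.roll(x, rng.integers(0, delta))  # apply a random clock shift
    return acc / n_trials

for delta in (1, 5, 10):       # delta = T0 = 10 should flatten the mean
    m = mean_estimate(delta)[20 * T0 : 80 * T0]    # interior segment
    print(delta, np.ptp(m))    # peak-to-peak time variation of the mean
```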
Stationarizing the Second-Order Moment
Since most of the communication signals we observe, capture, and process have zero mean values, what we really want to understand is how phase randomization is used to stationarize the second-order moment–the autocorrelation function. That’s where the cyclostationarity property that we make such good use of here at the CSP Blog meets its end. So let’s do the math for stationarizing the second-order moment here. The form is similar to what we went through for the first-order moment (the mean value) above, so I’ll be a little less detailed.
Our signal is (9), which is repeated here for convenience

$$y(t) = \sum_{k=-\infty}^{\infty} a_k\, p(t + t_0 - kT_0). \qquad (9)$$

The non-conjugate (conventional) autocorrelation function, defined within the random-process framework, is given by

$$R_y(t, \tau) = E\left[y(t + \tau/2)\, y^*(t - \tau/2)\right].$$

Assuming the pulse function itself is non-random, and that the symbols are statistically independent of the phase-randomizing random variable $t_0$ (assumptions we also made in the case of the mean-value analysis above), we have

$$R_y(t, \tau) = \sum_k \sum_{k'} E\left[a_k a_{k'}^*\right] E\left[p(t + \tau/2 + t_0 - kT_0)\, p^*(t - \tau/2 + t_0 - k'T_0)\right].$$

Assuming, as before, independent and identically distributed symbols and $E[a_k] = 0$ (which rules out OOK here, which is OK, OK?), the symbol expectation simplifies to $E[a_k a_{k'}^*] = \sigma_a^2\, \delta_{k-k'}$, which leads to

$$R_y(t, \tau) = \sigma_a^2\, E\left[\sum_k p(t + \tau/2 + t_0 - kT_0)\, p^*(t - \tau/2 + t_0 - kT_0)\right]. \qquad (41)$$
The sum over $k$ in the brackets in (41) is a periodic function of time with period $T_0$. Unlike the case of the mean value, the periodicity here depends on the autocorrelation lag variable $\tau$, so that, for example, the rectangular-pulse signal can lead to a nontrivial periodicity. Let's show some plots of the sum over $k$ in (41) for several values of $\tau$ and for non-random $t_0 = 0$ to fix the idea that this autocorrelation is really periodic (and so therefore we must ruin that periodicity using phase randomization to get back to stationarity). That is, for non-random $t_0$ in (41), the expectation of the sum is just the sum, and so the autocorrelation is just the scaled sum. Examples are shown in Figures 10-12.
[Figures 10-12: The bracketed sum in (41) versus time for several values of the lag $\tau$.]
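The rectangular-pulse case is easy to reproduce: the bracketed sum in (41) is constant in $t$ for $\tau = 0$ but nontrivially periodic for other lags. A sketch with integer lags, $t_0 = 0$, and my own parameters:

```python
import numpy as np

T0 = 10
t = np.arange(3 * T0, dtype=float)   # three symbol intervals

def rect(u):
    return ((u >= 0) & (u < T0)).astype(float)

def bracketed_sum(t, tau, n_shifts=10):
    """sum_k p(t + tau/2 - k*T0) * p(t - tau/2 - k*T0) for rectangular p."""
    total = np.zeros_like(t)
    for k in range(-n_shifts, n_shifts + 1):
        total += rect(t + tau / 2 - k * T0) * rect(t - tau / 2 - k * T0)
    return total

print(bracketed_sum(t, tau=0))   # all ones: constant in t
print(bracketed_sum(t, tau=5))   # 0/1 pattern repeating with period T0
```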
The periodicity of these autocorrelation functions is evident by inspection of the graphs in Figures 10-12, and a simple Fourier analysis shows the exact period to be $T_0$, as shown in Figures 13-15. These figures are really showing the cycle frequencies for the non-conjugate cyclic autocorrelations, which are, recall, the Fourier coefficients of the Fourier-series decomposition of the time-varying moment.
[Figures 13-15: Fourier analyses of the periodic autocorrelations in Figures 10-12, showing cycle frequencies at harmonics of $1/T_0$.]
The analysis approach to evaluating the expectation in (41) is the same as we used for the mean-value analysis: represent the sum, which is periodic with period $T_0$, as a Fourier series, then apply the expectation. The Fourier coefficient for the Fourier-series representation of the sum in (41) is

$$c_n(\tau) = \frac{1}{T_0}\int_{T_0} \left[\sum_k p(t + \tau/2 + t_0 - kT_0)\, p^*(t - \tau/2 + t_0 - kT_0)\right] e^{-i2\pi n t/T_0}\, dt.$$

After following steps similar to those for the mean value above, the Fourier coefficient is found to be

$$c_n(\tau) = \left[\frac{1}{T_0}\int_{-\infty}^{\infty} p(t + \tau/2)\, p^*(t - \tau/2)\, e^{-i2\pi n t/T_0}\, dt\right] e^{i2\pi n t_0/T_0} = g_n(\tau)\, e^{i2\pi n t_0/T_0}.$$

So the Fourier series is given by

$$\sum_k p(t + \tau/2 + t_0 - kT_0)\, p^*(t - \tau/2 + t_0 - kT_0) = \sum_{n=-\infty}^{\infty} g_n(\tau)\, e^{i2\pi n (t + t_0)/T_0}.$$

The autocorrelation is

$$R_y(t, \tau) = \sigma_a^2 \sum_{n=-\infty}^{\infty} g_n(\tau)\, e^{i2\pi n t/T_0}\, E\left[e^{i2\pi n t_0/T_0}\right].$$
So we're back to the question: What is $E[e^{i2\pi n t_0/T_0}]$? The answer hasn't changed, fortunately for us. Using a uniform distribution for $t_0$ with width $\Delta \leq T_0$, as before, we obtain

$$E\left[e^{i2\pi n t_0/T_0}\right] = \frac{T_0\left(e^{i2\pi n\Delta/T_0} - 1\right)}{i2\pi n\Delta}, \quad n \neq 0,$$

with the value one for $n = 0$. Finally, then, the autocorrelation of the phase-randomized signal is given by

$$R_y(t, \tau) = \sigma_a^2 \sum_{n=-\infty}^{\infty} g_n(\tau)\, E\left[e^{i2\pi n t_0/T_0}\right] e^{i2\pi n t/T_0}. \qquad (53)$$

In the case of $\Delta = T_0$, only the $n = 0$ term in the sum in (53) is non-zero (its expectation factor is equal to one), so the autocorrelation is not a function of time:

$$R_y(t, \tau) = \sigma_a^2\, g_0(\tau) = \frac{\sigma_a^2}{T_0}\int_{-\infty}^{\infty} p(t + \tau/2)\, p^*(t - \tau/2)\, dt. \qquad (55)$$

Otherwise, the autocorrelation is dependent on time $t$.
Let's try to check this result. We already know the PSD for the signal is simply

$$S_y(f) = \frac{\sigma_a^2}{T_0}\left|P(f)\right|^2,$$

where $P(f) = \int_{-\infty}^{\infty} p(t)\, e^{-i2\pi f t}\, dt$. Does our result (55) provide the same answer? Recalling that the PSD is the Fourier transform of the autocorrelation,

$$S_y(f) = \int R_y(\tau)\, e^{-i2\pi f \tau}\, d\tau = \frac{\sigma_a^2}{T_0}\int\!\!\int p(t + \tau/2)\, p^*(t - \tau/2)\, e^{-i2\pi f \tau}\, dt\, d\tau = \frac{\sigma_a^2}{T_0}\left|P(f)\right|^2,$$

which checks.
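This check is also easy to do numerically: the discrete-time analog of (55) is the scaled deterministic autocorrelation of the pulse, and the magnitude of its Fourier transform should match $|P(f)|^2/T_0$. A sketch with a half-cosine pulse (my choice):

```python
import numpy as np

T0 = 10
p = np.sin(np.pi * np.arange(T0) / T0)            # half-cosine pulse
nfft = 4096

# (1/T0) * integral of p(v + tau) p*(v) dv  ->  scaled pulse autocorrelation
R = np.correlate(p, p, mode="full") / T0

S_from_R = np.abs(np.fft.fft(R, nfft))             # |FT of autocorrelation|
S_direct = np.abs(np.fft.fft(p, nfft)) ** 2 / T0   # |P(f)|^2 / T0

print(np.allclose(S_from_R, S_direct))             # True
```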
All we’ve done so far is show that the mean and non-conjugate autocorrelation for a baseband complex PAM signal are time-invariant if you use a phase-randomizing random variable with a uniform distribution. Let’s get a little more general and connect this basic result to cyclostationary signal processing and to more realistic signal models.
Connection to the Non-Conjugate and Conjugate Autocorrelation Functions of CSP
Suppose our signal model includes effects from imperfect downconversion (synchronization):

$$y(t) = \left[\sum_k a_k\, p(t + t_0 - kT_0)\right] e^{i2\pi f_0 t}\, e^{i\theta}. \qquad (65)$$

Here $t_0$ is the symbol-clock phase, $f_0$ is the carrier frequency offset, and $\theta$ is the carrier phase. When the only random variables in (65) are the symbols $\{a_k\}$, then we know the non-conjugate and conjugate cyclic autocorrelations, the spectral correlation functions, and the higher-order cyclic cumulants. We know that the cycle frequencies exhibited by the signal (65) follow the form

$$\alpha = (n - 2m)f_0 + k/T_0$$

for moment/cumulant order $n$, number of conjugated factors $m$, and integer harmonic number $k$.
Now suppose that $t_0$ is a random variable, as before, but that $f_0$ and $\theta$ are non-random (unknown constants). Have we stationarized the non-conjugate and conjugate autocorrelation functions? The non-conjugate delay product is

$$y(t + \tau/2)\, y^*(t - \tau/2) = \left[\sum_k \sum_{k'} a_k a_{k'}^*\, p(t + \tau/2 + t_0 - kT_0)\, p^*(t - \tau/2 + t_0 - k'T_0)\right] e^{i2\pi f_0 \tau}. \qquad (67)$$

So this is the same kind of expression we already stationarized, except for the factor $e^{i2\pi f_0 \tau}$, which is non-random by assumption, so if $t_0$ is a uniform random variable on $[0, T_0]$, the non-conjugate autocorrelation (expectation of (67)) will be stationarized.
The conjugate autocorrelation is the expectation of a different delay product,

$$y(t + \tau/2)\, y(t - \tau/2) = \left[\sum_k \sum_{k'} a_k a_{k'}\, p(t + \tau/2 + t_0 - kT_0)\, p(t - \tau/2 + t_0 - k'T_0)\right] e^{i2\pi(2f_0)t}\, e^{i2\theta}.$$

Let's let $\theta$ be random. Assuming the symbol-clock-phase variable $t_0$ and the carrier-phase variable $\theta$ are independent, we confront three expectations when we apply the expectation operator to the delay product,

$$E\left[y(t + \tau/2)\, y(t - \tau/2)\right] = E\left[e^{i2\theta}\right] e^{i2\pi(2f_0)t}\, E\left[\sum_k \sum_{k'} a_k a_{k'}\, p(t + \tau/2 + t_0 - kT_0)\, p(t - \tau/2 + t_0 - k'T_0)\right],$$

where the remaining expectation involves the symbol expectation $E[a_k a_{k'}]$ (which is often, but not always, zero) and the expectation over $t_0$. The expectation over $t_0$ will render the third factor time-invariant, as we've painstakingly shown already. But we now have to confront the time-variant term $e^{i2\pi(2f_0)t}$ and the factor corresponding to the expectation over $\theta$.
The symbol-clock phase variable $t_0$ alone is not quite enough to render the entire conjugate autocorrelation time-invariant–we need to consider the carrier-phase random variable.
What is a condition on the distribution of $\theta$ so that $E[e^{i2\theta}] = 0$? Assume $\theta$ is uniformly distributed on the interval $[0, \pi]$ (sound familiar?). Then

$$E\left[e^{i2\theta}\right] = \frac{1}{\pi}\int_0^{\pi} e^{i2\phi}\, d\phi = \frac{e^{i2\pi} - 1}{i2\pi} = 0.$$

Now $E[e^{i2\theta}] = 0$ holds more generally whenever $\theta$ is uniform on an interval of width $\pi m$, where $m$ is an integer. So we can pick $m = 2$ to yield the uniform distribution on $[0, 2\pi]$. And with that choice, the conjugate autocorrelation is stationarized.
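A quick numerical check of this condition on the carrier-phase distribution (the widths tested are my own choices):

```python
import numpy as np

rng = np.random.default_rng(4)
for width in (np.pi / 2, np.pi, 2 * np.pi):   # only integer multiples of pi work
    theta = rng.uniform(0, width, size=500_000)
    print(width, np.abs(np.mean(np.exp(2j * theta))))
# widths pi and 2*pi give ~0; width pi/2 gives magnitude ~2/pi, clearly nonzero
```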
Stationarizing Higher-Order Moments and Cumulants
We see that if we use two phase-randomizing random variables, the symbol-clock phase $t_0$ and the carrier phase $\theta$, we can render the time-varying non-conjugate and conjugate autocorrelation functions time-invariant. This also means that the entire non-conjugate spectral correlation function reduces to the PSD. Staying within that random-process framework, then, we can use any methods or results applicable to stationary random processes on our signal.
But what about the higher-order statistics? What about the cyclic moments and cyclic cumulants? (What’s your guess?)
The time-varying $n$th-order moment function is the expectation

$$R_y(t, \boldsymbol{\tau}; n, m) = E\left[\prod_{j=1}^{n} y^{(*)_j}(t + \tau_j)\right],$$

where $m$ of the $n$ factors are conjugated. If $n = 2m$, the time-dependency related to the carrier offset $f_0$ disappears, as does any dependency on the carrier phase $\theta$. We're left with the expectation over the symbols and the expectation over $t_0$. We can find all the conditions on the symbol indices such that the expected value over the symbols is non-zero. For each such set of conditions, we can evaluate the expectation over $t_0$ as before, and this will render each term time-invariant.
If $n \neq 2m$, we'll always have the factor $E\left[e^{i(n-2m)\theta}\right]$, and we can choose the carrier-phase random variable to force this expectation equal to zero (uniform on $[0, 2\pi]$ works for every order).
In short, the same phase-randomizing random variables we used in the second-order (autocorrelation) case will also force all the higher-order temporal moment functions to be either zero or time-invariant.
Another Way to Stationarize a Cyclostationary Communication Signal: Assume Perfect Knowledge of Synchronization Parameters and Sample It
Sometimes you'll see in published papers a signal model that corresponds to application of various signal-processing operations on an actual received signal–the received signal is preprocessed before modulation-recognition begins, for example. If we assume we know, or can estimate, the particular value of the symbol-clock phase parameter $t_0$, the carrier frequency offset $f_0$, and the carrier phase $\theta$, as well as the pulse-shaping function $p(t)$, then we can process the signal to remove the effect of the carrier and carrier phase (multiply by $e^{-i2\pi f_0 t}\, e^{-i\theta}$), apply a perfect matched filter, and sample at the optimal sampling instants. This leads to a signal model such as

$$y_k = a_k + w_k,$$

where $w_k$ is a noise sequence. In this way, we obtain a noisy sequence of symbols, which is typically stationary (it might not be, though, if the symbols contain periodically repeated sequences or some other deviation from independent and identically distributed symbols).
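Here is a sketch of that idealized preprocessing chain, assuming the synchronization parameters are known perfectly. I use a baseband model with zero CFO and carrier phase so that only the matched filter and sampler remain; all parameter values are mine:

```python
import numpy as np

rng = np.random.default_rng(5)
T0, n_sym = 10, 500
p = np.sin(np.pi * np.arange(T0) / T0)
p = p / np.sqrt(np.sum(p ** 2))                 # unit-energy pulse

a = rng.choice([-1.0, 1.0], size=n_sym)         # BPSK symbols
up = np.zeros(n_sym * T0)
up[::T0] = a
x = np.convolve(up, p)                          # ideal received signal
x = x + 0.1 * rng.standard_normal(x.shape)      # additive noise

mf = np.convolve(x, p[::-1])                    # perfect matched filter
y = mf[T0 - 1 :: T0][:n_sym]                    # optimal sampling: y_k ~ a_k + w_k

print(np.mean(np.sign(y) == a))                 # symbol decisions: ~1.0 here
```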
I find this kind of move strange because it assumes high SNR and very long data records (else the estimates of the synchronization parameters cannot be perfect). It also neglects channel effects.
Consequences of Stationarization
When we adopt a phase-randomized random-process model, what we get is simplicity of mathematical analysis–probabilistic parameters like the autocorrelation are time-invariant and so are that much easier to calculate. What we give up is realism. The key point is that we have stationarized the process, not the sample paths of the process. In other words, we've ensured we don't have ergodicity, which is a property of a random process that says that an average over the ensemble is equal to an average over infinite time for almost every sample path. If we construct estimators using the stationarized random process, and we have appropriate ergodicity, we can apply the estimator to a sample path (the actual simulated or captured signal) and expect good results. If we don't have ergodicity, then we may not get the expected or good results, and we might then embark on a program to, for instance, robustify the algorithm against nuisance parameters like carrier frequency offset.
Each of the sample paths of (65) is like a single received example of the process–it has a fixed-but-unknown symbol-clock phase $t_0$, a fixed-but-unknown carrier frequency offset $f_0$, and a fixed-but-unknown carrier phase $\theta$. It is a cyclostationary signal with properties (for example, cycle frequencies) that depend on the nature of the symbol variables $a_k$ and the pulse function $p(t)$.
When we do our work in the random-process domain, using a model that contains phase-randomizing variables, we see that there aren’t any cyclic cumulants. The only cumulants are ‘stationary-signal cumulants,’ which involve combining lower-order moments in the usual way, but there aren’t any cycle frequencies except zero. Then we switch to Monte Carlo simulations or processing a captured signal, and each time we process, there are cycle frequencies, there are cyclic cumulants, there is time-variation to moments and cumulants. The question is: What happens to those stationary-signal parameters when we apply them to a cyclostationary signal? I attempt to shed some light on this question in the remainder of this post.
Interlude: The Case of OOK Versus BPSK
Thinking about what the stationary-signal cumulants mean when applied to a cyclostationary input is rather difficult when the cumulant orders involved are larger than two (at least it is for me). So to ease into the discussion about higher-order stationary-signal cumulants versus higher-order cyclic cumulants, let’s look at an example that only involves first- and second-order cumulants: BPSK vs OOK.
OOK and BPSK are PAM signals like (9) and (65), and if you keep the pulse function the same between them, the only difference is in the values of the symbols $a_k$. The finite set of distinct complex numbers (amplitude and phase of the value multiplying a pulse) is called the constellation. BPSK has a constellation of $\{-1, +1\}$ whereas OOK has a constellation of $\{0, 2\}$ (the scaling is explained below). This means OOK is BPSK plus a constant. The implication is that OOK is BPSK with a finite-strength additive sine-wave component with frequency equal to the carrier frequency. Mathematically,

$$x_{ook}(t) = x_{bpsk}(t) + c(t),$$

where $c(t) = \left[\sum_k p(t - kT_0)\right] e^{i2\pi f_c t}$ and the bracketed pulse train is constant or nearly constant for the pulses considered here. This means that for pulse functions like rectangular, square-root raised-cosine, and raised-cosine, the difference between OOK and BPSK is that OOK has a PSD with an impulse at the carrier $f_c$ and BPSK does not.
We are going to look at cyclic cumulants that use the correct lower-order cycle frequencies and those that don't to illustrate the kind of thing that can happen when one applies stationary-signal cumulants to cyclostationary signals. When we ignore the first-order cycle frequency that OOK possesses (the carrier frequency $f_c$), and compute the cumulant for a cycle frequency of zero, we'll see that the stationary-signal cumulants diverge from the cyclic cumulants–only one of them is a true cumulant. The more common and important case of non-OOK signals and larger values of the cumulant order $n$ is more subtle, but we'll attempt a similar illustration at the close of the post.
The average power of the OOK signal is made twice that of the BPSK signal so that if the OOK sine-wave component were removed from the signal prior to any kind of estimation, the signal would appear statistically identical to a BPSK signal. This power boost makes the plots a bit easier to interpret–when the OOK and BPSK parameters match, the tone has no effect, or else it was properly taken care of by inclusion of the first-order cycle frequency in the cyclic-cumulant formula.
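In simulation terms, the OOK-equals-BPSK-plus-a-constant relationship and the factor-of-two power boost look like this (a sketch; I use the OOK alphabet $\{0, 2\}$ so that the OOK symbols are exactly the BPSK symbols plus one, and all other parameters are my own choices):

```python
import numpy as np

rng = np.random.default_rng(6)
T0, n_sym = 10, 2000
p = np.sin(np.pi * np.arange(T0) / T0)

def pam(symbols):
    up = np.zeros(n_sym * T0)
    up[::T0] = symbols
    return np.convolve(up, p)[: n_sym * T0]

a_bpsk = rng.choice([-1.0, 1.0], size=n_sym)
x_bpsk = pam(a_bpsk)
x_ook = pam(a_bpsk + 1.0)     # OOK symbols {0, 2}: BPSK plus a constant

print(np.mean(x_ook ** 2) / np.mean(x_bpsk ** 2))        # ~2: the power boost
print(np.allclose(x_ook - x_bpsk, pam(np.ones(n_sym))))  # the additive 'tone'
```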
First, let’s look at the PSDs of the OOK and BPSK signals to fix the ideas here. Figure 16 shows PSD estimates for the two generated signals. The frequency-smoothing method (FSM) is used to estimate these PSDs, so the OOK tone appears as a rectangle with width equal to the FSM smoothing-window width I chose.
[Figure 16: PSD estimates for the generated OOK and BPSK signals.]
Figure 17 shows the corresponding autocorrelation functions (the non-conjugate cyclic autocorrelation for cycle frequency $\alpha = 0$).
[Figure 17: Autocorrelation estimates for the OOK and BPSK signals.]
Apart from the additive tone at the carrier frequency $f_c$, the PSDs are identical. That additive tone component of the PSD corresponds to the sine-wave component of the autocorrelation function for OOK in Figure 17 (the OOK autocorrelation never decays to zero). The two signals share the same symbol rate and the same square-root raised-cosine pulse function with the same roll-off factor.
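For reference, a bare-bones version of a frequency-smoothing PSD estimator looks something like the following sketch. This is my minimal rendition (a rectangular smoothing window applied to the periodogram), not the exact estimator used to produce the figures:

```python
import numpy as np

def fsm_psd(x, width):
    """Frequency-smoothing method sketch: smooth the periodogram of x
    with a rectangular window spanning `width` frequency bins."""
    N = len(x)
    pgram = np.abs(np.fft.fft(x)) ** 2 / N
    win = np.ones(width) / width
    # circular convolution in frequency via the FFT
    return np.real(np.fft.ifft(np.fft.fft(pgram) * np.fft.fft(win, N)))

# Example: a pure tone in noise smears into a rectangle `width` bins wide
rng = np.random.default_rng(7)
n = 8192
x = np.exp(2j * np.pi * 0.1 * np.arange(n)) + rng.standard_normal(n)
S = fsm_psd(x, width=65)
```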
The non-conjugate spectral correlation function for the symbol-rate cycle frequency ($\alpha = 1/T_0$) and the conjugate spectral correlation function for the doubled-carrier cycle frequency ($\alpha = 2f_c$) are shown in Figures 18 and 19, respectively. Note here that we might expect that these functions will differ between OOK and BPSK, because the spectral correlation functions don't include any notion of lower-order combinations.
[Figures 18-19: Non-conjugate (symbol-rate) and conjugate (doubled-carrier) spectral correlation estimates for OOK and BPSK.]
The corresponding cyclic autocorrelation functions are shown in Figures 20 and 21.
[Figures 20-21: The corresponding symbol-rate and doubled-carrier cyclic autocorrelation estimates.]
Note that the symbol-rate functions (Figures 18 and 20) for OOK and BPSK match–there is no effect of the first-order cycle frequency for this cycle frequency. That is because there is no combination of first-order cycle frequencies that adds up to the symbol rate, so the first-order statistics of the signal do not impact those estimates. On the other hand, the doubled-carrier functions are strongly impacted by the presence of the first-order sine wave. That is because each factor in the delay product $y(t + \tau/2)\, y(t - \tau/2)$ contains a sine-wave component with frequency $f_c$, and these multiply to contribute to the overall statistic for the cycle frequency $2f_c$. (A real-world example of this phenomenon is the case of ATSC-DTV, which has a pilot tone and two strong conjugate features–the doubled tone frequency ends up contributing to one of those conjugate features.)
Now let's look at the true cyclic cumulants. These cumulants take into account the first-order cycle frequency of OOK (the carrier frequency $f_c$). Figures 22-24 show the true cyclic cumulants for both signals–the BPSK and OOK estimates should match every time.
[Figures 22-24: True cyclic cumulants for OOK and BPSK, which match as predicted.]
Next let’s look at the cyclic cumulants we obtain when we ignore the first-order cycle frequency. The three sets of cumulants are shown in Figures 25-27.
[Figures 25-27: Cyclic cumulants computed while ignoring the first-order cycle frequency of OOK.]
As expected, the OOK and BPSK cumulants diverge for the orders and cycle frequencies where the ignored first-order sine wave enters the moment-to-cumulant formula.
This means that even if you focus exclusively on cycle frequencies equal to zero (‘stationary-signal moments and cumulants’), the ignored-but-still-there lower-order cyclostationarity can affect your result. This is the problem with modeling things as stationary random processes (likely with the aid of phase-randomizing random variables) and then applying stationary-signal moments and cumulants to sample paths of those processes, which are not stationary.
Let’s look at this phenomenon in a little more detail before wrapping this post up.
Applying Stationary-Signal Probabilistic-Parameter Definitions to Cyclostationary Signals
So here is where it all pays off. What happens when you assume you have a stationary signal, or you’ve actively tried to create a stationary process, and you then apply stationary-signal moment and cumulant estimators to the signal, but it is actually cyclostationary?
Let's consider a BPSK signal with square-root raised-cosine pulses with roll-off factor of one, a symbol rate of $1/T_0$, and various carrier frequency offsets ranging from zero to a moderate fraction of the symbol rate. The signal has unit power, and noise with fixed power is added for realism.
We compute and plot the true cyclic cumulants and the stationary-signal cumulants side-by-side. The true cyclic cumulants are simply cyclic cumulants that employ the correct lower-order cycle frequencies in the required combinations of lower-order cyclic moments. The stationary-signal cumulants are obtained by following the moment-to-cumulant formula, but the only cycle frequency used for any combination of order $n$ and number of conjugations $m$ is zero. The true cyclic cumulants and the stationary-signal cumulants will match for a stationary ergodic random process; otherwise they will diverge.
First, consider $(n, m, k) = (2, 0, 0)$. That is, order $n = 2$, number of conjugated factors $m = 0$, and harmonic number $k = 0$. The harmonic number $k$ will always be zero in these measurements because we are comparing stationary-signal cumulants (always a cycle frequency of zero) with true cyclic cumulants (which can correspond to other cycle frequencies too).
[Figure 28: True cyclic cumulants and stationary-signal cumulants for $(n, m) = (2, 0)$ versus CFO.]
The true cyclic cumulants in Figure 28 attain the correct theoretical peak, and we see that they are correct for all values of the CFO–they are using the doubled CFO as the cycle frequency. So when the actual CFO is zero, the true and stationary cumulants match, as expected. When the actual CFO is not zero, the $(2, 0)$ stationary-signal cumulant is not equal to the cumulant when the CFO is zero. There is no $(2, 0)$ cumulant with cycle frequency zero for those cases. The $(2, 0)$ stationary-signal cumulant is useless unless either (1) you know the CFO, or (2) the CFO is zero.
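The $(2, 0)$ story is easy to reproduce numerically: estimate the lag-zero conjugate moment at cycle frequency zero and at the doubled CFO, with and without an actual offset. A sketch with my own parameters (for zero-mean BPSK, the second-order moment and cumulant coincide):

```python
import numpy as np

rng = np.random.default_rng(8)
T0, n_sym = 10, 4000
N = T0 * n_sym
p = np.sin(np.pi * np.arange(T0) / T0)
a = rng.choice([-1.0, 1.0], size=n_sym)
up = np.zeros(N)
up[::T0] = a
bb = np.convolve(up, p)[:N]                 # baseband BPSK

t = np.arange(N)
for f0 in (0.0, 0.02):                      # CFO in cycles/sample
    y = bb * np.exp(2j * np.pi * f0 * t)
    m_stationary = np.mean(y * y)                                   # alpha = 0
    m_cyclic = np.mean(y * y * np.exp(-2j * np.pi * (2 * f0) * t))  # alpha = 2*f0
    print(f0, np.abs(m_stationary), np.abs(m_cyclic))
# With f0 = 0 the two agree; with f0 != 0 the alpha = 0 estimate collapses
```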
For the $(n, m) = (2, 1)$ cumulants, we obtain the estimates in Figure 29.
[Figure 29: True cyclic cumulants and stationary-signal cumulants for $(n, m) = (2, 1)$ versus CFO.]
In Figure 29, the stationary-signal and true cyclic cumulants match independent of the actual value of the carrier frequency offset. The value of the offset is irrelevant to these cumulants, so lack of knowledge of the offset doesn’t affect the computations.
Next, let's look at $(n, m) = (4, 0)$ in Figure 30.
[Figure 30: True cyclic cumulants and stationary-signal cumulants for $(n, m) = (4, 0)$ versus CFO.]
For $(4, 0)$, none of the stationary-signal cumulants match the true cyclic cumulants, and the true cyclic-cumulant magnitudes are correct in that they match the theoretical peak for this pulse shape and modulation type. Here is where things get interesting. Even when the carrier offset is zero, the stationary-signal cumulant does not match the true cyclic cumulant, because the appropriate lower-order cyclic moments are not combined in the moment-to-cumulant formula. This didn't happen in the second-order case because there isn't any first-order cyclostationarity to account for (that's why I did the OOK-BPSK Interlude above, where there was lower-order (first-order) cyclostationarity).
Even worse are the cases in Figure 30 where the offset is non-zero. In those cases the measured stationary-signal cumulants are near zero, because there is no zero-valued cycle frequency for $(4, 0)$ for those signals, and the stationary-signal cumulants pretend that there is.
The $(n, m) = (4, 2)$ cumulants suffer as well, because even though the $(4, 2)$ cycle frequencies for BPSK are just harmonics of the symbol rate–no offset enters–the lower-order cyclic moments that are called for in the moment-to-cumulant formula include cycle frequencies related to the doubled carrier offset. So, we see in Figure 31 that the $(4, 2)$ stationary-signal cumulants do not match the true cyclic cumulants.
[Figure 31: True cyclic cumulants and stationary-signal cumulants for $(n, m) = (4, 2)$ versus CFO.]
The story is similar for order $n = 6$. I'll just show the results for one choice of $(n, m)$ in Figure 32.
[Figure 32: True cyclic cumulants and stationary-signal cumulants for a sixth-order case versus CFO.]
Discussion
In future posts I’ll be looking at some published papers on modulation recognition (both statistics-based and machine-learning-based) that use stationary-signal moments and cumulants. In this post, the idea I’ve tried to get across is that the kinds of processing you might want to apply to extract valuable information from a signal depend on the statistical nature of the data–the captured or simulated signal that you are actually processing. One can get confused and create poor feature extractors if the features are based on a random-process model that lacks ergodic properties because those properties are needed to forge a strong link between the mathematical model and the processed data record.
Another way of saying it is that if you use phase-randomization to render a random process stationary, then you will get unexpected results when you process the sample paths of that stationary process because they will be cyclostationary signals.
Thanks for getting all the way through this long post! As usual, if there are errors or if you want to make comments, leave a message in the Comments section below.