Conjugation Configurations

Using complex-valued signal representations is convenient but also has complications: you have to consider all possible choices for conjugating the different factors in a moment.

When we considered complex-valued signals and second-order statistics, we ended up with two kinds of parameters: non-conjugate and conjugate. So we have the non-conjugate autocorrelation, which is the expected value of the second-order lag product in which exactly one of the factors is conjugated (consistent with the usual definition of variance for complex-valued random variables),

\displaystyle R_x(t, \boldsymbol{\tau}) = E \left[ x(t+\tau_1)x^*(t+\tau_2) \right] \hfill (1)

and the conjugate autocorrelation, which is the expected value of the second-order lag product in which neither factor is conjugated

\displaystyle R_{x^*}(t, \boldsymbol{\tau}) = E \left[ x(t+\tau_1)x(t+\tau_2) \right]. \hfill (2)

The complex-valued Fourier-series amplitudes of these functions of time t are the non-conjugate and conjugate cyclic autocorrelation functions, respectively.

The Fourier transforms of the non-conjugate and conjugate cyclic autocorrelation functions are the non-conjugate and conjugate spectral correlation functions, respectively.

I never explained the fundamental reason why both the non-conjugate and conjugate functions are needed. In this post, I rectify that omission. The reason for the many different choices of conjugated factors in higher-order cyclic moments and cumulants is also provided. These choices of conjugation configurations, or conjugation patterns, also appear in the more conventional theory of higher-order statistics as applied to stationary signals.

The reason we need to consider various numbers of conjugated and non-conjugated terms in our moments and cumulants is that we desire to represent all of our signals and data as complex-valued random processes (complex signals). I’ll repeat this later, but a powerful reason to use complex-valued discrete-time processes is that they can represent real sampled RF signals using a sampling rate that is independent of the carrier frequency of the signal. Only the occupied bandwidth of the RF signal matters when representing an RF signal in terms of a lowpass (frequency content [Fourier transform, PSD] near zero frequency) complex-valued process.

Let’s take this step by step.

Suppose we have a simple real-valued radio-frequency signal, which is just a real-valued message s(t) modulated by (multiplied by) a real-valued sine wave,

\displaystyle x(t) = s(t) \cos(2\pi f_c t + \phi). \hfill (3)

(This is usually referred to as an amplitude-modulated (AM) signal.) The message signal s(t) is real, and so it has a symmetric PSD. Here is a numerical example:

Figure 1. Power spectral density (PSD) for a real-valued message signal. This is the signal that we want to convey to a distant receiver using an appropriately chosen method of imprinting it on a radio-frequency sine wave.

I’ve denoted the width of the PSD for s(t) as B. Throughout this post we assume that the carrier frequency f_c is much greater than B, so that the symmetric PSD of the real signal x(t) contains two well-separated bumps:

Figure 2. The power spectral density for the amplitude-modulated signal in (3), which multiplies our real-valued message s(t) by a real-valued sine wave.

This means that each bump could be completely separated from the other using linear time-invariant filters. Now let’s look at the real-valued signal in more detail, with an eye toward expressing it in terms of simpler complex-valued components.

Recall that Euler's Formula is given by

\displaystyle e^{i\theta} = \cos(\theta) + i\sin(\theta), \hfill (4)

which implies

\displaystyle e^{i\theta} + e^{-i\theta} = \cos(\theta) + \cos(-\theta) + i\sin(\theta) + i\sin(-\theta). \hfill (5)

Now, \cos(\theta) is an even function and \sin(\theta) is an odd function, so that

\displaystyle e^{i\theta} + e^{-i\theta} = 2\cos(\theta), \hfill (6)

or

\displaystyle \cos(\theta) = \displaystyle \frac{1}{2} \left( e^{i\theta} + e^{-i\theta} \right). \hfill (7)

Using this in our real-valued radio-frequency signal gives us

 \displaystyle x(t) =  \frac{s(t)}{2} e^{i2\pi f_c t + i\phi} + \frac{s(t)}{2} e^{-i2\pi f_c t - i\phi}. \hfill (8)

\displaystyle = x_+(t) + x_{-}(t). \hfill (9)

Using elementary Fourier transform analysis, the signal x_+(t) corresponds to the positive-frequency bump in the PSD of x(t) and x_{-}(t) corresponds to the negative-frequency bump. So suppose we have just one or the other of x_+(t) and x_{-}(t). Can we recover x(t)? Sure, recalling that s(t) is real here, we can just take the real part of x_+(t),

\displaystyle x(t) = 2 \Re \left[x_+(t)\right], \hfill (10)

where \Re[\cdot] is an operator that returns the real part of its argument. This must mean that all the statistical information in x(t) is available in x_+(t). An advantage of working with signals like x_+(t) is that they can be frequency shifted to zero frequency, then sampled at a rate equal to B (using the basic sampling theorem). The basic sampling rate for x(t) is 2(f_c + B/2), which can be much, much greater than B. (See also the Signal Processing ToolKit post on analytic signals and the complex envelope.)
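As a quick numerical check of (8)-(10), here is a minimal MATLAB sketch (the carrier frequency, phase, and lowpass message below are my own illustrative choices, not parameters from this post) that builds x_+(t) directly and confirms that twice its real part reproduces the real AM signal:

% Minimal sketch: verify x(t) = 2*Re[x_plus(t)] for the AM signal in (3).
N   = 4096;                                  % number of samples
fc  = 0.25;                                  % normalized carrier frequency (cycles/sample)
phi = pi/5;                                  % carrier phase
s   = filter(ones(1,8)/8, 1, randn(1, N));   % crude lowpass "message" signal
t   = 0:N-1;

x      = s .* cos(2*pi*fc*t + phi);            % real AM signal, Eq. (3)
x_plus = (s/2) .* exp(1j*(2*pi*fc*t + phi));   % positive-frequency component, Eq. (8)

max_err = max(abs(x - 2*real(x_plus)));        % should be at the level of roundoff
fprintf('max |x - 2*Re[x_plus]| = %g\n', max_err);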

Now, let’s look at the statistics of x(t). In particular, let’s look at the expected value of the second-order lag product, which is the autocorrelation function,

\displaystyle E[x(t+\tau_1)x^*(t+\tau_2)] = E[x(t+\tau_1)x(t+\tau_2)] = R_x(t, \boldsymbol{\tau}). \hfill (11)

We can express this expected value in terms of x_+(t) and x_{-}(t),

\displaystyle R_x(t, \boldsymbol{\tau}) = E\left[ (x_+(t+\tau_1) + x_{-}(t+\tau_1))(x_+(t+\tau_2) + x_{-}(t+\tau_2)) \right]. \hfill (12)

\displaystyle R_x(t, \boldsymbol{\tau}) = \displaystyle \frac{1}{4} E \left[ s(t+\tau_1)s(t+\tau_2)e^{i2\pi f_c (2t + \tau_1 + \tau_2) + i2\phi} + s(t+\tau_1)s(t+\tau_2) e^{i2\pi f_c (\tau_1 - \tau_2)} \right.

\displaystyle \left. + s(t+\tau_1)s(t+\tau_2)e^{i2\pi f_c(\tau_2 - \tau_1)} + s(t+\tau_1)s(t+\tau_2) e^{-i2\pi f_c (2t + \tau_1 + \tau_2) -i2\phi} \right]. \hfill (13)

Or, in terms of the autocorrelation for the message signal s(t),

\displaystyle R_x(t, \boldsymbol{\tau}) = \frac{1}{4} R_s(t, \boldsymbol{\tau}) \left[ e^{i2\pi f_c(2t + \tau_1 + \tau_2) + i2\phi} + e^{i2\pi f_c(\tau_1 - \tau_2)} + e^{i2\pi f_c (\tau_2 - \tau_1)} + e^{-i2\pi f_c (2t + \tau_1 + \tau_2) - i2\phi} \right]. \hfill (14)

Now, if s(t) is second-order cyclostationary with cycle frequencies \alpha_s, then x(t) will have the following cycle frequencies:

Middle two terms: \alpha_s

First term: \alpha_s + 2f_c

Last term: \alpha_s - 2f_c

For example, if the real signal s(t) is a pulse-amplitude modulated (PAM) signal, it will have cycle frequencies \alpha_s \in \{k/T_0\}, where 1/T_0 is the symbol rate of the PAM signal. So the middle two terms above give us those cycle frequencies, the first term gives us 2f_c + k/T_0 (which includes 2f_c itself when k=0), and the last term gives us -2f_c + k/T_0. BPSK is such a PAM signal.
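To see these cycle frequencies appear in an estimate, here is a hedged MATLAB sketch (my own construction, not code posted elsewhere on this site) that builds a rectangular-pulse BPSK signal on a carrier, forms the lag product x(t)x(t+\tau) for one fixed lag, and Fourier transforms it; the spectral lines sit at the harmonics k/T_0 and at those harmonics shifted by \pm 2f_c:

% Minimal sketch: spectral lines in the lag product of a real rectangular-pulse
% BPSK signal. All parameter values are illustrative.
T0      = 10;                         % samples per bit, so 1/T0 = 0.1
numBits = 4096;
fc      = 0.225;                      % normalized carrier frequency, so 2*fc = 0.45
tau     = 3;                          % fixed lag in samples

bits = 2*round(rand(1, numBits)) - 1;          % random +/-1 symbols
s    = kron(bits, ones(1, T0));                % rectangular-pulse baseband PAM
t    = 0:length(s)-1;
x    = s .* cos(2*pi*fc*t);                    % real BPSK signal on the carrier

lp = x .* circshift(x, [0 -tau]);              % lag product x(t)x(t+tau); circular end effect is negligible
N  = length(lp);
f  = (-N/2:N/2-1)/N;                           % normalized cycle-frequency axis
plot(f, abs(fftshift(fft(lp))/N));             % lines at k/T0 and at +/-2*fc + k/T0
xlabel('cycle frequency'); ylabel('|FT of lag product|');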

Quadrature Modulated Signals

The more general case is defined by adding signals in phase quadrature,

\displaystyle y(t) = s_I(t) \cos(2\pi f_c t + \phi) + s_Q(t) \sin(2\pi f_c t + \phi), \hfill (15)

where s_I(t) is the inphase component and s_Q(t) is the quadrature component of y(t). Again, through the use of Euler’s Formula, we can express this real-valued radio-frequency signal as

\displaystyle y(t) = \frac{(s_I(t) - is_Q(t))}{2} e^{i2\pi f_c t + i\phi} + \frac{(s_I(t) + is_Q(t))}{2} e^{-i2\pi f_c t - i\phi}

\displaystyle = z(t) e^{i2\pi f_c t + i\phi} + z^*(t) e^{-i2\pi f_c t - i\phi}.  \hfill (16)
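A quick numerical check of (16), in the same spirit as the AM check above (again with my own illustrative parameters):

% Minimal sketch: verify y(t) = z(t)e^{i(2 pi fc t + phi)} + z^*(t)e^{-i(2 pi fc t + phi)}
% for the quadrature-modulated signal in (15). Parameter values are illustrative.
N   = 4096;  fc = 0.2;  phi = pi/7;
sI  = filter(ones(1,8)/8, 1, randn(1, N));   % inphase message
sQ  = filter(ones(1,8)/8, 1, randn(1, N));   % quadrature message
t   = 0:N-1;

y = sI .* cos(2*pi*fc*t + phi) + sQ .* sin(2*pi*fc*t + phi);   % Eq. (15)
z = (sI - 1j*sQ) / 2;                                          % complex envelope, Eq. (16)

max_err = max(abs(y - 2*real(z .* exp(1j*(2*pi*fc*t + phi)))));
fprintf('max error = %g\n', max_err);                          % roundoff-level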

As before, we can represent the real signal y(t) by the complex envelope signal z(t) provided the carrier f_c is much larger than the bandwidths of the inphase and quadrature components. If we want to examine the complete picture of the second-order statistics of y(t), we can do that by looking at the lag product

\displaystyle y(t+\tau_1)y(t+\tau_2) = \left( z(t+\tau_1)e^{i2\pi f_c (t+ \tau_1) + i\phi} + z^*(t+\tau_1) e^{-i2\pi f_c (t+\tau_1) - i\phi} \right)

\displaystyle \times \left( z(t+\tau_2)e^{i2\pi f_c (t+\tau_2) + i\phi} + z^*(t+\tau_2) e^{-i2\pi f_c (t+\tau_2) - i\phi} \right). \hfill (17)

By multiplying out the two factors, we find additive terms corresponding to all four possible conjugation configurations:

\displaystyle z(t+\tau_1)z(t+\tau_2) e^{(\cdot)}

\displaystyle z(t+\tau_1)z^*(t+\tau_2) e^{(\cdot)}

\displaystyle z^*(t+\tau_1)z(t+\tau_2) e^{(\cdot)}

\displaystyle z^*(t+\tau_1)z^*(t+\tau_2) e^{(\cdot)}.

So to obtain all the statistical information for the original signal y(t), we need to obtain the statistical information from each of the distinct conjugation configurations corresponding to the complex signal z(t).

Now, some of the configurations are redundant. For instance, the lag product z(t+\tau_1)z(t+\tau_2) is just the complex conjugate of the lag product z^*(t+\tau_1)z^*(t+\tau_2), so you can determine everything about one of these lag products from the other.

This means that in the end, for second-order, we always need to consider the “no conjugations” case z(t+\tau_1)z(t+\tau_2) as well as the “one conjugation” case z(t+\tau_1)z^*(t+\tau_2), which leads us back to the conjugate and non-conjugate cyclic autocorrelation and spectral correlation functions, respectively.
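As a concrete illustration (a hedged sketch with illustrative parameters, not one of the estimators documented on this site), the following MATLAB lines estimate the non-conjugate and conjugate cyclic autocorrelations at cycle frequency 1/T_0 for rectangular-pulse BPSK and QPSK at complex baseband. BPSK should show a significant value for both configurations (roughly 0.32 for these parameters), while QPSK should show only the non-conjugate feature:

% Minimal sketch: non-conjugate vs conjugate cyclic autocorrelation estimates
% at alpha = 1/T0 for rectangular-pulse BPSK and QPSK at complex baseband.
% Parameter values are illustrative.
T0 = 10;  numSyms = 8192;  tau = 5;  alpha = 1/T0;

a_bpsk = 2*round(rand(1, numSyms)) - 1;                       % +/-1 symbols
a_qpsk = exp(1j*(pi/4 + (pi/2)*floor(4*rand(1, numSyms))));   % unit-modulus QPSK symbols
z_bpsk = kron(a_bpsk, ones(1, T0));                           % rectangular pulses
z_qpsk = kron(a_qpsk, ones(1, T0));

t = 0:length(z_bpsk)-1;
e = exp(-1j*2*pi*alpha*t(1:end-tau));

% Non-conjugate estimates: time average of z(t) z^*(t+tau) e^{-i 2 pi alpha t}
Rnc_bpsk = mean(z_bpsk(1:end-tau) .* conj(z_bpsk(1+tau:end)) .* e);
Rnc_qpsk = mean(z_qpsk(1:end-tau) .* conj(z_qpsk(1+tau:end)) .* e);

% Conjugate estimates: time average of z(t) z(t+tau) e^{-i 2 pi alpha t}
Rc_bpsk = mean(z_bpsk(1:end-tau) .* z_bpsk(1+tau:end) .* e);
Rc_qpsk = mean(z_qpsk(1:end-tau) .* z_qpsk(1+tau:end) .* e);

fprintf('BPSK: |R_nc| = %.3f, |R_conj| = %.3f\n', abs(Rnc_bpsk), abs(Rc_bpsk));
fprintf('QPSK: |R_nc| = %.3f, |R_conj| = %.3f\n', abs(Rnc_qpsk), abs(Rc_qpsk));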

Examples

Many signals possess significant features in both the non-conjugate and conjugate spectral correlation planes. By “significant” I mean that the energy in the spectral correlation function for a particular cycle frequency is of the same order of magnitude as the signal power, or perhaps one order of magnitude less, but no smaller. Examples include the canonical rectangular-pulse BPSK signal, OOK, DSSS BPSK, DSSS SQPSK, FSK, and GFSK. Many others possess only non-conjugate features, such as all QAM/PSK with greater than two points in the constellation. A few possess only conjugate features, such as SQPSK with square-root raised-cosine pulses, AM, and GMSK.

nth-Order Moments and Cumulants

When we look at nth-order lag products, as we did in the posts on higher-order moments and cumulants, various conjugation configurations come out of the complex-valued signal representation. In general, they are all needed to determine the full suite of cycle frequencies, cyclic moments, and cyclic cumulants for the original real-valued signal. Symmetries again apply, however, so that we can capture the statistical information by using a minimum set of configurations. When all the delays \tau_j are equal, this becomes particularly easy. For example, for n=4, we may consider only the cases of no conjugations, one conjugation, and two conjugations. The case of three conjugations is covered by the case of one conjugation, and the case of four is covered by the case of none.
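To make the bookkeeping concrete, here is a small MATLAB helper of my own (the names lag_product, delays, and conj_mask are hypothetical, not from this site) that forms an nth-order lag product for a chosen conjugation configuration; a cyclic moment estimate is then just the time average of its output against e^{-i2\pi\alpha t}:

function lp = lag_product(x, delays, conj_mask)
% LAG_PRODUCT  nth-order lag product of x for one conjugation configuration.
%   x         : row vector (complex-valued signal)
%   delays    : vector of n nonnegative integer delays tau_j (in samples)
%   conj_mask : logical vector of length n; true means conjugate that factor
% Returns the pointwise product over the range of t for which every delayed
% copy of x is defined.
n  = numel(delays);
N  = length(x) - max(delays);
lp = ones(1, N);
for j = 1:n
    xj = x(1 + delays(j) : N + delays(j));
    if conj_mask(j)
        xj = conj(xj);
    end
    lp = lp .* xj;
end
end

For example, the fourth-order, two-conjugation configuration with all delays equal to zero is lp = lag_product(z, [0 0 0 0], logical([0 0 1 1])), and the corresponding cyclic moment at cycle frequency alpha can be estimated by mean(lp .* exp(-1j*2*pi*alpha*(0:length(lp)-1))).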

Typical Complex-Signal Model

The typical model for a complex-valued communication signal is

\displaystyle x(t) = As(t) e^{i2\pi f_0 t + i \phi_0}, \hfill (18)

where s(t) is the complex envelope of the transmitted signal, f_0 is called the carrier offset frequency, A is some amplitude factor, and \phi_0 is the residual carrier phase.

The carrier offset frequency f_0 is the result of imperfectly downconverting (frequency shifting) the RF signal to zero frequency. Typically, f_0 is small compared to the bandwidth of s(t). To understand the statistics of x(t), then, we must look at the moments and cumulants of x(t) with all possible distinct conjugation configurations.
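For instance (a hedged sketch with my own illustrative BPSK parameters), a small carrier offset f_0 leaves the non-conjugate cycle frequencies of a rectangular-pulse BPSK signal at k/T_0 but moves the conjugate cycle frequencies to 2f_0 + k/T_0:

% Minimal sketch: a carrier offset f0 shifts the conjugate cycle frequencies
% of BPSK by 2*f0 but leaves the non-conjugate cycle frequencies alone.
% Parameter values are illustrative.
T0 = 10;  numSyms = 8192;  f0 = 0.02;  phi0 = 0.3;  tau = 5;
z = kron(2*round(rand(1, numSyms)) - 1, ones(1, T0));   % baseband rect-pulse BPSK
t = 0:length(z)-1;
x = z .* exp(1j*(2*pi*f0*t + phi0));                    % apply carrier offset and phase, as in (18) with A = 1

tvec  = t(1:end-tau);
caf   = @(alpha, lp) abs(mean(lp .* exp(-1j*2*pi*alpha*tvec)));   % crude cyclic-correlation magnitude
lp_nc = x(1:end-tau) .* conj(x(1+tau:end));                       % non-conjugate lag product
lp_c  = x(1:end-tau) .* x(1+tau:end);                             % conjugate lag product

fprintf('non-conjugate at 1/T0        : %.3f\n', caf(1/T0, lp_nc));        % significant
fprintf('conjugate     at 1/T0        : %.3f\n', caf(1/T0, lp_c));         % near zero
fprintf('conjugate     at 2*f0        : %.3f\n', caf(2*f0, lp_c));         % significant
fprintf('conjugate     at 2*f0 + 1/T0 : %.3f\n', caf(2*f0 + 1/T0, lp_c));  % significant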

The next step in such modeling is to add noise, interference, and a propagation channel (such as discrete multipath).

After that, transmitter and receiver impairments can be considered, but we’re getting ahead of ourselves…

And so that’s it. The various conjugation configurations are required to fully study the statistical structure of communication signals that have been (perhaps imperfectly) converted to complex baseband (zero frequency). And it is desired to work with complex baseband signals because they require small sampling rates compared to the RF or even the intermediate-frequency (IF) signals. In the end, we can blame the existence of the various conjugation configurations on the desire to work with complex numbers.

Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.

8 thoughts on “Conjugation Configurations”

  1. First, I would like to thank you for this very interesting blog.
    I have two points of ambiguity, if you could possibly clarify:
    – To calculate the cyclic autocorrelation function, do we need both the conjugate and non-conjugate versions to get a final result, or is just one enough?
    – I tried to calculate the following function for the BPSK-signal code you provided but got an all-ones result: f=y(t)*y(t+tau)

    1. To calculate the cyclic autocorrelation function, do we need both the conjugate and non-conjugate versions to get a final result, or is just one enough?

      To calculate the “non-conjugate cyclic autocorrelation”, you need to have one factor conjugated, as in x(t)x^*(t+tau). To calculate the “conjugate cyclic autocorrelation” you need to have no conjugated factors, as in x(t)x(t+tau). To find the cyclic autocorrelation for the corresponding real-valued signal (x(t) is complex valued), you’ll need both the non-conjugate and conjugate cyclic autocorrelation functions, but most people don’t want to consider the real-valued signal.

      I tried to calculate the following function for the BPSK-signal code you provided but got an all-ones result: f=y(t)*y(t+tau)

      The code I posted at https://cyclostationary.files.wordpress.com/2015/03/make_rect_bpsk.doc produces the BPSK signal in a MATLAB variable named y_of_t. Is that what you mean by “y(t)”? y_of_t contains the rectangular-pulse BPSK signal, shifted by 0.05 Hz, and added to a small amount of noise. It is a vector, and so y_of_t*y_of_t will cause a MATLAB multiplication error. Let me know more details about what you tried…

      1. I considered your code as an example but changed some parameters, and then I calculated the CAF (cyclic autocorrelation function) using the estimator of Dandawate and Giannakis and got the figure attached.
        Please, I need to know: what is the relationship between the conjugate and non-conjugate cyclic autocorrelation for real-valued signals?
        For the product f=y(t)*y(t+tau), I did an element-wise multiplication as follows:

        function [ ftau ] = fun_tau( y, tau )
        %FUN_TAU  Form element-wise lag products of y for each lag in tau

        N = length(y);

        ftau = zeros(length(tau), N);
        ind_tau2 = 1:length(tau);
        ind_tau2 = ind_tau2 - ind_tau2(length(tau)/2);
        indt = 0;

        % negative lag values
        for ind_tau = ind_tau2(1:(length(tau)/2-1))
            indt = indt + 1;
            ftau(indt, 1:N+ind_tau) = y(1:N+ind_tau).*(y(1-ind_tau:N));
        end
        % positive lag values
        for ind_tau = ind_tau2(length(tau)/2:length(tau))
            indt = indt + 1;
            ftau(indt, 1:N-ind_tau) = y(1:N-ind_tau).*(y(1+ind_tau:N));
        end
        end

        1. Please, I need to know: what is the relationship between the conjugate and non-conjugate cyclic autocorrelation for real-valued signals?

          For a real-valued signal x(t), the non-conjugate and conjugate cyclic autocorrelation functions are identical.
          The non-conjugate and conjugate spectral correlation functions are identical too, of course.

          Regarding your function fun_tau(), and your earlier question about “get all ones”, if you modified my code so that the signal y is the noise-free rectangular-pulse BPSK signal, with no carrier shift applied, then for some values of tau you should get all ones. So I still need to know what y and tau are in the call to fun_tau().

          Also, I don’t see the figure. Did you try to post it along with the comment?

          Thanks for your patience!

  2. My y is the noise-free rect-pulse BPSK with the following modification:
    carr=exp(sqrt(-1)*2*pi*f0*t);
    and
    t=0:T_bit*num_bits-1;
    t=t*Te; % Te is the sampling period, which was not in the code you provided
    and for the set of lags: tau=-10*Te:Te:10*Te;

    PS: how can I paste a figure in the comment?

    Learning needs much patience, so don't worry 🙂

    1. What are the values of Te and f0?

      In my posted code, Te=1. You don’t need to carry the variable Te around when doing these kinds of calculations. It is sufficient to do all estimations using Te=1. Then, if you want to graph, say, |S_x^a(f)| versus f, you can simply scale the normalized-frequency vector f_norm by 1/Te.

      Suppose we used N samples to estimate the PSD S_x^0(f). Then our normalized-frequency vector is [-0.5:(1/N):0.5). Our physical-frequency vector is then [-0.5:(1/N):0.5)*(1/Te).
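      As a quick sketch in MATLAB (the values of N and Te below are just placeholders):

      N = 1024;                        % illustrative number of samples
      Te = 1e-4;                       % illustrative sampling period (seconds)
      f_norm = -0.5 : 1/N : 0.5-1/N;   % normalized-frequency vector (cycles/sample)
      f_phys = f_norm / Te;            % physical-frequency vector (Hz)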

      You don’t want to use my code to produce the baseband BPSK signal with Te=1 and then mix that with other signals (like the carrier) which are formed with Te =/= 1.

      I see that WordPress allows one to insert a URL that points to an image or video or whatever using the ‘img’ button in the row of buttons above the comment window. But I determined you can’t just paste an image into a comment.

    1. Equation (1) of the post you linked to is the non-conjugate cyclic periodogram, which is often called just the cyclic periodogram. Equation (2) of the post is the non-conjugate spectral correlation function, which is a limiting form of the averaged non-conjugate cyclic periodogram.

      The conjugate cyclic periodogram is (8) and the conjugate spectral correlation function is (7).
