SPTK: I and Q

Where does IQ (or I/Q) data come from?

Previous SPTK Post: Digital Filters Next SPTK Post: The Characteristic Function

Let’s really get into the mathematical details of “IQ data,” a phrase that appears in many CSP Blog posts and an awful lot of machine-learning papers on modulation recognition. Just what are “I” and “Q” anyway?

Jump straight to the Significance of IQ Data in CSP

Bandpass Signals and Their Complex Representation

To set the stage, we review the idea of a bandpass signal, which, in the context of manmade radio-frequency signals, arises from modulating a lowpass message or sensing signal. ‘Modulating’ here simply means multiplying by a sine wave, because we know from Fourier analysis that multiplying a signal x(t) by a sine wave (real-valued or complex-valued) results in a signal y(t) whose Fourier transform is a frequency-shifted version of the transform of x(t).

Suppose we have some real-valued message signal x(t), such as a voltage that is proportional to a music or speech (acoustic) signal, and we want to transmit that message using a radio-frequency signal near f_c Hertz. We can do that by modulating x(t) to create the transmitted signal y(t),

\displaystyle y(t) = x(t) \cos(2\pi f_c t + \theta), \ \ \ f_c > 0. \hfill (1)

If \displaystyle X(f) \Longleftrightarrow x(t), meaning X(f) is the Fourier transform of x(t), then we can use our Fourier-transform knowledge to obtain an expression for Y(f) \Longleftrightarrow y(t),

\displaystyle Y(f) = \frac{e^{i\theta}}{2} X(f-f_c) + \frac{e^{-i\theta}}{2} X(f + f_c). \hfill (2)

This frequency-translation process is illustrated in Figure 1.

Figure 1. Illustration of the frequency-translation process in which a baseband (lowpass) signal is transformed into an RF (bandpass) signal by multiplication with a sine-wave carrier.
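Here is a minimal numerical sketch in Python of (1) and (2); the sampling rate, carrier frequency, message bandwidth, and phase are arbitrary values chosen just for illustration. A lowpass noise-like message is multiplied by a cosine carrier, and the energy in its transform moves to the vicinity of \pm f_c.

import numpy as np

# Illustrative parameters (arbitrary choices)
fs = 1e6           # sampling rate, Hz
fc = 200e3         # carrier frequency, Hz
theta = 0.3        # carrier phase, radians
N = 2**16
t = np.arange(N) / fs

# Lowpass (baseband) message: white noise bandlimited to about 20 kHz
rng = np.random.default_rng(0)
freqs = np.fft.fftfreq(N, 1/fs)
X = np.fft.fft(rng.standard_normal(N))
X[np.abs(freqs) > 20e3] = 0.0
x = np.real(np.fft.ifft(X))

# Modulate onto the carrier, as in (1)
y = x * np.cos(2*np.pi*fc*t + theta)

# The transform of y is concentrated near +/- fc, as in (2)
Y = np.fft.fft(y)
peak = np.abs(freqs[np.argmax(np.abs(Y))])
print(f"|Y(f)| peaks near {peak/1e3:.1f} kHz (within 20 kHz of fc = {fc/1e3:.0f} kHz)")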

Why would we want to move the signal’s transform from near zero frequency to some much higher frequency f_c? The reason is radio-wave propagation. Electromagnetic waves will propagate through different media–air, water, free space, the Earth, a metal conductor, etc.–differently. Some media may absorb much of the wave’s energy over short wave-traveling distances, meaning that the wave must have very high power (relative to what we can generate using our electronics) to permit traveling the required distance.

The propagation distance for a given power level and propagation medium depends on the frequency of the wave. Therefore, if we wish to propagate our information across time and space to our receiver, we need to select a frequency that permits low-loss propagation through the chosen media for the distances of interest, which may vary from a few meters for a personal-area network to many kilometers for a wide-area network to tens of thousands of kilometers for geostationary satellites. Typical media (for example, the Earth’s atmosphere or the ionosphere) will require much higher frequencies for good propagation than the native frequencies of the signal to be transmitted.

In addition to propagation power loss, other aspects of propagation must be considered when choosing a suitable frequency band for transmission: refraction (redirection), diffraction (bending), and reflection (bouncing).

So the dilemma in RF transmission and reception is that we are forced to use high frequencies for transmission, but our messages are inherently low frequency. We need mathematical tools to understand the relationships between the original (“baseband”) signal and the transmitted (“RF”) signal. That is where IQ comes in.

Applying an arbitrary linear time-invariant filter to the transmitted (RF) signal leads to a distortion of the spectral shape seen for the baseband signal, as illustrated in Figure 2. We now have a bandpass real-valued signal where the PSD for positive frequencies is no longer necessarily symmetrical around f_c.

Figure 2. Illustration of the effects of an arbitrary linear time-invariant filter (propagation channel) on a bandpass (radio-frequency) bandlimited signal.

Our general interest here, then, is in signals like z(t) in Figure 2, where the power of the signal is concentrated around the carrier frequency f_c, which is very far from zero relative to the bandwidth of the signal, f_c \gg 2W.

Inphase and Quadrature Components

Suppose we have some bandpass signal v_{bp}(t) with Fourier transform V_{bp}(f) shown in Figure 3. The function (signal) v_{bp}(t) is a model for the actual transmitted waveform–no complex numbers are involved. We know that this signal is a real-valued sine wave with time-varying amplitude and/or phase,

\displaystyle v_{bp}(t) = A(t) \cos(2 \pi f_c t + \phi(t)). \hfill (3)

Figure 3. An arbitrary radio-frequency (bandpass) signal’s transform.

We will eventually want to sample such signals and manipulate their mathematical expressions as they pass through various systems such as filters. From the basic results of the sampling theorem, we’d have to sample such signals at a minimum rate of twice the largest frequency component of the signal, or f_s > 2(f_c+W), which could be a very large number indeed if the carrier frequency is something like 2.5 GHz (the WiFi/Bluetooth ISM band).
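For a concrete, purely illustrative, set of numbers, take f_c = 2.5 GHz and a one-sided message bandwidth of W = 10 MHz. Direct sampling of the RF signal then requires

\displaystyle f_s > 2(f_c + W) = 2(2.5\times 10^9 + 10^7) = 5.02\times 10^9 \ \mbox{\rm samples/second},

whereas we will see below that the complex envelope can be sampled at a rate on the order of 2W = 2\times 10^7 samples/second, roughly 250 times smaller.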

Can we represent v_{bp}(t) in a more convenient way? Let’s take a look at its structure.

First, we use a trigonometric identity to reexpress the compact (3) using two sine waves instead of one,

\cos(A+B) = \cos(A) \cos(B) - \sin(A)\sin(B), \hfill (4)

which leads to

\displaystyle v_{bp}(t) = A(t) \left[ \cos(2\pi f_c t) \cos(\phi(t)) -\sin(2\pi f_c t) \sin(\phi(t)) \right]. \hfill (5)

We see that the bandpass signal is the sum of two real-valued sine waves, \sin(2\pi f_c t) and \cos(2\pi f_c t), with time-varying amplitudes. Two sine waves are said to be in quadrature if their phases differ by ninety degrees (\pi/2 radians), as do the two sine waves involved here. This leads to identification of the in-phase and quadrature components of the bandpass signal,

\displaystyle v_{bp}(t) = \underbrace{\left[A(t) \cos(\phi(t))\right]}_{\mbox{\rm "in-phase component" } v_r(t)} \cos(2\pi f_c t) - \underbrace{\left[A(t) \sin(\phi(t))\right]}_{\mbox{\rm "quadrature component" } v_q(t)} \sin(2\pi f_c t). \hfill (6)

The in-phase component is often denoted simply by “I” and the quadrature component by “Q.” So now you know the origin of “IQ data.” (I’m using v_r(t) instead of v_i(t) because I’m using i to mean the square root of negative one, as usual on the CSP Blog.)
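As a quick sanity check of (3), (5), and (6), here is a short Python sketch that builds the bandpass signal both ways and confirms the two expressions are identical; the amplitude function, phase function, sampling rate, and carrier frequency are arbitrary choices made only for illustration.

import numpy as np

fs, fc, N = 1e6, 100e3, 4096            # illustrative values
t = np.arange(N) / fs

# Arbitrary slowly varying amplitude and phase
A   = 1.0 + 0.5*np.cos(2*np.pi*1e3*t)
phi = 0.8*np.sin(2*np.pi*2e3*t)

# Bandpass signal, eq. (3)
v_bp = A * np.cos(2*np.pi*fc*t + phi)

# In-phase and quadrature components, eq. (6)
v_r = A * np.cos(phi)                   # "I"
v_q = A * np.sin(phi)                   # "Q"
v_bp_iq = v_r*np.cos(2*np.pi*fc*t) - v_q*np.sin(2*np.pi*fc*t)

print(np.max(np.abs(v_bp - v_bp_iq)))   # ~1e-16: the two forms are identical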

The Complex Envelope

Let’s now take a look at V_{bp}(f) by taking the Fourier transform of our IQ representation (6) (and here is where lots of our SPTK tools pay off),

\displaystyle V_{bp}(f) = {\cal {F}} \left[ A(t)\cos(\phi(t))\cos(2\pi f_c t) - A(t) \sin(\phi(t)) \sin(2\pi f_c t) \right] \hfill (7)

\displaystyle = {\cal{F}} \left[ v_r(t) \cos(2\pi f_c t) - v_q(t) \sin(2\pi f_c t) \right] \hfill (8)

\displaystyle = V_r(f) \otimes \frac{1}{2} \left[ \delta(f-f_c) + \delta(f+f_c) \right] - V_q(f) \otimes \frac{1}{2i} \left[ \delta(f-f_c) - \delta(f+f_c) \right] \hfill (9)

\displaystyle = \frac{1}{2} \left[V_r(f-f_c) + V_r(f+f_c) \right] + \frac{i}{2} \left[ V_q(f-f_c) - V_q(f+f_c) \right], \hfill (10)

where

\displaystyle V_r(f) \Longleftrightarrow v_r(t) = A(t) \cos(\phi(t)), \hfill (11)

\displaystyle V_q(f) \Longleftrightarrow v_q(t) = A(t) \sin(\phi(t)). \hfill (12)

Grouping terms that are spectrally similar we obtain the alternate expression for the bandpass-signal transform given by

\displaystyle V_{bp}(f) = \frac{1}{2} \left[ V_r(f-f_c) + i V_q(f-f_c)\right] + \frac{1}{2} \left[ V_r(f+f_c) - i V_q(f+f_c)\right], \hfill (13)

which is illustrated in Figure 4.

Figure 4. Schematic illustration of the decomposition (13) of a real-valued bandpass signal into frequency-shifted and scaled versions of two lowpass signals: the in-phase and quadrature components.

If we know V_r(f) and V_q(f), we can construct V_{bp}(f) through (13), and therefore the real-valued signal v_{bp}(t). In this sense, the set \{v_r(t), v_q(t)\} is a representation of v_{bp}(t) (of course, we need to know f_c too).

It must be the case (provided f_c exceeds the bandwidth W of the lowpass components) that the positive-frequency portion of V_{bp}(f), which we can isolate using the unit-step function u(f), is equal to the frequency-shifted combination of the in-phase and quadrature transforms,

\displaystyle V_{bp}(f) u(f) = \frac{1}{2} \left[ V_r(f-f_c) + i V_q(f-f_c) \right], \hfill (14)

and, similarly, the negative-frequency portion is

\displaystyle V_{bp}(f) u(-f) = \frac{1}{2} \left[ V_r(f+f_c) - i V_q(f+f_c) \right]. \hfill (15)

But we can get the negative-frequency portion of the signal (15) from the positive-frequency portion (14), because v_{bp}(t) is real-valued and so its transform possesses the conjugate symmetry V_{bp}(-f) = V_{bp}^*(f). Consider the complex-envelope signal

\displaystyle V_{lp}(f) = \frac{1}{2} \left[ V_r(f) + i V_q(f)\right]. \hfill (16)

Recalling the definitions of v_r(t) and v_q(t),

\displaystyle v_r(t) = A(t) \cos(\phi(t)) \hfill (17)

\displaystyle v_q(t) = A(t) \sin(\phi(t)), \hfill (18)

and noting that the inverse Fourier transform of V_{lp}(f) is given by

\displaystyle {\cal{F}}^{-1} \left[ V_{lp}(f) \right] = v_{lp}(t) = \frac{1}{2} \left[ v_r(t) + iv_q(t)\right], \hfill (19)

we have

\displaystyle v_{lp}(t) = \frac{1}{2}A(t) \underbrace{\left[ \cos(\phi(t)) + i \sin(\phi(t)) \right]}_{e^{i \phi(t)}}, \hfill (20)

or

\displaystyle v_{lp}(t) = \frac{1}{2} A(t) e^{i \phi(t)}. \hfill (21)

Looking back at the bandpass signal v_{bp}(t), we can see the relationship between that signal and the complex envelope,

\displaystyle v_{bp}(t) = A(t) \cos(2\pi f_c t + \phi(t)) \hfill (22)

\displaystyle = {\mbox{\rm Re}} \left[ A(t) e^{i2\pi f_c t}e^{i\phi(t)} \right] \hfill (23)

\displaystyle \Longrightarrow v_{bp}(t) = 2 {\mbox{\rm Re}} \left[ v_{lp}(t) e^{i 2 \pi f_c t} \right]. \hfill (24)

So the real-valued (actual) RF signal v_{bp}(t) is represented by a fictitious complex-valued baseband signal v_{lp}(t) multiplied by a complex-valued sine wave.
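Continuing the same numerical sketch (the same illustrative A(t) and \phi(t) as before), we can verify (21) and (24) directly: the fictitious complex-valued lowpass signal reconstructs the real-valued RF signal exactly.

import numpy as np

fs, fc, N = 1e6, 100e3, 4096
t = np.arange(N) / fs
A   = 1.0 + 0.5*np.cos(2*np.pi*1e3*t)
phi = 0.8*np.sin(2*np.pi*2e3*t)
v_bp = A * np.cos(2*np.pi*fc*t + phi)                      # eq. (3)

v_lp = 0.5 * A * np.exp(1j*phi)                            # complex envelope, eq. (21)
v_bp_rec = 2.0 * np.real(v_lp * np.exp(1j*2*np.pi*fc*t))   # eq. (24)

print(np.max(np.abs(v_bp - v_bp_rec)))                     # ~1e-16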

The Fourier relationships between the complex envelope and RF signal are also of interest in the context of computing the complex envelope from a real signal,

\displaystyle {\cal{F}} \left[ v_{bp}(t) \right] = V_{bp}(f) = {\cal{F}} \left[ 2 {\mbox{\rm Re}} \left(v_{lp}(t) e^{i 2 \pi f_c t} \right) \right] \hfill (25)

\displaystyle = 2 {\cal{F}} \left[\frac{1}{2} \left( v_{lp}(t)e^{i2\pi f_c t} + v_{lp}^*(t) e^{-i 2\pi f_c t}\right) \right] \hfill (26)

\displaystyle = {\cal{F}} \left[ v_{lp}(t)e^{i2\pi f_c t} + v_{lp}^*(t) e^{-i2\pi f_c t} \right] \hfill (27)

\displaystyle = {\cal{F}} \left[v_{lp}(t)e^{i2 \pi f_c t} \right] + {\cal{F}}\left[v_{lp}^*(t) e^{-i2\pi f_c t} \right]. \hfill (28)

Now, our previous work on the Fourier transform and frequency shifting (modulation) leads to the transform pair

\displaystyle v_{lp}(t)e^{i 2 \pi f_c t} \Longleftrightarrow V_{lp}(f-f_c). \hfill (29)

But what is the transform of \displaystyle v_{lp}^*(t)e^{-i2\pi f_c t}? Let’s take it step by step,

\displaystyle {\cal{F}} \left[ v_{lp}^*(t)e^{-i2\pi f_c t}\right] = \int_{-\infty}^\infty v_{lp}^*(t) e^{-i 2 \pi f_c t} e^{-i 2 \pi f t} \, dt \hfill (30)

\displaystyle = \left[ \int_{-\infty}^\infty v_{lp}(t) e^{i2\pi (f+f_c)t} \, dt \right]^* \hfill (31)

\displaystyle = \left[ \int_{-\infty}^\infty v_{lp}(t) e^{-i2\pi (-f - f_c)t} \, dt \right]^* \hfill(32)

\displaystyle = V_{lp}^*(-f-f_c). \hfill (33)

Putting it all together, we have the transform of the bandpass signal in terms of transforms of the complex envelope,

\displaystyle V_{bp}(f) = V_{lp}(f-f_c) + V_{lp}^*(-f-f_c). \hfill (34)

The real-valued bandpass (RF) signal, the analytic signal (the complex-valued signal containing only the positive-frequency content of v_{bp}(t)), and the complex envelope are illustrated in Figure 5.

Figure 5. The RF, analytic, and complex-envelope signals in the frequency domain.
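We can also check (34) numerically. In the Python sketch below (same illustrative signal as before), the frequency-shifted transforms V_{lp}(f-f_c) and V_{lp}^*(-f-f_c) are realized in the time domain as v_{lp}(t)e^{i2\pi f_c t} and v_{lp}^*(t)e^{-i2\pi f_c t}, per (29) and (33), so the check reduces to comparing two DFTs.

import numpy as np

fs, fc, N = 1e6, 100e3, 4096
t = np.arange(N) / fs
A   = 1.0 + 0.5*np.cos(2*np.pi*1e3*t)
phi = 0.8*np.sin(2*np.pi*2e3*t)
v_bp = A * np.cos(2*np.pi*fc*t + phi)         # RF signal, eq. (3)
v_lp = 0.5 * A * np.exp(1j*phi)               # complex envelope, eq. (21)

# Right-hand side of (34):
#   V_lp(f - fc)   <-->  v_lp(t) exp(+i 2 pi fc t)     (eq. (29))
#   V_lp*(-f - fc) <-->  v_lp*(t) exp(-i 2 pi fc t)    (eq. (33))
rhs = np.fft.fft(v_lp*np.exp(1j*2*np.pi*fc*t)) \
    + np.fft.fft(np.conj(v_lp)*np.exp(-1j*2*np.pi*fc*t))

lhs = np.fft.fft(v_bp)                        # left-hand side of (34)
print(np.max(np.abs(lhs - rhs)))              # numerically zero (round-off level)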

Working with the Complex Envelope

The complex envelope is convenient to work with because it has low bandwidth relative to the bandpass (RF) signal, and so we can sample it at a much lower sample rate and still preserve all the signal’s information. The downside is that now we have to work with complex signals and complex numbers.

We can always form a lowpass model of the signal (the complex envelope) and a lowpass model of the propagation channel (a version of the channel response frequency-shifted by f_c) so that the complex data we work with faithfully represents the action of the real channel on the real signal. Suppose the RF (bandpass) signal x_{bp}(t) passes through a channel with transfer function H_{bp}(f) (see Figure 2). Then the output of the channel is y_{bp}(t),

\displaystyle Y_{bp}(f) = X_{bp}(f) H_{bp}(f) \hfill (35)

\displaystyle Y_{lp}(f) = X_{lp}(f) H_{lp}(f),  \hfill (36)

where

\displaystyle H_{lp}(f) = H_{bp}(f+f_c)u(f+f_c). \hfill (37)
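Here is a Python sketch of the equivalence expressed by (35)-(37): filtering the RF signal with a bandpass channel and then extracting the complex envelope gives the same result as filtering the complex envelope directly with the lowpass-equivalent channel. The channel response, bandwidth, and carrier are arbitrary illustrative choices; the carrier is placed at f_s/4 so that the frequency shifts fall exactly on DFT bins.

import numpy as np

fs, N = 1e6, 2**14
fc = fs/4                                   # carrier exactly on DFT bin N/4
t = np.arange(N)/fs
f = np.fft.fftfreq(N, 1/fs)
rng = np.random.default_rng(1)

# Complex envelope x_lp bandlimited to about 20 kHz, and the corresponding RF signal
Xlp = np.fft.fft(rng.standard_normal(N) + 1j*rng.standard_normal(N))
Xlp[np.abs(f) > 20e3] = 0.0
x_lp = np.fft.ifft(Xlp)
x_bp = 2*np.real(x_lp*np.exp(1j*2*np.pi*fc*t))          # eq. (24)

# An arbitrary bandpass channel with conjugate-symmetric (real-impulse-response)
# frequency response: gain ripple plus a 10-microsecond delay
def H_bp(freq):
    return (1 + 0.5*np.sin(80*np.pi*np.abs(freq)/fs)) * np.exp(-1j*2*np.pi*freq*1e-5)

# RF-domain filtering, eq. (35), then complex-envelope extraction:
# keep the positive-frequency content and shift it down by fc
Y_bp = np.fft.fft(x_bp) * H_bp(f)
y_lp = np.fft.ifft(Y_bp * (f > 0)) * np.exp(-1j*2*np.pi*fc*t)

# Lowpass-equivalent filtering, eqs. (36)-(37)
H_lp = H_bp(f + fc) * ((f + fc) > 0)
y_lp_equiv = np.fft.ifft(np.fft.fft(x_lp) * H_lp)

print(np.max(np.abs(y_lp - y_lp_equiv)))                # numerically zero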

A practical circuit to extract the in-phase and quadrature components would consist of two parallel branches. The input to each branch is the RF signal voltage. The upper branch multiplies the RF signal by a sine wave \cos(2\pi f_0 t) and the lower branch multiplies the RF signal by \sin(2\pi f_0 t). It is important that these two sine waves are in quadrature–they must have the same frequency and differ in phase by \pi/2 radians or 90 degrees. Each branch then lowpass filters its product to reject the components near twice the carrier frequency. The output of the upper branch is the continuous-time in-phase component I(t) and the output of the lower branch is the continuous-time quadrature component Q(t). These two continuous-time signals can then be synchronously sampled at a rate appropriate to the bandwidth of the signal (or scene).
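The Python sketch below is a discrete-time rendering of that circuit. A brick-wall FFT filter stands in for the analog lowpass filters that follow the two mixers, all parameter values are illustrative, and the lower branch multiplies by -\sin(2\pi f_0 t) so that its output is +Q(t) rather than its negative (a sign-convention choice).

import numpy as np

fs, f0, N = 1.024e6, 256e3, 2**14      # illustrative values; LO frequency equals the carrier
t = np.arange(N)/fs
f = np.fft.fftfreq(N, 1/fs)

# A bandpass signal with known I and Q (same construction as the earlier sketches)
A   = 1.0 + 0.5*np.cos(2*np.pi*1e3*t)
phi = 0.8*np.sin(2*np.pi*2e3*t)
v_r, v_q = A*np.cos(phi), A*np.sin(phi)
v_bp = v_r*np.cos(2*np.pi*f0*t) - v_q*np.sin(2*np.pi*f0*t)     # eq. (6)

def lowpass(x, cutoff):
    # Brick-wall lowpass filter via the FFT: a stand-in for the lowpass
    # filters that follow the two mixers in a practical receiver
    X = np.fft.fft(x)
    X[np.abs(f) > cutoff] = 0.0
    return np.real(np.fft.ifft(X))

# Upper (in-phase) and lower (quadrature) branches; the factor of 2 restores
# the amplitude halved by the mix-and-filter operation
I = 2*lowpass( v_bp*np.cos(2*np.pi*f0*t), 50e3)
Q = 2*lowpass(-v_bp*np.sin(2*np.pi*f0*t), 50e3)

print(np.max(np.abs(I - v_r)), np.max(np.abs(Q - v_q)))   # both negligibly small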

When the frequency f_0 used in the complex-envelope extraction process is not exactly equal to the center frequency f_c of the signal, the obtained complex envelope will not be centered at zero frequency. Instead, it will be centered at the difference between the two frequencies, f_c - f_0. Often this difference is small compared to the bandwidth W, and it is called the carrier-frequency offset (CFO), which we have encountered many times on the CSP Blog.
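To see where the offset comes from, regroup the phase of the RF signal around f_0 instead of f_c (here \tilde{v}_{lp}(t) denotes the envelope obtained using f_0),

\displaystyle v_{bp}(t) = A(t) \cos\left(2\pi f_0 t + \left[2\pi (f_c - f_0) t + \phi(t)\right]\right),

so that the extraction process delivers

\displaystyle \tilde{v}_{lp}(t) = \frac{1}{2} A(t) e^{i\left[2\pi(f_c-f_0)t + \phi(t)\right]} = v_{lp}(t) e^{i 2\pi (f_c - f_0)t} \Longleftrightarrow \tilde{V}_{lp}(f) = V_{lp}\left(f - (f_c - f_0)\right),

which is just a frequency-shifted version of the desired complex envelope.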

Regardless of whether f_0 = f_c or not, the obtained complex-valued in-phase and quadrature data is referred to as either the complex envelope or the complex-baseband signal.

The Significance of IQ Data in CSP

In CSP, we prefer to work with the low-sampling-rate complex-baseband data because the required sampling rate is on the order of the bandwidth W rather than on the order of the carrier frequency f_c. That way, a processing data block of length N samples covers many more seconds, which means it covers many more instances of the various involved random variables that make up the signal. And all our CSP work involves the ability to average, in various ways, over those random-variable instances.

However, the choice to use complex-valued data has consequences. The main consequence is that we must use multiple versions of standard moments and cumulants that take into account the different ways one can choose to conjugate or not conjugate factors in a delay product like

\displaystyle L_x(t, \boldsymbol{\tau}; n,m) = \prod_{j=1}^n x^{(*)_j} (t + \tau_j). \hfill (38)

This is explained in detail in the post on conjugation configurations.
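To make (38) concrete, here is a small Python sketch that forms the second-order (n = 2) lag product with zero conjugations and with one conjugation for a rectangular-pulse BPSK complex envelope having a small frequency offset (all parameter values are arbitrary illustrative choices). The two conjugation choices produce their strongest sine-wave components at different frequencies, which is exactly why the conjugation configuration matters in CSP.

import numpy as np

rng = np.random.default_rng(2)
fs, f0, sps, nsym = 1.0, 0.05, 8, 4096          # normalized rates, illustrative
bits = rng.integers(0, 2, nsym)*2 - 1           # +/- 1 symbols
x = np.repeat(bits, sps).astype(complex)        # rectangular-pulse BPSK envelope
t = np.arange(x.size)/fs
x *= np.exp(1j*2*np.pi*f0*t)                    # apply a carrier-frequency offset

tau = 4
L_nc = x[:-tau] * x[tau:]                       # no conjugations:  x(t) x(t+tau)
L_c  = x[:-tau] * np.conj(x[tau:])              # one conjugation:  x(t) x*(t+tau)

# The two lag products contain finite-strength sine-wave components at different
# frequencies: near 2*f0 for the non-conjugate product, near 0 for the conjugate one
for name, L in [("non-conjugate", L_nc), ("conjugate", L_c)]:
    F = np.fft.fftfreq(L.size, 1/fs)
    peak = F[np.argmax(np.abs(np.fft.fft(L)))]
    print(f"{name:14s} lag product: strongest sine-wave component near {peak:+.3f}")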

Previous SPTK Post: Digital Filters Next SPTK Post: The Characteristic Function

Author: Chad Spooner

I'm a signal processing researcher specializing in cyclostationary signal processing (CSP) for communication signals. I hope to use this blog to help others with their cyclo-projects and to learn more about how CSP is being used and extended worldwide.

4 thoughts on “SPTK: I and Q”

  1. Hello Chad!

    It’s been a while since we last communicated. My hardware implementation project is progressing well, and I’m considering next steps. Could you advise if the STFT algorithm has any significant applications in the field of cyclostationary analysis? I’m currently thinking about using STFT in the customized feature extraction layer to see its effectiveness.

    Best Regards,

    1. The short-time Fourier transform (STFT) is associated with several kinds of time-frequency analysis functions, and the names of those functions seem to have been evolving and changing lately with the infusion of machine learning into RF signal analysis spaces.

      The basic STFT is a complex-valued matrix, and you can go back, losslessly, to the original time-domain data. So all magnitude and phase information relating to the signal(s) in the transformed data is preserved.

      The STFT matrix can be converted to a matrix of periodograms by computing the squared magnitude of each row and multiplying each row by the reciprocal of the transform length after that. Further, the spectrogram can then be computed from that matrix by convolving each row with a pulse-like function (e.g., a rectangle). That is the conventional (old?) use of “spectrogram”: a stacked set of power spectrum estimates (not complex-valued Fourier transforms, not the transform magnitude, not the periodogram), where each power spectrum estimate corresponds to a different temporal window applied to the long input signal. Even Wikipedia gets some of this wrong, conflating the squared magnitude with the power spectrum (the squared magnitude isn’t even the periodogram, quite).
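      Here is a minimal numpy sketch of that STFT-to-periodogram-to-spectrogram path; the windowing and overlap handling are deliberately simplified, and the function and parameter names are mine.

      import numpy as np

      def stft(x, nfft, hop):
          # Complex-valued STFT matrix: one length-nfft transform per row
          # (no data-tapering window here, for simplicity)
          n_blocks = (len(x) - nfft)//hop + 1
          return np.array([np.fft.fft(x[k*hop : k*hop + nfft]) for k in range(n_blocks)])

      def spectrogram_from_stft(S, nfft, smooth_len=11):
          # Squared magnitude divided by the transform length gives a matrix of
          # periodograms; smoothing each row in frequency with a pulse-like
          # (rectangular) function gives a stack of power-spectrum estimates
          P = np.abs(S)**2 / nfft
          g = np.ones(smooth_len) / smooth_len
          return np.array([np.convolve(row, g, mode='same') for row in P])

      x = np.random.default_rng(0).standard_normal(2**14)
      S = stft(x, nfft=256, hop=128)              # complex STFT (lossless information)
      SG = spectrogram_from_stft(S, nfft=256)     # stack of smoothed periodograms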

      In CSP, the spectrogram isn’t all that useful (that is just my opinion, not a fact), but a sequence of cyclic periodograms taken over time in a sliding-block style is intimately related to the time smoothing method of spectral correlation estimation.

      In many machine-learning papers on modulation recognition, the spectrogram is used as an input to a CNN, or the input is the magnitude of the STFT, and performance isn’t good for digital signals like QAM. But more recently, researchers are using the complex-valued STFT as a complex-valued (two-channel) image in more-or-less conventional image-processing CNN structures, which is an improvement over the magnitude-only STFT and spectrogram approaches, since the signals have significant phase differences. But this approach is essentially the same as using the I/Q data itself as an input, which we know has serious performance and generalization problems when the neural network model is of the image-recognition type.

    1. Thanks Mansoor!

      The post is the basic math behind the “what” of I/Q samples. Mansoor provides a link to a cutting-edge efficient “how” to get I/Q samples. You can think of it as an alternative to the method I sketched starting just after (37), where I say “A practical circuit …”.
