CSP-Based Time-Difference-of-Arrival Estimation

Let’s discuss an application of cyclostationary signal processing (CSP): time-delay estimation. The idea is that sampled data is available from two antennas (sensors), and there is a common signal component in each data set. The signal component in one data set is a time-delayed or time-advanced version of the component in the other set. This can happen when a radio-frequency (RF) signal propagates as a plane wave and impinges on the two antennas. In that case, the signal arrives at the sensors with a time difference proportional to the projection of the sensor separation onto the direction of propagation, and so time-delay estimation is also commonly referred to as time-difference-of-arrival (TDOA) estimation.

[Figure: TDOA physical setup. A plane wave from a distant transmitter arrives at two receivers; the geometry defines the angle of arrival \theta and the excess propagation distance x.]

Consider the diagram shown to the right. A distant transmitter emits a signal that is well-modeled as a plane wave once it reaches our two receivers. An individual wavefront of the signal arrives at the two sensors at different times.

The line segment AB is perpendicular to the direction of propagation for the RF signal. The angle \theta is called the angle of arrival (AOA). If we can estimate the AOA, we can determine the direction from which the signal arrives, which could be useful in a variety of settings. Since the triangle ABC is a right triangle, we have

\displaystyle \cos (\theta) = \frac{x}{d}. \hfill (1)

When \theta = 0, the wavefronts first strike receiver 2, then must propagate over x meters before striking receiver 1. On the other hand, when \theta = 90^\circ, each wavefront strikes the two receivers simultaneously. In the former case, the TDOA is maximum, and in the latter it is zero. The TDOA can also be negative, so that angles of arrival spanning 180 azimuthal degrees can be determined by estimating the TDOA.

In general, the wavefront must traverse x meters between striking receiver 2 and striking receiver 1,

\displaystyle x = d \cos(\theta). \hfill (2)

Assuming the speed of propagation is c meters/sec, the TDOA is given by

\displaystyle D = \frac{x}{c} = \frac{d\cos(\theta)}{c} \mbox{\rm \ \ seconds}. \hfill (3)
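To make the geometry concrete, here is a minimal Python sketch that evaluates (3) and inverts it to recover the AOA from a TDOA estimate. The function names and the 10-meter baseline are just illustrative choices, not part of any estimator discussed later in the post.

```python
import numpy as np

def tdoa_from_aoa(theta_deg, d, c=3e8):
    """TDOA in seconds for a plane wave arriving at two sensors
    separated by d meters, per Eq. (3): D = d*cos(theta)/c."""
    return d * np.cos(np.deg2rad(theta_deg)) / c

def aoa_from_tdoa(D, d, c=3e8):
    """Invert Eq. (3) to recover the angle of arrival in degrees.
    Only |D| <= d/c is geometrically feasible; clip to guard against
    estimation noise pushing the argument outside [-1, 1]."""
    return np.rad2deg(np.arccos(np.clip(c * D / d, -1.0, 1.0)))

# Example: 10-m baseline, signal arriving 30 degrees off the baseline.
D = tdoa_from_aoa(30.0, d=10.0)        # about 28.9 ns
print(D, aoa_from_tdoa(D, d=10.0))     # recovers ~30 degrees
```

Note that at \theta = 0 the TDOA attains its maximum value d/c, and at \theta = 90^\circ it is zero, consistent with the discussion above.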

In this post I’ll review several methods of TDOA estimation, some of which employ CSP and some of which do not. We’ll see some of the advantages and disadvantages of the various classes of methods through inspection, simulation, and application to collected data.

Continue reading “CSP-Based Time-Difference-of-Arrival Estimation”

Square-Root Raised-Cosine PSK/QAM

Let’s look at a somewhat more realistic textbook signal: the PSK/QAM signal with independent and identically distributed (IID) symbols and a square-root raised-cosine (SRRC) pulse function. The SRRC pulse is used in many practical systems and in many theoretical and simulation studies. In this post, we’ll look at how the free parameter of the pulse function, called the roll-off parameter or excess-bandwidth parameter, affects the power spectrum and the spectral correlation function.
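For readers who want to experiment alongside the post, here is a hedged Python sketch of the standard closed-form SRRC pulse and a simple SRRC BPSK generator built from it. The parameter values (roll-off 0.35, 10 samples per symbol) are illustrative choices, not the ones used in the post’s figures.

```python
import numpy as np

def srrc_pulse(beta, sps, span):
    """Square-root raised-cosine pulse with roll-off beta in (0, 1],
    sps samples per symbol, truncated to `span` symbols on each side.
    Standard closed form; the two removable singularities are handled
    by their limiting values."""
    t = np.arange(-span * sps, span * sps + 1) / sps  # time in symbol units
    h = np.empty_like(t)
    # Removable singularity at t = 0
    h[t == 0] = 1.0 - beta + 4.0 * beta / np.pi
    # Removable singularities at t = +/- 1/(4*beta)
    sing = np.isclose(np.abs(t), 1.0 / (4.0 * beta))
    h[sing] = (beta / np.sqrt(2.0)) * (
        (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
        + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta))
    )
    reg = ~(sing | (t == 0))
    tr = t[reg]
    h[reg] = (np.sin(np.pi * tr * (1 - beta))
              + 4 * beta * tr * np.cos(np.pi * tr * (1 + beta))) / (
              np.pi * tr * (1 - (4 * beta * tr) ** 2))
    return h / np.sqrt(np.sum(h ** 2))  # unit-energy normalization

# SRRC BPSK with roll-off 0.35 and 10 samples per symbol (illustrative values).
# The occupied bandwidth is (1 + beta) times the symbol rate.
sps, nsym = 10, 4096
h = srrc_pulse(0.35, sps, span=8)
symbols = np.random.choice([-1.0, 1.0], nsym)
upsampled = np.zeros(nsym * sps)
upsampled[::sps] = symbols
x = np.convolve(upsampled, h, mode='same')
```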

Continue reading “Square-Root Raised-Cosine PSK/QAM”

CSP Estimators: The Strip Spectral Correlation Analyzer

In this post I present a very useful blind cycle-frequency estimator known in the literature as the strip spectral correlation analyzer (SSCA) (The Literature [R3-R5]). We’ve covered the basics of the frequency-smoothing method (FSM) and the time-smoothing method (TSM) of estimating the spectral correlation function (SCF) in previous posts. The TSM and FSM are efficient estimators of the SCF when it is needed for only one or a few cycle frequencies (CFs). The SSCA, on the other hand, is efficient when we want to estimate the SCF for all CFs.

See also an alternate method of exhaustive SCF estimation: The FFT Accumulation Method.

Continue reading “CSP Estimators: The Strip Spectral Correlation Analyzer”

Second-Order Estimator Verification Guide

In this post I provide some tools for the do-it-yourself CSP practitioner. One of the goals of this blog is to help new CSP researchers and students write their own estimators and algorithms. This post contains spectral correlation function and cyclic autocorrelation function estimates, together with numerically evaluated formulas, that can be compared to those produced by anybody’s code.

The signal of interest is, of course, our rectangular-pulse BPSK signal with symbol rate 0.1 (normalized frequency units) and carrier offset 0.05. You can download a MATLAB script for creating such a signal here.
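If you would rather not download the script, the following Python sketch generates a signal with the same basic parameters (rectangular pulse, symbol rate 0.1, carrier offset 0.05). It is my own quick stand-in, not the MATLAB script linked above, so details such as the noise level and the random-number seeding are arbitrary.

```python
import numpy as np

def textbook_bpsk(n_samples, fbit=0.1, fc=0.05, snr_db=10.0, seed=None):
    """Rectangular-pulse BPSK with IID bits, symbol rate fbit and carrier
    offset fc in normalized-frequency units, plus complex AWGN."""
    rng = np.random.default_rng(seed)
    sps = int(round(1.0 / fbit))                  # samples per symbol (10)
    n_sym = int(np.ceil(n_samples / sps))
    bits = rng.choice([-1.0, 1.0], n_sym)
    baseband = np.repeat(bits, sps)[:n_samples]   # rectangular pulse shaping
    n = np.arange(n_samples)
    s = baseband * np.exp(2j * np.pi * fc * n)    # apply carrier offset
    noise_var = np.mean(np.abs(s) ** 2) / (10 ** (snr_db / 10))
    w = np.sqrt(noise_var / 2) * (rng.standard_normal(n_samples)
                                  + 1j * rng.standard_normal(n_samples))
    return s + w

x = textbook_bpsk(65536, seed=1)
```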

The formula for the SCF for a textbook BPSK signal is published in several places (The Literature [R47], My Papers [6]) and depends mainly on the Fourier transform of the pulse function used by the textbook signal.

We’ll compare the numerically evaluated spectral correlation formula with estimates produced by my version of the frequency-smoothing method (FSM). The FSM estimates and the theoretical functions are contained in a MATLAB mat file here. (I had to change the extension of the mat file from .mat to .doc to allow posting it to WordPress.) In all of the results shown here and in the downloadable file, the processed data-block length is 65536 samples and the FSM smoothing width is 0.02 Hz. A rectangular smoothing window is used. For all cycle frequencies except zero (non-conjugate), a zero-padding factor of two is used in the FSM.
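As a point of reference, here is a stripped-down Python sketch of a frequency-smoothed SCF estimate at a single non-conjugate cycle frequency with a rectangular smoothing window. It rounds alpha/2 to the nearest FFT bin and omits the zero-padding step mentioned above, so treat it as a sanity check rather than a reproduction of my FSM code.

```python
import numpy as np

def fsm_scf(x, alpha, df=0.02):
    """Frequency-smoothing-method estimate of the non-conjugate SCF at
    cycle frequency alpha (normalized units) with a rectangular
    smoothing window of width df."""
    N = len(x)
    X = np.fft.fft(x)
    k = int(round(alpha * N / 2))            # half-shift alpha/2 in bins
    # cyclic periodogram: (1/N) * X(f + alpha/2) * conj(X(f - alpha/2))
    cp = np.roll(X, -k) * np.conj(np.roll(X, k)) / N
    # smooth over frequency with a rectangular window of width df (circularly)
    M = int(round(df * N)) | 1               # force an odd number of bins
    g = np.ones(M) / M
    scf = np.convolve(np.concatenate((cp, cp, cp)), g, mode='same')[N:2 * N]
    return np.fft.fftshift(np.fft.fftfreq(N)), np.fft.fftshift(scf)

# Symbol-rate feature of the BPSK signal generated above (alpha = 0.1)
f, S = fsm_scf(x, alpha=0.1)
```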

For the cyclic autocorrelation, we provide estimates using two methods: inverse Fourier transformation of the spectral correlation estimate and direct averaging of the second-order lag product in the time domain.
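The second (time-domain) method is easy to prototype. Here is a minimal Python sketch that forms the lag product directly and then converts from the asymmetric-lag convention to the symmetric one via a phase factor. It covers only non-negative lags and only the non-conjugate function, so it is a starting point rather than a match for the downloadable results.

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, max_lag):
    """Direct time-domain estimate of the non-conjugate cyclic
    autocorrelation at cycle frequency alpha for lags 0..max_lag.
    The asymmetric lag product x(n+tau)*conj(x(n)) is averaged first,
    then rotated by exp(-j*pi*alpha*tau) to match the symmetric-lag
    definition."""
    N = len(x)
    rot = np.conj(x) * np.exp(-2j * np.pi * alpha * np.arange(N))
    taus = np.arange(max_lag + 1)
    caf = np.array([np.sum(x[tau:] * rot[:N - tau]) / N for tau in taus])
    return taus, caf * np.exp(-1j * np.pi * alpha * taus)

# Symbol-rate feature of the BPSK signal generated above
taus, R = cyclic_autocorrelation(x, alpha=0.1, max_lag=50)
```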

Continue reading “Second-Order Estimator Verification Guide”

Cyclic Temporal Cumulants

In this post I continue the development of the theory of higher-order cyclostationarity (My Papers [5,6]) that I began here. It is largely taken from my doctoral work (download my dissertation here).

This is a long post. To make it worthwhile, I’ve placed some movies of cyclic-cumulant estimates at the end. Or just skip to the end now if you’re impatient!

In my work on cyclostationary signal processing (CSP), the most useful tools are those for estimating second-order statistics, such as the cyclic autocorrelation, spectral correlation function, and spectral coherence function. However, as we discussed in the post on Textbook Signals, there are some situations (perhaps only academic; see my question in the Textbook post) for which higher-order cyclostationarity is required. In particular, a probabilistic approach to blind modulation recognition for ideal (textbook) digital QAM, PSK, and CPM requires higher-order cyclostationarity because such signals have similar or identical spectral correlation functions and PSDs. (Other high-SNR non-probabilistic approaches can still work, such as blind constellation extraction.)

Recall that in the post introducing higher-order cyclostationarity, I mentioned that one encounters a bit of a puzzle when attempting to generalize experience with second-order cyclostationarity to higher orders. This is the puzzle of pure sine waves (My Papers [5]). Let’s look at pure and impure sine waves, and see how they lead to the probabilistic parameters widely known as cyclic cumulants.

Continue reading “Cyclic Temporal Cumulants”

A Gallery of Spectral Correlation

In this post I provide plots of the spectral correlation for a variety of simulated textbook signals and several collected communication signals. The plots show the variety of cycle-frequency patterns that arise from the disparate approaches to digital communication signaling. The distinguishability of these patterns, combined with the inability to distinguish the signals using the power spectrum alone, leads to a powerful set of classification (modulation recognition) features (My Papers [16, 25, 26, 28]).

In all cases, the cycle frequencies are blindly estimated by the strip spectral correlation analyzer (The Literature [R3, R4]), and those estimates are used by the FSM to compute the spectral correlation function. MATLAB is then used to plot the magnitude of the spectral correlation and conjugate spectral correlation, as specified by the determined non-conjugate and conjugate cycle frequencies.

There are three categories of signal types in this gallery: textbook signals, collected signals, and feature-rich signals. The latter comprises some collected signals (e.g., LTE) and some simulated radar signals. For the first two signal categories, the three-dimensional surface plots I’ve been using will suffice for illustrating the cycle-frequency patterns and the behavior of the spectral correlation function over frequency. But for the last category, the number of cycle frequencies is so large that the three-dimensional surface is difficult to interpret–it is a visual mess. For these signals, I’ll plot the maximum spectral correlation magnitude over spectral frequency f versus the detected cycle frequency \alpha (as in this post).

A complementary gallery of cyclic autocorrelation functions can be found here.

Continue reading “A Gallery of Spectral Correlation”

Textbook Signals

What good is having a blog if you can’t offer a rant every once in a while? In this post I talk about what I call textbook signals, which are mathematical models of communication signals that are used by many researchers in statistical signal processing for communications.

We’ve already encountered, and used frequently, the most common textbook signal of all: rectangular-pulse BPSK with independent and identically distributed (IID) bits. We’ve been using this signal to illustrate cyclostationary signal processing concepts and estimators as they have been introduced. It’s a good choice from the point of view of consistency across the posts, and it is easy to generate and understand. However, it is not a good choice from the perspective of realism. It is rare to encounter a textbook BPSK signal in the practice of signal processing for communications.

I use the term textbook because the textbook signals can be found in standard textbooks, such as Proakis (The Literature [R44]). Textbook signals stand in opposition to signals used in the world, such as OFDM in LTE, slotted GMSK in GSM, 8PAM VSB with synchronization bits in ATSC-DTV, etc.

Typical communication signals combine a textbook signal with an access mechanism to yield the final physical-layer signal–the signal that is actually transmitted (My Papers [11], [16]). What is important for us, here on the CSP blog, is that this combination usually results in a signal with radically different cyclostationarity than the textbook component. So it is not enough to understand textbook signals’ cyclostationarity. We must also understand the cyclostationarity of the real-world signal, which may be sufficiently complex to render mathematical modeling and analysis impossible (at least for me).

Continue reading “Textbook Signals”

SCF Estimate Quality: The Resolution Product

The two non-parametric spectral-correlation estimators we’ve looked at so far–the frequency-smoothing and time-smoothing methods–require the choice of key estimator parameters. These are the total duration of the processed data block, T, and the spectral resolution F.

For the frequency-smoothing method (FSM), an FFT with length equal to the data-block length T is required, and the spectral resolution is equal to the width F of the smoothing function g(f). For the time-smoothing method (TSM), multiple FFTs with lengths T_{tsm} = T / K are required, and the frequency resolution is 1/T_{tsm} (in normalized frequency units).

The choice for the block length T is partially guided by practical concerns, such as computational cost and whether the signal is persistent or transient in nature, and partially by the desire to obtain a reliable (low-variance) spectral correlation estimate. The choice for the frequency (spectral) resolution is typically guided by the desire for a reliable estimate.
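A small worked comparison may help fix the relationship between the two methods’ parameters. The numbers below are the ones used elsewhere on the blog, chosen only for illustration; the informal rule of thumb is that estimate variance shrinks as the time-frequency resolution product grows.

```python
# Resolution product T*F for the two estimators (normalized units).
# FSM: block length T samples, smoothing-window width F.
T, F = 65536, 0.02
print("FSM resolution product:", T * F)   # 1310.72

# TSM: K blocks of length T_tsm, so F = 1/T_tsm and the product is
# T*F = (K*T_tsm)*(1/T_tsm) = K, the number of averaged blocks.
T_tsm = 256
K = T // T_tsm
print("TSM resolution product:", K)       # 256
```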

Continue reading “SCF Estimate Quality: The Resolution Product”

The Spectral Coherence Function

In this post I introduce the spectral coherence function, or just coherence. It deserves its own post because the coherence is a useful detection statistic for blindly determining significant cycle frequencies of arbitrary data records.

Let’s start with reviewing the standard correlation coefficient \rho defined for two random variables X and Y as

\rho = \displaystyle \frac{E[(X - m_X)(Y - m_Y)]}{\sigma_X \sigma_Y}, \hfill (1)

where m_X and m_Y are the mean values of X and Y, and \sigma_X and \sigma_Y are the standard deviations of X and Y. That is,

m_X = E[X] \hfill (2)

m_Y = E[Y] \hfill (3)

\sigma_X^2 = E[(X-m_X)^2] \hfill (4)

\sigma_Y^2 = E[(Y-m_Y)^2] \hfill (5)

So the correlation coefficient is the covariance between X and Y divided by the geometric mean of the variances of X and Y.
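As a quick numerical illustration of (1)-(5), the following Python snippet estimates \rho from samples of two correlated random variables. The particular construction (Y equal to X plus independent noise) is just an example, chosen so that the true coefficient is known to be 1/\sqrt{2}.

```python
import numpy as np

# Sample correlation coefficient for Y = X + independent noise,
# for which the true coefficient is 1/sqrt(2) ~= 0.707.
rng = np.random.default_rng(0)
X = rng.standard_normal(100_000)
Y = X + rng.standard_normal(100_000)

rho = np.mean((X - X.mean()) * (Y - Y.mean())) / (X.std() * Y.std())
print(rho, 1 / np.sqrt(2))   # both about 0.707
```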

Continue reading “The Spectral Coherence Function”

CSP Estimators: The Time Smoothing Method

In a previous post, we introduced the frequency-smoothing method (FSM) of spectral correlation function (SCF) estimation. The FSM convolves a pulse-like smoothing window g(f) with the cyclic periodogram to form an estimate of the SCF. An advantage of the method is that it allows fine control over the spectral resolution of the SCF estimate through the choice of g(f), but the drawbacks are that it requires a Fourier transform as long as the data record undergoing processing, and the convolution can be expensive. However, the expense of the convolution can be mitigated by using a rectangular g(f).

In this post, we introduce the time-smoothing method (TSM) of SCF estimation. Instead of smoothing a single cyclic periodogram over spectral frequency, the TSM averages multiple cyclic periodograms over time. When the non-conjugate cycle frequency of zero is used, this method produces an estimate of the power spectral density, and is essentially Bartlett's method of spectrum estimation. The TSM can be found in My Papers [6] (Eq. (54)), and in other places in the literature.
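Here is a minimal Python sketch of the idea, assuming a block length that places alpha/2 on an FFT bin and including the per-block phase compensation that accounts for each block’s time offset. It is a toy version for intuition, not the TSM code referenced in My Papers [6].

```python
import numpy as np

def tsm_scf(x, alpha, block_len):
    """Time-smoothing-method estimate of the non-conjugate SCF at cycle
    frequency alpha: average phase-compensated cyclic periodograms over
    non-overlapping blocks of length block_len."""
    L, K = block_len, len(x) // block_len
    k_half = int(round(alpha * L / 2))               # shift of alpha/2 in bins
    acc = np.zeros(L, dtype=complex)
    for k in range(K):
        X = np.fft.fft(x[k * L:(k + 1) * L])
        cp = np.roll(X, -k_half) * np.conj(np.roll(X, k_half)) / L
        acc += cp * np.exp(-2j * np.pi * alpha * k * L)  # compensate block time offset
    return np.fft.fftshift(np.fft.fftfreq(L)), np.fft.fftshift(acc / K)

# Rectangular-pulse BPSK (symbol rate 0.1, carrier offset 0.05); the
# symbol-rate feature appears at non-conjugate cycle frequency 0.1.
rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], 2048)
x = np.repeat(bits, 10) * np.exp(2j * np.pi * 0.05 * np.arange(20480))
f, S = tsm_scf(x, alpha=0.1, block_len=320)  # alpha=0 recovers the Bartlett PSD
```

The spectral resolution here is 1/block_len, so the block length trades resolution against the number of averages K, which controls the estimate’s variance.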

Continue reading “CSP Estimators: The Time Smoothing Method”

Signal Selectivity

In this post I describe and illustrate the most important property of cyclostationary statistics: signal selectivity. The idea is that the cyclostationary parameters of a single signal can be estimated even when that signal is corrupted by strong noise and cochannel interferers. Cochannel means that an interferer occupies a frequency band that partially or completely overlaps the frequency band of the signal of interest.

A mixture of signals, whether cochannel or not, is modeled by the simple sum of the signals, as in

x(t) = s_1(t) + s_2(t) + \ldots + s_K(t) + w(t), \hfill (1)

where w(t) is additive noise. We can write this more compactly as

x(t) = \displaystyle \sum_{k=1}^K s_k(t) + w(t). \hfill (2)
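To see selectivity in action, here is a small Python experiment, a toy check rather than one of the estimators from the other posts: two cochannel rectangular-pulse BPSK signals plus noise, with the lag-zero conjugate cyclic feature evaluated at twice each signal’s carrier offset. Each feature is dominated by its own signal even though the signals and the noise overlap completely in frequency. The parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 65536
n = np.arange(N)

def rect_bpsk(sps, fc):
    """Rectangular-pulse BPSK with IID symbols, sps samples per symbol,
    carrier offset fc (normalized frequency units)."""
    bits = rng.choice([-1.0, 1.0], N // sps + 1)
    return np.repeat(bits, sps)[:N] * np.exp(2j * np.pi * fc * n)

# Cochannel mixture per Eq. (2): two overlapping signals plus noise
s1 = rect_bpsk(sps=10, fc=0.05)   # conjugate cycle frequency at 2*0.05 = 0.10
s2 = rect_bpsk(sps=8,  fc=0.06)   # conjugate cycle frequency at 2*0.06 = 0.12
w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
x = s1 + s2 + w

# Lag-zero conjugate cyclic feature: large only at a cycle frequency
# belonging to one of the signals; the noise contributes essentially nothing.
for alpha in (0.10, 0.12, 0.11):
    print(alpha, abs(np.mean(x * x * np.exp(-2j * np.pi * alpha * n))))
```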

Continue reading “Signal Selectivity”

Introduction to Higher-Order Cyclostationarity

We’ve seen how to define second-order cyclostationarity in the time- and frequency-domains, and we’ve looked at ideal and estimated spectral correlation functions for a synthetic rectangular-pulse BPSK signal. In future posts, we’ll look at how to create simple spectral correlation estimators, but in this post I want to introduce the topic of higher-order cyclostationarity (HOCS).  This post is more conceptual in nature; for mathematical details about HOCS, see the posts on cyclic cumulants and cyclic polyspectra. Estimators of higher-order parameters, such as cyclic cumulants and cyclic moments, are discussed in this post.

To contrast with HOCS, we’ll refer to second-order parameters such as the cyclic autocorrelation and the spectral correlation function as parameters of second-order cyclostationarity (SOCS).

The first question we might ask is Why do we care about HOCS? And one answer is that SOCS does not provide all the statistical information about a signal that we might need to perform some signal-processing task. There are two main limitations of SOCS that drive us to HOCS.

Continue reading “Introduction to Higher-Order Cyclostationarity”