Gaussian and binary signals are in some sense at opposite ends of the pure-impure sine-wave spectrum.
Remember when we derived the cumulant as the solution to the pure $n$th-order sine-wave problem? It sounded good at the time, I hope. But here I describe a curious special case where the interpretation of the cumulant as the pure component of a nonlinearly generated sine wave seems to break down.
Spread-spectrum signals are used to enable shared-bandwidth communication systems (CDMA), precision position estimation (GPS), and secure wireless data transmission.
In this post we look at direct-sequence spread-spectrum (DSSS) signals, which can be usefully modeled as a kind of PSK signal. DSSS signals are used in a variety of real-world situations, including the familiar CDMA and WCDMA signals, covert signaling, and GPS. My colleague Antonio Napolitano has done some work on a large class of DSSS signals (The Literature [R11, R17, R95]), resulting in formulas for their spectral correlation functions, and I’ve made some remarks about their cyclostationary properties myself here and there (My Papers [16]).
Since I wrote the paper review in this post, I’ve analyzed three of O’Shea’s data sets (O’Shea is with the company DeepSig, so I’ve been referring to the data sets as DeepSig’s in other posts): All BPSK Signals, More on DeepSig’s Data Sets, and DeepSig’s 2018 Data Set. The data set relating to this paper is analyzed in All BPSK Signals. Preview: It is heavily flawed.
Modulation recognition is the process of assigning one or more modulation-class labels to a provided time-series data sequence.
In this post, we start a discussion of what I consider the ultimate application of the theory of cyclostationary signals: Automatic Modulation Recognition. My relevant papers are My Papers [16,17,25,26,28,30,32,33,38,43,44]. See also my machine-learning modulation-recognition critiques by clicking on Machine Learning in the CSP Blog Categories on the right side of any post or page.
Higher-order statistics in the frequency domain for cyclostationary signals. As complicated as it gets at the CSP Blog.
In this post we take a first look at the spectral parameters of higher-order cyclostationarity (HOCS). In previous posts, I have introduced the topic of HOCS and have looked at the temporal parameters, such as cyclic cumulants and cyclic moments. Those temporal parameters have proven useful in modulation classification and parameter estimation settings, and will likely be an important part of my ultimate radio-frequency scene analyzer.
The spectral parameters of HOCS have not proven to be as useful as the temporal parameters unless you include the trivial case where the moment/cumulant order is equal to two. In that case, the spectral parameters reduce to the spectral correlation function, which is extremely useful in CSP (see the TDOA and signal-detection posts for examples).
Update: See also some other reviews/take-downs of cyclic correntropy on the CSP Blog here and here.
I recently came across a published paper with the title Cyclostationary Correntropy: Definition and Application, by Aluisio Fontes et al. It is published in a journal called Expert Systems with Applications (Elsevier). Actually, it wasn’t the first time I’d seen this work by these authors. I had reviewed a similar paper in 2015 for a different journal.
I was surprised to see the paper published because I had a lot of criticisms of the original paper, and the other reviewers apparently agreed, since the paper was rejected. So I did my job, as did the other reviewers, and we tried to keep a flawed paper from entering the literature, where it would stay forever, causing problems for readers.
The editor(s) of the journal Expert Systems with Applications did not ask me to review the paper, so I couldn't give them the benefit of the work I had already put into the manuscript, and apparently the editor(s) did not themselves see sufficient flaws in the paper to merit rejection.
It stings, of course, when you submit a paper that you think is good, and it is rejected. But it also stings when a paper you’ve carefully reviewed, and rejected, is published anyway.
Fortunately I have the CSP Blog, so I’m going on another rant. After all, I already did this the conventional rant-free way.
I came across a paper by Cohen and Eldar, researchers at the Technion in Israel. You can get the paper on the Arxiv site here. The title is “Sub-Nyquist Cyclostationary Detection for Cognitive Radio,” and the setting is spectrum sensing for cognitive radio. I have a question about the paper that I’ll ask below.
PSK and QAM signals form the building blocks for a large number of practical real-world signals. Understanding their probability structure is crucial to understanding those more complicated signals.
Let's look into the statistical properties of a class of textbook signals that encompasses digital quadrature amplitude modulation (QAM), phase-shift keying (PSK), and pulse-amplitude modulation (PAM). I'll call the class simply digital QAM (DQAM), and all of its members have an analytical-signal mathematical representation of the form

$$x(t) = \sum_{k=-\infty}^{\infty} a_k\, p(t - kT_0 - t_0)\, e^{i(2\pi f_0 t + \phi_0)}. \tag{1}$$

In this model, $k$ is the symbol index, $1/T_0$ is the symbol rate, $f_0$ is the carrier frequency (sometimes called the carrier frequency offset), $t_0$ is the symbol-clock phase, and $\phi_0$ is the carrier phase. The finite-energy function $p(t)$ is the pulse function (sometimes called the pulse-shaping function). Finally, the random variable $a_k$ is called the symbol, and has a discrete distribution that is called the constellation.
Model (1) is a textbook signal when the sequence of symbols is independent and identically distributed (IID). This condition rules out real-world communication aids such as periodically transmitted bursts of known symbols, adaptive modulation (where the constellation may change in response to the vagaries of the propagation channel), some forms of coding, etc. Also, when the pulse function is a rectangle (with width $T_0$), the signal is even less realistic, and therefore more textbooky.
We will look at the moments and cumulants of this general model in this post. Although the model is textbook, we could use it as a building block to form more realistic, less textbooky, signal models. Then we could find the cyclostationarity of those models by applying signal-processing transformation rules that define how the cumulants of the output of a signal processor relate to those for the input.
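To make model (1) concrete, here is a minimal Python sketch of a textbook DQAM generator, assuming a rectangular pulse and a QPSK constellation; the function name, parameter names, and default values are my illustrative choices, not part of the model.

```python
import numpy as np

# Minimal sketch of the textbook DQAM model (1): IID symbols, rectangular pulse.
# The function name, parameter names, and default values are illustrative choices.
def textbook_dqam(num_symbols=1000, T0=10, f0=0.05, phi0=0.0,
                  constellation=np.exp(2j * np.pi * np.arange(4) / 4)):
    """Generate a complex-valued (analytical-signal) textbook DQAM signal.

    T0 is the symbol period in samples (symbol rate 1/T0), f0 is the carrier
    frequency offset in cycles/sample, phi0 is the carrier phase, and the
    default constellation is QPSK."""
    symbols = np.random.choice(constellation, size=num_symbols)  # IID symbols a_k
    baseband = np.repeat(symbols, T0)                            # rectangular pulse p(t)
    n = np.arange(baseband.size)
    return baseband * np.exp(1j * (2 * np.pi * f0 * n + phi0))

x = textbook_dqam()
print(x.size, np.mean(np.abs(x) ** 2))   # 10000 samples, unit power for QPSK
```

Swapping in a different constellation array or a shaped pulse (say, a square-root raised-cosine) is how you'd start down the road toward less textbooky signals.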
How does the cyclostationarity of a signal change when it is subjected to common signal-processing operations like addition, multiplication, and convolution?
It is often useful to know how a signal-processing operation affects the probabilistic parameters of a random signal. For example, if I know the power spectral density (PSD) of some signal $x(t)$, and I filter it using a linear time-invariant transformation with impulse-response function $h(t)$, producing the output $y(t)$, then what is the PSD of $y(t)$? This input-output relationship is well known and quite useful. The relationship is

$$S_y(f) = |H(f)|^2 S_x(f), \tag{1}$$

where $H(f)$ is the transfer function, the Fourier transform of $h(t)$.
Because the mathematical models of real-world communication signals can be constructed by subjecting idealized textbook signals to various signal-processing operations, such as filtering, it is of interest to us here at the CSP Blog to know how the spectral correlation function of the output of a signal processor is related to the spectral correlation function for the input. Similarly, we’d like to know such input-output relationships for the cyclic cumulants and the cyclic polyspectra.
Another benefit of knowing these CSP input-output relationships is that they tend to build insight into the meaning of the probabilistic parameters. For example, in the PSD input-output relationship (1), we already know that the transfer function at $f = f_0$ scales the input frequency component at $f_0$ by the complex number $H(f_0)$. So it makes sense that the PSD at $f_0$ is scaled by the squared magnitude of $H(f_0)$. If the filter transfer function is zero at $f_0$, then the density of averaged power at $f_0$ should vanish too.
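As a numerical sanity check on (1), the following sketch filters white noise with an arbitrary FIR filter and compares the measured output PSD to $|H(f)|^2 S_x(f)$; the filter design and the Welch-estimator settings are arbitrary choices for illustration.

```python
import numpy as np
from scipy import signal

# Numerical check of the PSD relation (1): S_y(f) = |H(f)|^2 S_x(f).
# The FIR filter and the Welch settings are arbitrary choices for illustration.
rng = np.random.default_rng(1)
x = rng.standard_normal(2**18)            # white input, so S_x(f) is flat
b = signal.firwin(64, 0.25)               # some LTI filter h(t) with transfer function H(f)
y = signal.lfilter(b, 1, x)               # output y(t) = h(t) convolved with x(t)

f, Sx = signal.welch(x, nperseg=1024)     # estimate S_x(f)
_, Sy = signal.welch(y, nperseg=1024)     # estimate S_y(f)
_, H = signal.freqz(b, worN=f, fs=1.0)    # H(f) on the same frequency grid

# In the filter passband the ratio S_y / (|H|^2 S_x) should hover near 1.
passband = f < 0.1
print(np.median(Sy[passband] / (np.abs(H[passband]) ** 2 * Sx[passband])))
```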
So, let’s look at this kind of relationship for CSP parameters. All of these results can be found, usually with more mathematical detail, in My Papers [6, 13].
CSP shines when the problem involves strong noise or cochannel interference. Here we look at CSP-based signal-presence detection as a function of SNR and SIR.
Let’s take a look at a class of signal-presence detectors that exploit cyclostationarity and in doing so illustrate the good things that can happen with CSP whenever cochannel interference is present, or noise models deviate from simple additive white Gaussian noise (AWGN). I’m referring to the cycle detectors, the first CSP algorithms I ever studied (My Papers [1,4]).
So why do I obsess over cyclostationary signals and cyclostationary signal processing? What’s the big deal, in the end? In this post I discuss my view of the ultimate use of cyclostationary signal processing (CSP): Radio-Frequency Scene Analysis (RFSA). Eventually, I hope to create a kind of Star Trek Tricorder for RFSA.
Time-delay estimation can be used to determine the angle-of-arrival of a signal impinging on two spatially separated sensors. This estimation problem gets hard when there is cochannel interference present.
Let’s discuss an application of cyclostationary signal processing (CSP): time-delay estimation. The idea is that sampled data is available from two antennas (sensors), and there is a common signal component in each data set. The signal component in one data set is the time-delayed or time-advanced version of the component in the other set. This can happen when a plane-wave radio frequency (RF) signal propagates and impinges on the two antennas. In such a case, the RF signal arrives at the sensors with a time difference proportional to the distance between the sensors along the direction of propagation, and so the time-delay estimation is also commonly referred to as time-difference-of-arrival (TDOA) estimation.
Figure 1. Illustration of the geometric relationship between a transmitter and two receivers in the context of time-delay estimation (or time-difference-of-arrival estimation).
Consider the diagram shown in Figure 1. A distant transmitter emits a signal that is well-modeled as a plane wave once it reaches our two receivers. An individual wavefront of the signal arrives at the two sensors at different times.
The line segment AB is perpendicular to the direction of propagation for the RF signal. The angle $\theta$ is called the angle of arrival (AOA). If we can estimate the AOA, we can tell the direction from which the signal arrives, which could be useful in a variety of settings. Since the triangle ABC is a right triangle, the extra distance a wavefront must travel between the two receivers is related to the AOA and the receiver separation $d$ by

$$\Delta d = d \cos(\theta).$$

When $\theta = 0$, the wavefronts first strike receiver 2, then must propagate over $d$ meters before striking receiver 1. On the other hand, when $\theta = 90^\circ$, each wavefront strikes the two receivers simultaneously. In the former case, the TDOA is maximum, and in the latter it is zero. The TDOA can be negative too, so that a full $180$ degrees of azimuth can be determined by estimating the TDOA.
In general, the wavefront must traverse $\Delta d = d\cos(\theta)$ meters between striking receiver 2 and striking receiver 1.
Assuming the speed of propagation is $c = 3 \times 10^8$ meters/sec, the TDOA is given by

$$\Delta t = \frac{d \cos(\theta)}{c}.$$
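A tiny numerical sketch of this relation, with made-up values for the receiver separation $d$ and the AOA:

```python
import numpy as np

# Plane-wave TDOA from the AOA: delta_t = d * cos(theta) / c.
# The receiver separation and AOA values below are made up for illustration.
c = 3e8                    # propagation speed, meters/sec
d = 100.0                  # separation between the two receivers, meters
theta = np.deg2rad(60.0)   # angle of arrival

tdoa = d * np.cos(theta) / c
print(f"TDOA = {tdoa * 1e9:.1f} ns")   # 100 * cos(60 deg) / 3e8 ~ 166.7 ns
```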
In this post I’ll review several methods of TDOA estimation, some of which employ CSP and some of which do not. We’ll see some of the advantages and disadvantages of the various classes of methods through inspection, simulation, and application to captured data. Consider this post as a starting point to a study or development effort rather than as a definitive performance characterization.
Using complex-valued signal representations is convenient but also has complications: You have to consider all possible choices for conjugating different factors in a moment.
When we considered complex-valued signals and second-order statistics, we ended up with two kinds of parameters: non-conjugate and conjugate. So we have the non-conjugate autocorrelation, which is the expected value of the normal second-order lag product in which only one of the factors is conjugated (consistent with the normal definition of variance for complex-valued random variables),

$$R_x(t, \tau) = E\left[ x(t + \tau/2)\, x^*(t - \tau/2) \right],$$

and the conjugate autocorrelation, which is the expected value of the second-order lag product in which neither factor is conjugated,

$$R_{x^*}(t, \tau) = E\left[ x(t + \tau/2)\, x(t - \tau/2) \right].$$
The complex-valued Fourier-series amplitudes of these functions of time are the non-conjugate and conjugate cyclic autocorrelation functions, respectively.
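To make the distinction concrete, here is a short Python sketch (my own illustrative construction, not code from this post) that estimates both kinds of cyclic autocorrelation for a rectangular-pulse BPSK signal by simple time averaging of the conjugated and unconjugated lag products:

```python
import numpy as np

# Sketch: estimate the non-conjugate and conjugate cyclic autocorrelations of a
# rectangular-pulse BPSK signal by time averaging. All parameter values here
# (T0, f0, the lag, the cycle frequencies) are illustrative choices.
T0, f0, num_symbols = 10, 0.07, 6000          # samples/symbol, carrier offset, symbols
rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=num_symbols)
bb = np.repeat(bits, T0)                      # rectangular pulse, IID +/-1 symbols
x = bb * np.exp(2j * np.pi * f0 * np.arange(bb.size))

def cyclic_autocorr(x, alpha, tau, conj_second=True):
    """Time-average estimate of <x(t+tau) x^(*)(t) exp(-i 2 pi alpha t)>.
    conj_second=True gives the non-conjugate function, False the conjugate one."""
    t = np.arange(x.size)
    second = np.conj(x) if conj_second else x
    return np.mean(np.roll(x, -tau) * second * np.exp(-2j * np.pi * alpha * t))

# For this BPSK signal, non-conjugate cycle frequencies lie at multiples of the
# symbol rate 1/T0, and conjugate cycle frequencies lie at 2*f0 plus multiples of 1/T0.
print(abs(cyclic_autocorr(x, 1.0 / T0, tau=5)))                    # strong
print(abs(cyclic_autocorr(x, 2 * f0, tau=5, conj_second=False)))   # strong
print(abs(cyclic_autocorr(x, 0.123, tau=5)))                       # ~0: not a cycle frequency
```

The point of the exercise: the non-conjugate feature shows up at the symbol rate, the conjugate feature at twice the carrier offset, and you only see both if you consider both conjugation choices.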
I never explained the fundamental reason why both the non-conjugate and conjugate functions are needed. In this post, I rectify that omission. The reason for the many different choices of conjugated factors in higher-order cyclic moments and cumulants is also provided. These choices of conjugation configurations, or conjugation patterns, also appear in the more conventional theory of higher-order statistics as applied to stationary signals.
Cyclic cumulants are the amplitudes of the Fourier-series components of the time-varying cumulant function for a cyclostationary signal. They degenerate to conventional cumulants when the signal is stationary.
In this post I continue the development of the theory of higher-order cyclostationarity (My Papers [5,6]) that I began here. It is largely taken from my doctoral work (download my dissertation here).
This is a long post. To make it worthwhile, I’ve placed some movies of cyclic-cumulant estimates at the end. Or just skip to the end now if you’re impatient!
Recall that in the post introducing higher-order cyclostationarity, I mentioned that one encounters a bit of a puzzle when attempting to generalize experience with second-order cyclostationarity to higher orders. This is the puzzle of pure sine waves (My Papers [5]). Let’s look at pure and impure sine waves, and see how they lead to the probabilistic parameters widely known as cyclic cumulants.
What factors influence the quality of a spectral correlation function estimate?
The two non-parametric spectral-correlation estimators we've looked at so far–the frequency-smoothing and time-smoothing methods–require the choice of key estimator parameters. These are the total duration of the processed data block, $T$, and the spectral resolution $\Delta f$.
For the frequency-smoothing method (FSM), an FFT with length equal to the data-block length $T$ is required, and the spectral resolution is equal to the width $F$ of the smoothing function $g(f)$. For the time-smoothing method (TSM), multiple FFTs with lengths $T_{tsm} \ll T$ are required, and the frequency resolution is $1/T_{tsm}$ (in normalized frequency units).
The choice for the block length is partially guided by practical concerns, such as computational cost and whether the signal is persistent or transient in nature, and partially by the desire to obtain a reliable (low-variance) spectral correlation estimate. The choice for the frequency (spectral) resolution is typically guided by the desire for a reliable estimate.
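The reliability-versus-resolution tradeoff is easy to see numerically. The sketch below smooths the periodogram of white noise (the degenerate $\alpha = 0$ case of frequency smoothing) with rectangular windows of increasing width; the data length and window widths are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of the frequency-smoothing idea: smooth the periodogram of white noise
# (the alpha = 0 case) with a width-F rectangular window g(f). The data length
# and the window widths are arbitrary illustrative choices.
N = 2**16
rng = np.random.default_rng(2)
x = rng.standard_normal(N)                    # white noise: the true PSD is flat

periodogram = np.abs(np.fft.fft(x)) ** 2 / N  # raw estimate: fine resolution, high variance

for width in (1, 65, 1025):                   # smoothing width in FFT bins (F = width/N)
    g = np.ones(width) / width                # rectangular smoothing window
    smoothed = np.convolve(periodogram, g, mode='valid')
    # Wider window: coarser spectral resolution but a more reliable (lower-variance) estimate.
    print(f"width {width:5d} bins -> std of PSD estimate = {smoothed.std():.3f}")
```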
Cross correlation functions can be normalized to create correlation coefficients. The spectral correlation function is a cross correlation and its correlation coefficient is called the coherence.
In this post I introduce the spectral coherence function, or just coherence. It deserves its own post because the coherence is a useful detection statistic for blindly determining significant cycle frequencies of arbitrary data records. See the posts on the strip spectral correlation analyzer and the FFT accumulation method for examples.
Let's start by reviewing the standard correlation coefficient $\rho$ defined for two random variables $X$ and $Y$ as

$$\rho = \frac{E\left[(X - m_X)(Y - m_Y)\right]}{\sigma_X \sigma_Y},$$

where $m_X$ and $m_Y$ are the mean values of $X$ and $Y$, and $\sigma_X$ and $\sigma_Y$ are the standard deviations of $X$ and $Y$. That is,

$$m_X = E[X], \quad \sigma_X^2 = E\left[(X - m_X)^2\right],$$

and similarly for $m_Y$ and $\sigma_Y^2$. So the correlation coefficient is the covariance between $X$ and $Y$ divided by the geometric mean of the variances of $X$ and $Y$.
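A quick numerical illustration with made-up data:

```python
import numpy as np

# Correlation-coefficient sketch with made-up data: y is correlated with x by construction.
rng = np.random.default_rng(3)
x = rng.standard_normal(10000)
y = 0.8 * x + 0.6 * rng.standard_normal(10000)   # unit variance, covariance 0.8 with x

rho = np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())
print(rho, np.corrcoef(x, y)[0, 1])              # the two estimates agree, both near 0.8
```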
Why do we need or care about higher-order cyclostationarity? Because second-order cyclostationarity is insufficient for our signal-processing needs in some important cases.
To contrast with HOCS, we’ll refer to second-order parameters such as the cyclic autocorrelation and the spectral correlation function as parameters of second-order cyclostationarity (SOCS).
The first question we might ask is Why do we care about HOCS? And one answer is that SOCS does not provide all the statistical information about a signal that we might need to perform some signal-processing task. There are two main limitations of SOCS that drive us to HOCS.