Why does zero-padding help in various estimators of the spectral correlation and spectral coherence functions?
Updates to the exchange: May 7, 2021, and May 14, 2021.
Reader Clint posed a great question about zero-padding in the frequency-smoothing method (FSM) of spectral correlation function estimation. The question prompted some pondering on my part, and I went ahead and did some experiments with the FSM to illustrate my response to Clint. The exchange with Clint (ongoing!) was deep and detailed enough that I thought it deserved to be seen by other CSP-Blog readers. One of the problems with developing material, or refining explanations, in the Comments sections of the CSP Blog is that these sections are not nearly as visible in the navigation tools featured on the Blog as are the Posts and Pages.
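To give a flavor of the issue, here is a minimal MATLAB sketch (not the FSM code I actually use in the exchange) of the grid-alignment problem that zero-padding addresses: the FSM forms the product of two spectra shifted by plus and minus half the cycle frequency, and for an unpadded FFT grid those shifts rarely correspond to integer bin offsets. The block length, cycle frequency, and padding factors below are assumed example values.

```matlab
% Minimal sketch: how zero-padding refines the frequency grid on which the
% FSM forms the product X(f + alpha/2) X*(f - alpha/2). The parameters are
% assumed example values, not those from the comment exchange.
N     = 32768;        % data-block length (samples)
alpha = 0.1;          % desired cycle frequency (normalized Hz)

for P = [1 2 5]       % zero-padding factors to compare
    Nfft  = P*N;                      % padded transform length
    k     = round(alpha/2 * Nfft);    % nearest integer bin shift for +/- alpha/2
    aGrid = 2*k/Nfft;                 % cycle frequency actually realized on the grid
    fprintf('P = %d: realizable alpha = %.8f, error = %.2e\n', ...
            P, aGrid, abs(aGrid - alpha));
end
```

The smaller the mismatch between the desired cycle frequency and the nearest realizable one, the less the cyclic feature is attenuated in the resulting estimate.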
Learning machine learning for radio-frequency signal-processing problems, continued.
I continue with my foray into machine learning (ML) by considering whether we can use widely available ML tools to create a machine that can output accurate power spectrum estimates. Previously we considered the perhaps simpler problem of learning the Fourier transform. See here and here.
Along the way I’ll expose my ignorance of the intricacies of machine learning and my apparent inability to find the correct hyperparameter settings for any problem I look at. But, that’s where you come in, dear reader. Let me know what to do!
The costs strongly depend on whether you have prior cycle-frequency information or not.
Let’s look at the computational costs for spectral-correlation analysis using the three main estimators I’ve previously described on the CSP Blog: the frequency-smoothing method (FSM), the time-smoothing method (TSM), and the strip spectral correlation analyzer (SSCA).
We’ll see that the FSM and TSM are the low-cost options when estimating the spectral correlation function for a few cycle frequencies and that the SSCA is the low-cost option when estimating the spectral correlation function for many cycle frequencies. That is, the TSM and FSM are good options for directed analysis using prior information (values of cycle frequencies) and the SSCA is a good option for exhaustive blind analysis, for which there is no prior information available.
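As a rough illustration of that tradeoff, here is a back-of-envelope MATLAB sketch. The cost models are crude assumptions of my own (one FFT plus a smoothing pass per cycle frequency for the FSM; a channelizer plus per-strip FFTs for the SSCA), not the detailed formulas developed in the post, and the block length and strip width are arbitrary.

```matlab
% Crude operation-count comparison under assumed cost models:
%   FSM : one length-N FFT plus an O(N) smoothing pass per cycle frequency
%   SSCA: an Np-point channelizer applied across the block plus an N-point
%         FFT per strip, covering essentially all cycle frequencies at once
N      = 2^20;        % data-block length (samples)
Np     = 64;          % SSCA strip (channelizer) width
Nalpha = 1:1000;      % number of cycle frequencies requested of the FSM

fsm_cost  = Nalpha * (N*log2(N) + N);                            % grows with each alpha
ssca_cost = (N*Np*log2(Np) + Np*N*log2(N)) * ones(size(Nalpha)); % fixed, exhaustive

figure;
semilogy(Nalpha, fsm_cost, Nalpha, ssca_cost);
xlabel('Number of requested cycle frequencies');
ylabel('Approximate operation count');
legend('FSM (per cycle frequency)', 'SSCA (exhaustive)', 'Location', 'southeast');
```

The crossover point, where one exhaustive SSCA pass becomes cheaper than repeated FSM runs, is the quantity to watch.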
Unlike conventional spectrum analysis for stationary signals, CSP has three kinds of resolutions that must be considered in all CSP applications, not just two.
In this post, we look at the ability of various CSP estimators to distinguish cycle frequencies, temporal changes in cyclostationarity, and spectral features. These abilities are quantified by the resolution properties of CSP estimators.
If the estimate is formed from a data block of length $T$, then the temporal resolution $\Delta t$ of the estimate is approximately $T$, the cycle-frequency resolution $\Delta \alpha$ is about $1/T$, and the spectral resolution $\Delta f$ depends strongly on the particular estimator and its parameters. The resolution product $\Delta t \Delta f$ was discussed in this post. The fundamental result for the resolution product is that it must be very much larger than unity in order to obtain an SCF estimate with low variance.
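As a concrete (and purely illustrative) numerical check, consider an FSM-style estimate with the assumed parameters below; the point is simply that the product of the temporal and spectral resolutions must come out much greater than one.

```matlab
% Quick numerical check of the three resolutions for an FSM-style estimate.
% The block length and smoothing width are assumed example values.
T       = 65536;       % data-block length (samples, normalized sampling rate)
delta_f = 0.02;        % spectral (smoothing) resolution in normalized Hz

delta_t     = T;                 % temporal resolution is about the block length
delta_alpha = 1/T;               % cycle-frequency resolution is about 1/T
res_product = delta_t * delta_f; % time-frequency resolution product

fprintf('temporal res = %d, cycle-freq res = %.2e, resolution product = %.0f\n', ...
        delta_t, delta_alpha, res_product);
% A low-variance SCF estimate requires res_product >> 1.
```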
Radio-frequency scene analysis is much more complex than modulation recognition. A good first step is to blindly identify the frequency intervals for which significant non-noise energy exists.
In this post, I discuss a signal-processing algorithm that has almost nothing to do with cyclostationary signal processing (CSP). Almost. The topic is automatic spectral segmentation, which I also call band-of-interest (BOI) detection. When attempting to perform automatic radio-frequency scene analysis (RFSA), we may be confronted with a data block that contains multiple signals in a number of distinct frequency subbands. Moreover, these signals may be turning on and off within the data block. To apply our cyclostationary signal processing tools effectively, we would like to isolate these signals in time and frequency to the greatest extent possible using linear time-invariant filtering (for separating in the frequency dimension) and time-gating (for separating in the time dimension). Then the isolated signal components can be processed serially using CSP.
It is very important to remember that even perfect spectral and temporal segmentation will not solve the cochannel-signal problem. It is perfectly possible that an isolated subband will contain more than one cochannel signal.
The basics of my BOI-detection approach are published in a 2007 conference paper (My Papers [32]). I’ll describe this basic approach, illustrate it with examples relevant to RFSA, and also provide a few extensions of interest, including one that relates to cyclostationary signal processing.
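To make the basic idea concrete before diving into the paper, here is a heavily simplified MATLAB sketch of spectral segmentation. It is not the algorithm from My Papers [32], just a threshold-the-PSD illustration; the FFT length, the threshold, and the data vector x are assumptions.

```matlab
% Heavily simplified band-of-interest sketch: estimate the PSD, compare to
% a crude noise-floor estimate, and call contiguous above-threshold regions
% bands of interest. Requires the Signal Processing Toolbox (pwelch).
% x: complex baseband data vector (assumed already loaded).
Nfft  = 1024;
S     = pwelch(x, hanning(Nfft), Nfft/2, Nfft, 1, 'centered');  % PSD estimate
noise = median(S);               % crude noise-floor estimate
mask  = S > 4*noise;             % roughly 6 dB above the floor

d     = diff([0; mask(:); 0]);   % locate rising and falling edges of the mask
rises = find(d == 1);            % first bin of each band
falls = find(d == -1) - 1;       % last bin of each band
bands = [rises falls];           % each row: [start_bin stop_bin]
```

A real detector also has to handle signals that abut each other in frequency and that switch on and off within the data block, which is where the published approach earns its keep.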
The periodogram is the scaled magnitude-squared finite-time Fourier transform of something. It is as random as its input–it never converges to the power spectrum.
I’ve been reviewing a lot of technical papers lately and I’m noticing that it is becoming common to assert that the limiting form of the periodogram is the power spectral density or that the limiting form of the cyclic periodogram is the spectral correlation function. This isn’t true. These functions do not become, in general, less random (erratic) as the amount of data that is processed increases without limit. On the contrary, they always have large variance. Some form of averaging (temporal or spectral) is needed to permit the periodogram to converge to the power spectrum or the cyclic periodogram to converge to the spectral correlation function (SCF).
In particular, I’ve been seeing things like this:

$$S_x^\alpha(f) = \lim_{T\rightarrow\infty} \frac{1}{T} X_T(f + \alpha/2)\, X_T^*(f - \alpha/2),$$

where $X_T(f)$ is the Fourier transform of $x(t)$ on $[-T/2, T/2]$. In other words, the quantity inside the limit is the usual cyclic periodogram we talk about here on the CSP Blog. See, for example, The Literature [R71], Equation (3).
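A quick way to convince yourself is simulation: for complex white noise with unit variance, the true power spectrum is flat at one, and the periodogram's mean approaches that value as the block length grows, but its standard deviation does not shrink. The block lengths below are arbitrary.

```matlab
% Empirical check that the periodogram does not converge to the PSD.
% For unit-variance complex white noise the true PSD equals 1 everywhere.
for N = [2^10 2^14 2^18]
    x = (randn(1, N) + 1j*randn(1, N)) / sqrt(2);   % unit-variance noise
    I = abs(fft(x)).^2 / N;                         % periodogram
    fprintf('N = %7d: mean = %.3f, std = %.3f\n', N, mean(I), std(I));
end
% The mean hovers near 1, but so does the standard deviation, no matter
% how large N gets; only smoothing or averaging reduces the variance.
```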
Higher-order statistics in the frequency domain for cyclostationary signals. As complicated as it gets at the CSP Blog.
In this post we take a first look at the spectral parameters of higher-order cyclostationarity (HOCS). In previous posts, I have introduced the topic of HOCS and have looked at the temporal parameters, such as cyclic cumulants and cyclic moments. Those temporal parameters have proven useful in modulation classification and parameter estimation settings, and will likely be an important part of my ultimate radio-frequency scene analyzer.
The spectral parameters of HOCS have not proven to be as useful as the temporal parameters unless you include the trivial case where the moment/cumulant order is equal to two. In that case, the spectral parameters reduce to the spectral correlation function, which is extremely useful in CSP (see the TDOA and signal-detection posts for examples).
Time-delay estimation can be used to determine the angle-of-arrival of a signal impinging on two spatially separated sensors. This estimation problem gets hard when there is cochannel interference present.
Let’s discuss an application of cyclostationary signal processing (CSP): time-delay estimation. The idea is that sampled data is available from two antennas (sensors), and there is a common signal component in each data set. The signal component in one data set is the time-delayed or time-advanced version of the component in the other set. This can happen when a plane-wave radio frequency (RF) signal propagates and impinges on the two antennas. In such a case, the RF signal arrives at the sensors with a time difference proportional to the distance between the sensors along the direction of propagation, and so the time-delay estimation is also commonly referred to as time-difference-of-arrival (TDOA) estimation.
Figure 1. Illustration of the geometric relationship between a transmitter and two receivers in the context of time-delay estimation (or time-difference-of-arrival estimation).
Consider the diagram shown in Figure 1. A distant transmitter emits a signal that is well-modeled as a plane wave once it reaches our two receivers. An individual wavefront of the signal arrives at the two sensors at different times.
The line segment AB is perpendicular to the direction of propagation for the RF signal. The angle $\theta$ is called the angle of arrival (AOA). If we can estimate the AOA, we can tell the direction from which the signal arrives, which could be useful in a variety of settings. Since the triangle ABC is a right triangle, we have

$$\cos(\theta) = \frac{\delta}{D},$$

where $D$ is the distance between the two receivers and $\delta$ is the extra distance a wavefront must travel to reach receiver 1 after it strikes receiver 2.

When $\theta = 0$, the wavefronts first strike receiver 2, then must propagate over $D$ meters before striking receiver 1. On the other hand, when $\theta = 90^\circ$, each wavefront strikes the two receivers simultaneously. In the former case, the TDOA is maximum, and in the latter it is zero. The TDOA can be negative too, so that $180$ azimuthal degrees can be determined by estimating the TDOA.

In general, the wavefront must traverse $\delta = D\cos(\theta)$ meters between striking receiver 2 and striking receiver 1. Assuming the speed of propagation is $c$ meters/sec, the TDOA is given by

$$\Delta t = \frac{D\cos(\theta)}{c}.$$
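Given that relation, turning an estimated TDOA into an AOA estimate is a one-line inversion. Here is a minimal MATLAB sketch with assumed, purely illustrative values for the receiver separation, propagation speed, and TDOA.

```matlab
% Convert an estimated TDOA into an AOA estimate via theta = acos(c*TDOA/D).
% All numbers are assumed for illustration.
c    = 3e8;        % propagation speed (m/s)
D    = 30;         % receiver separation (m)
tdoa = 7.5e-8;     % estimated TDOA (s), e.g., from a cross-correlation peak

arg   = c * tdoa / D;            % equals cos(theta) under the plane-wave model
arg   = min(max(arg, -1), 1);    % guard against noise pushing |arg| beyond 1
theta = acosd(arg);              % estimated AOA in degrees
fprintf('Estimated AOA: %.1f degrees\n', theta);
```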
In this post I’ll review several methods of TDOA estimation, some of which employ CSP and some of which do not. We’ll see some of the advantages and disadvantages of the various classes of methods through inspection, simulation, and application to captured data. Consider this post as a starting point to a study or development effort rather than as a definitive performance characterization.
Use this post to help check the accuracy of your second-order CSP estimators.
Update September 2022: New section on the non-conjugate and conjugate coherence function.
***
In this post I provide some tools for the do-it-yourself CSP practitioner. One of the goals of this blog is to help new CSP researchers and students to write their own estimators and algorithms. This post contains some spectral correlation function and cyclic autocorrelation function estimates and numerically evaluated formulas that can be compared to those produced by anybody’s code.
The signal of interest is, of course, our rectangular-pulse BPSK signal with symbol rate $0.1$ (normalized frequency units) and carrier offset $0.05$. You can download a MATLAB script for creating such a signal here.
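The linked script is the reference version; for orientation, here is a minimal noise-free sketch of how such a signal can be generated. The symbol count is an arbitrary choice.

```matlab
% Minimal sketch of a rectangular-pulse BPSK signal with symbol rate 0.1
% and carrier offset 0.05 (normalized units). The downloadable script is
% the reference implementation; this is only an illustration.
Nsym = 8192;                         % number of symbols (arbitrary)
T0   = 10;                           % samples per symbol (symbol rate 1/10)
fc   = 0.05;                         % carrier frequency offset
bits = 2*round(rand(1, Nsym)) - 1;   % +/- 1 symbol sequence
bb   = kron(bits, ones(1, T0));      % rectangular pulse shaping
n    = 0:(Nsym*T0 - 1);
x    = bb .* exp(1j*2*pi*fc*n);      % complex BPSK signal at offset fc
% Add complex white Gaussian noise to x to set a desired SNR if needed.
```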
The formula for the SCF for a textbook BPSK signal is published in several places (The Literature [R47], My Papers [6]) and depends mainly on the Fourier transform of the pulse function used by the textbook signal.
We’ll compare the numerically evaluated spectral correlation formula with estimates produced by my version of the frequency-smoothing method (FSM). The FSM estimates and the theoretical functions are contained in a MATLAB mat file here. (I had to change the extension of the mat file from .mat to .doc to allow posting it to WordPress–change it back after downloading. It is a zipped .mat file as of 12/2/22.) In all the results shown here and that you can download, the same data-block length and FSM smoothing width are used throughout, along with a rectangular smoothing window. For all cycle frequencies except zero (non-conjugate), a zero-padding factor of two is used in the FSM.
For the cyclic autocorrelation, we provide estimates using two methods: inverse Fourier transformation of the spectral correlation estimate and direct averaging of the second-order lag product in the time domain.
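For concreteness, here is a minimal sketch of the second (direct time-domain) method for the non-conjugate cyclic autocorrelation at a single cycle frequency. It uses the asymmetric-lag convention, and it is not the code that produced the downloadable results; the cycle frequency, lag range, and the data vector x are assumptions.

```matlab
% Direct time-domain estimate of the non-conjugate cyclic autocorrelation
% at one cycle frequency (asymmetric-lag convention). Minimal sketch only.
alpha = 0.1;                 % cycle frequency (e.g., the BPSK symbol rate)
lags  = -50:50;              % lags in samples
x     = x(:).';              % x: complex data vector (assumed already loaded)
N     = length(x);
n     = 0:N-1;
caf   = zeros(size(lags));
for k = 1:length(lags)
    tau    = lags(k);
    idx    = max(1, 1-tau) : min(N, N-tau);   % indices where both samples exist
    prod2  = x(idx + tau) .* conj(x(idx));    % second-order lag product
    caf(k) = (1/N) * sum(prod2 .* exp(-1j*2*pi*alpha*n(idx)));
end
% Inverse Fourier transformation of an SCF estimate at the same cycle
% frequency should yield a closely matching function of lag.
```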
Pictures are worth N words, and M equations, where N and M are large integers.
In this post I provide plots of the spectral correlation for a variety of simulated textbook signals and several captured communication signals. The plots show the variety of cycle-frequency patterns that arise from the disparate approaches to digital communication signaling. The distinguishability of these patterns, combined with the inability to distinguish based on the power spectrum, leads to a powerful set of classification (modulation recognition) features (My Papers [16, 25, 26, 28]).
In all cases, the cycle frequencies are blindly estimated by the strip spectral correlation analyzer (The Literature [R3, R4]), and the estimates are then used by the FSM to compute the spectral correlation function. MATLAB is used to plot the magnitude of the spectral correlation and conjugate spectral correlation, as specified by the determined non-conjugate and conjugate cycle frequencies.
There are three categories of signal types in this gallery: textbook signals, captured signals, and feature-rich signals. The latter comprises some captured signals (e.g., LTE) and some simulated radar signals. For the first two signal categories, the three-dimensional surface plots I’ve been using will suffice for illustrating the cycle-frequency patterns and the behavior of the spectral correlation function over frequency. But for the last category, the number of cycle frequencies is so large that the three-dimensional surface is difficult to interpret–it is a visual mess. For these signals, I’ll plot the maximum spectral correlation magnitude over spectral frequency versus the detected cycle frequency (as in this post).
The non-blind spectral-correlation estimator called the FSM is favored when one wishes to have fine control over frequency resolution and can tolerate long FFTs.
In this post I describe a basic estimator for the spectral correlation function (SCF): the frequency-smoothing method (FSM). The FSM is a way to estimate the SCF for a single value of cycle frequency. Recall from the basic theory of the cyclic autocorrelation and SCF that the SCF is obtained by infinite-time averaging of the cyclic periodogram or by infinitesimal-resolution frequency averaging of the cyclic periodogram. The FSM is merely a finite-time/finite-resolution approximation to the SCF definition.
One place the FSM can be found is in (My Papers [6]), where I introduce time-smoothed and frequency-smoothed higher-order cyclic periodograms as estimators of the cyclic polyspectrum. When the cyclic polyspectrum order is set to two, the cyclic polyspectrum becomes the spectral correlation function, so the FSM discussed in this post is just a special case of the more general estimator in [6, Section VI.B].
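To connect the definition to something runnable, here is a bare-bones MATLAB function implementing the idea for a single cycle frequency, with a rectangular smoothing window, no zero-padding, and circular handling of the band edges. It is a sketch for orientation, not the estimator behind my published results; save it as fsm_scf.m if you want to experiment.

```matlab
function Sx = fsm_scf(x, alpha, M)
% FSM_SCF  Bare-bones frequency-smoothing-method sketch for one cycle frequency.
%   x     : complex data vector
%   alpha : cycle frequency (normalized Hz)
%   M     : smoothing-window width in frequency bins (odd integer)
x  = x(:).';
N  = length(x);
X  = fftshift(fft(x));                    % spectrum on [-1/2, 1/2)
k  = round(alpha/2 * N);                  % bin shift corresponding to +/- alpha/2
Ic = (1/N) * circshift(X, [0 -k]) .* conj(circshift(X, [0 k]));  % cyclic periodogram
g  = ones(1, M) / M;                      % rectangular smoothing window
Sx = conv(Ic, g, 'same');                 % frequency-smoothed SCF estimate
end
```

For the non-conjugate case with the cycle frequency set to zero, this reduces to a frequency-smoothed periodogram, that is, an ordinary power spectrum estimate.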