We continue our basic signal-processing posts with one on the moving-average, or smoothing, filter. The moving-average filter is a linear time-invariant operation that is widely used to mitigate the effects of additive noise and other random disturbances on a presumably well-behaved signal. For example, a physical phenomenon may be producing a signal that increases monotonically over time, but our measurement of that signal is corrupted by noise, interference, or flaws in the measurement process. The moving-average filter can reveal the sought-after trend by suppressing the effects of the unwanted disturbances.
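Here is a minimal numpy sketch of the idea; the window length, noise level, and trend slope are illustrative assumptions of mine, not numbers from the post:

```python
import numpy as np

# Smooth a noisy monotonic trend with a length-M moving-average filter.
rng = np.random.default_rng(0)
N, M = 1000, 51                        # record length and window length (assumptions)
t = np.arange(N)
trend = 0.01 * t                       # monotonically increasing signal
noisy = trend + rng.normal(0, 1.0, N)  # additive white Gaussian noise

# A moving average is convolution with a rectangular pulse of height 1/M.
h = np.ones(M) / M
smoothed = np.convolve(noisy, h, mode='same')
```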
Why does zero-padding help in various estimators of the spectral correlation and spectral coherence functions?
Updates to the exchange: May 7, 2021 and May 14, 2021.
Reader Clint posed a great question about zero-padding in the frequency-smoothing method (FSM) of spectral correlation function estimation. The question prompted some pondering on my part, and I went ahead and did some experiments with the FSM to illustrate my response to Clint. The exchange with Clint (ongoing!) was deep and detailed enough that I thought it deserved to be seen by other CSP-Blog readers. One of the problems with developing material, or refining explanations, in the Comments sections of the CSP Blog is that these sections are not nearly as visible in the navigation tools featured on the Blog as are the Posts and Pages.
Spectral correlation surfaces for real-valued and complex-valued versions of the same signal look quite different.
In the real world, the electromagnetic field is a multi-dimensional time-varying real-valued function (volts/meter or newtons/coulomb). But in mathematical physics and signal processing, we often use complex-valued representations of the field, or of quantities derived from it, to facilitate our mathematics or make the signal processing more compact and efficient.
So throughout the CSP Blog I’ve focused almost exclusively on complex-valued signals and data. However, there is a considerable older literature that uses real-valued signals, such as The Literature [R1, R151]. You can use either real-valued or complex-valued signal representations and data, as you prefer, but there are advantages and disadvantages to each choice. Moreover, an author might not be perfectly clear about which one is used, especially when presenting a spectral correlation surface (as opposed to a sequence of equations, where things are usually clearer).
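As a concrete, hedged illustration of one common route from a real-valued signal to a complex-valued representation, here is a small Python sketch that forms the analytic signal via the Hilbert transform and then downconverts to complex baseband; the sampling rate and carrier frequency are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert

fs, fc, N = 1000.0, 100.0, 4096           # rate, carrier, length (assumptions)
t = np.arange(N) / fs
x_real = np.cos(2 * np.pi * fc * t + 0.3)  # real-valued passband signal

x_analytic = hilbert(x_real)               # analytic signal: one-sided spectrum
x_baseband = x_analytic * np.exp(-2j * np.pi * fc * t)  # complex envelope
```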
This installment of the Signal Processing Toolkit series of CSP Blog posts deals with the ubiquitous signal-processing operation known as convolution. We originally came across it in the context of linear time-invariant systems. In this post, we focus on the mechanics of computing convolutions and discuss their utility in signal processing and CSP.
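To make the mechanics concrete, here is a small Python sketch that computes the discrete convolution sum $y[n] = \sum_k x[k]\, h[n-k]$ with an explicit double loop and checks the result against numpy's built-in routine:

```python
import numpy as np

def convolve_direct(x, h):
    # y[n] = sum over k of x[k] * h[n-k], the definition of discrete convolution.
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0])
assert np.allclose(convolve_direct(x, h), np.convolve(x, h))
```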
This installment of the Signal Processing Toolkit shows how the Fourier series arises from a consideration of representing arbitrary signals as vectors in a signal space. We also provide several examples of Fourier series calculations, interpret the Fourier series, and discuss its relevance to cyclostationary signal processing.
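As a quick illustration of the signal-space view, this Python sketch computes Fourier-series coefficients as inner products of the signal with the complex-exponential basis; the square-wave example is my choice, not one from the post:

```python
import numpy as np

T, N = 1.0, 4096
t = np.linspace(0, T, N, endpoint=False)
x = np.sign(np.sin(2 * np.pi * t / T))   # square wave with period T

def fourier_coeff(x, t, T, k):
    # c_k = (1/T) * integral of x(t) exp(-i 2 pi k t / T) over one period,
    # approximated here by a sample mean (inner product with the basis).
    basis = np.exp(-2j * np.pi * k * t / T)
    return np.mean(x * basis)

# Odd harmonics of a square wave have magnitude 2/(pi*k); even ones vanish.
for k in range(1, 6):
    print(k, abs(fourier_coeff(x, t, T, k)))
```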
And I still don’t understand how a random variable with infinite variance can be a good model for anything physical. So there.
I’ve seen several published and pre-published (arXiv.org) technical papers over the past couple of years on the topic of cyclic correntropy (The Literature [R123-R127]). I first criticized such a paper ([R123]) here, but the substance of that review was about my problems with the presented mathematics, not impulsive noise and its effects on CSP. Since the papers keep coming, apparently, I’m going to put down some thoughts on impulsive noise and some evidence regarding simple means of mitigation in the context of CSP. Preview: I don’t think we need to go to the trouble of investigating cyclic correntropy as a means of salvaging CSP from the evil clutches of impulsive noise.
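For concreteness, here is a minimal sketch of the kind of simple mitigation I have in mind: soft-limit samples whose magnitudes greatly exceed a robust noise-scale estimate before applying any CSP estimator. The threshold factor and the MAD-based scale estimate are illustrative choices on my part, not values from the papers:

```python
import numpy as np

def clip_impulses(x, factor=5.0):
    # Robust scale estimate via the median absolute value (MAD-style);
    # the 0.6745 constant makes it consistent for Gaussian data.
    scale = np.median(np.abs(x)) / 0.6745
    limit = factor * scale
    mag = np.abs(x)
    out = x.copy()
    hot = mag > limit
    out[hot] = x[hot] * (limit / mag[hot])  # soft-limit impulses to the threshold
    return out
```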
What modest academic success I’ve had in the area of cyclostationary signal theory and cyclostationary signal processing is largely due to the patient mentorship of my doctoral adviser, William (Bill) Gardner, and the fact that I was able to build on an excellent foundation put in place by Gardner, his advisor Lewis Franks, and key Gardner students such as William (Bill) Brown.
Learning machine learning for radio-frequency signal-processing problems, continued.
I continue with my foray into machine learning (ML) by considering whether we can use widely available ML tools to create a machine that can output accurate power spectrum estimates. Previously we considered the perhaps simpler problem of learning the Fourier transform. See here and here.
Along the way I’ll expose my ignorance of the intricacies of machine learning and my apparent inability to find the correct hyperparameter settings for any problem I look at. But, that’s where you come in, dear reader. Let me know what to do!
An alternative to the strip spectral correlation analyzer.
Let’s look at another spectral correlation function estimator: the FFT Accumulation Method (FAM). This estimator is in the time-smoothing category, is exhaustive in that it is designed to compute estimates of the spectral correlation function over its entire principal domain, and is efficient, making it a competitor to the Strip Spectral Correlation Analyzer (SSCA). I implemented my version of the FAM using the paper by Roberts et al. (The Literature [R4]). If you follow the equations closely, you can successfully implement the estimator from that paper. The tricky part, as with the SSCA, is correctly associating the outputs of the coded equations with their proper spectral frequency and cycle frequency values.
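To give a flavor of the estimator's structure without reproducing [R4], here is a heavily simplified Python sketch of the FAM's first stage only: a sliding, windowed, length-$N_p$ FFT channelizer with downconversion of each channel to baseband. The parameter values are illustrative, and the remaining FAM steps (channel-pair products and the second FFT over block index) are omitted:

```python
import numpy as np

def channelize(x, Np=64, L=16):
    # Overlapping, windowed, length-Np FFTs taken every L samples.
    w = np.hamming(Np)
    n_blocks = (len(x) - Np) // L + 1
    X = np.zeros((n_blocks, Np), dtype=complex)
    for m in range(n_blocks):
        X[m] = np.fft.fftshift(np.fft.fft(w * x[m * L : m * L + Np]))
    # Downconvert: channel k is centered at k/Np cycles/sample; block m
    # starts at sample m*L, so remove the carrier phase accumulated there.
    k = np.arange(-Np // 2, Np // 2)
    m_idx = np.arange(n_blocks)[:, None]
    X *= np.exp(-2j * np.pi * k * (m_idx * L) / Np)
    return X
```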
Reconsidering my first attempt at teaching a machine the Fourier transform with the help of a CSP Blog reader. Also, the Fourier transform is viewed by Machine Learners as an input data representation, and that representation matters.
I first considered whether a machine (neural network) could learn the (64-point, complex-valued) Fourier transform in this post. I used MATLAB’s Neural Network Toolbox and I failed to get good learning results because I did not properly set the machine’s hyperparameters. A kind reader named Vito Dantona provided a comment to that original post that contained good hyperparameter selections, and I’m going to report the new results here in this post.
Since the Fourier transform is linear, the machine should be set up to do linear processing; it can’t just figure that out for itself. Once I used Vito’s suggested hyperparameters to force the machine to be linear, the results became much better.
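To see why a linear machine suffices, here is a small numpy sketch (my own illustration, not Vito's setup): because the 64-point DFT is a linear map, a single linear layer can represent it exactly. Here the layer is fit in closed form by least squares rather than by gradient descent, with real and imaginary parts stacked so everything stays real-valued:

```python
import numpy as np

rng = np.random.default_rng(1)
Nfft, n_train = 64, 5000

# Random complex training inputs and their exact DFTs as targets.
X = rng.normal(size=(n_train, Nfft)) + 1j * rng.normal(size=(n_train, Nfft))
Y = np.fft.fft(X, axis=1)

Xr = np.hstack([X.real, X.imag])   # (n_train, 128) real-valued features
Yr = np.hstack([Y.real, Y.imag])   # (n_train, 128) real-valued targets

W, *_ = np.linalg.lstsq(Xr, Yr, rcond=None)  # the learned linear map
print(np.max(np.abs(Xr @ W - Yr)))           # residual ~ machine precision
```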
The costs strongly depend on whether you have prior cycle-frequency information or not.
Let’s look at the computational costs for spectral-correlation analysis using the three main estimators I’ve previously described on the CSP Blog: the frequency-smoothing method (FSM), the time-smoothing method (TSM), and the strip spectral correlation analyzer (SSCA).
We’ll see that the FSM and TSM are the low-cost options when estimating the spectral correlation function for a few cycle frequencies and that the SSCA is the low-cost option when estimating the spectral correlation function for many cycle frequencies. That is, the TSM and FSM are good options for directed analysis using prior information (values of cycle frequencies) and the SSCA is a good option for exhaustive blind analysis, for which there is no prior information available.
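Here is a rough back-of-the-envelope cost model in Python that captures the qualitative trade-off. The cost formulas are my own crude approximations (counting complex multiplies only), not the detailed accounting in the post:

```python
import numpy as np

def fsm_cost(N, g, n_alpha):
    # Assumption: one length-N FFT shared across cycle frequencies, then a
    # cyclic periodogram product plus a length-g smoothing pass per alpha.
    return N * np.log2(N) + n_alpha * (N + N * g)

def ssca_cost(N, Np):
    # Assumption: channelizer plus an N-point FFT per strip; the key point
    # is that this cost is fixed no matter how many alphas you want.
    return N * Np * (np.log2(Np) + np.log2(N) + 1)

N, g, Np = 2**20, 101, 64
for n_alpha in (1, 10, 10000):
    print(n_alpha, fsm_cost(N, g, n_alpha) / ssca_cost(N, Np))
```

The ratio is far below one for a single known cycle frequency and far above one for exhaustive blind search, consistent with the stated conclusion.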
Unlike conventional spectrum analysis for stationary signals, which involves only two kinds of resolution, CSP has three kinds of resolution that must be considered in all CSP applications.
In this post, we look at the ability of various CSP estimators to distinguish cycle frequencies, temporal changes in cyclostationarity, and spectral features. These abilities are quantified by the resolution properties of CSP estimators.
Then the temporal resolution of the estimate is approximately $\Delta t = T$, where $T$ is the length of the processed data record, the cycle-frequency resolution is about $\Delta \alpha = 1/T$, and the spectral resolution $\Delta f$ depends strongly on the particular estimator and its parameters. The resolution product $\Delta t \Delta f$ was discussed in this post. The fundamental result for the resolution product is that it must be very much larger than unity, $\Delta t \Delta f \gg 1$, in order to obtain an SCF estimate with low variance.
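A small worked example under these approximations; the parameter values are illustrative assumptions of mine:

```python
# Normalized sampling rate, record length, and FSM smoothing width (assumptions).
fs = 1.0
N = 65536
F = 0.02

T = N / fs
delta_t = T            # temporal resolution ~ record length
delta_alpha = 1.0 / T  # cycle-frequency resolution ~ 1/T
delta_f = F            # spectral resolution ~ smoothing width (for the FSM)

print(delta_t * delta_f)  # resolution product; want >> 1 for low variance (~1311 here)
```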
How do we efficiently estimate higher-order cyclic cumulants? The basic answer is: first estimate the cyclic moments, then combine them using the moments-to-cumulants formula.
In this post we discuss ways of estimating $n$-th order cyclic temporal moment and cumulant functions. Recall that for $n = 2$, cyclic moments and cyclic cumulants are usually identical. They differ when the signal contains one or more finite-strength additive sine-wave components. In the common case when such components are absent (as in our recurring numerical example involving rectangular-pulse BPSK), they are equal and they are also equal to the conventional cyclic autocorrelation function $R_x^\alpha(\tau)$ provided the delay vector is chosen appropriately. That is, the two-dimensional delay vector $\boldsymbol{\tau} = [\tau_1\ \ \tau_2]$ is set equal to $[\tau/2\ \ -\tau/2]$.
The more interesting case is when the order $n$ is greater than two. Most communication signal models possess odd-order moments and cumulants that are identically zero, so the first non-trivial order greater than two is four. Our estimation task is to estimate $n$-th order temporal moment and cumulant functions for $n \geq 4$ using a sampled-data record of length $T$.
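Here is a minimal numpy sketch of the first step: a cyclic temporal moment estimator that averages an $n$-fold lag product against $e^{-i 2 \pi \alpha t}$. The moment-to-cumulant combining step is omitted, and the circular-shift treatment of delays is a simplification of mine:

```python
import numpy as np

def cyclic_moment(x, alpha, taus, conj_flags, fs=1.0):
    """Estimate the n-th order cyclic temporal moment of x.

    taus: integer sample delays tau_1..tau_n; conj_flags: True => conjugate
    that factor. n is implied by len(taus).
    """
    N = len(x)
    t = np.arange(N) / fs
    prod = np.ones(N, dtype=complex)
    for tau, c in zip(taus, conj_flags):
        xs = np.roll(x, -tau)              # x(t + tau), circular-shift approximation
        prod *= np.conj(xs) if c else xs
    # Time-average the lag product against the cycle-frequency sine wave.
    return np.mean(prod * np.exp(-2j * np.pi * alpha * t))
```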
Radio-frequency scene analysis is much more complex than modulation recognition. A good first step is to blindly identify the frequency intervals for which significant non-noise energy exists.
In this post, I discuss a signal-processing algorithm that has almost nothing to do with cyclostationary signal processing (CSP). Almost. The topic is automatic spectral segmentation, which I also call band-of-interest (BOI) detection. When attempting to perform automatic radio-frequency scene analysis (RFSA), we may be confronted with a data block that contains multiple signals in a number of distinct frequency subbands. Moreover, these signals may be turning on and off within the data block. To apply our cyclostationary signal processing tools effectively, we would like to isolate these signals in time and frequency to the greatest extent possible using linear time-invariant filtering (for separating in the frequency dimension) and time-gating (for separating in the time dimension). Then the isolated signal components can be processed serially using CSP.
It is very important to remember that even perfect spectral and temporal segmentation will not solve the cochannel-signal problem. It is perfectly possible that an isolated subband will contain more than one cochannel signal.
The basics of my BOI-detection approach are published in a 2007 conference paper (My Papers [32]). I’ll describe this basic approach, illustrate it with examples relevant to RFSA, and also provide a few extensions of interest, including one that relates to cyclostationary signal processing.
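To convey the spirit of BOI detection (this is a generic sketch, not the algorithm of My Papers [32]), here is a small Python example that estimates the PSD, thresholds it relative to a crude noise-floor estimate, and merges contiguous above-threshold bins into candidate bands; the threshold value is an illustrative assumption:

```python
import numpy as np
from scipy.signal import welch

def detect_bois(x, fs=1.0, thresh_db=6.0):
    # Two-sided PSD estimate, shifted so the frequency axis is monotonic.
    f, Pxx = welch(x, fs=fs, nperseg=1024, return_onesided=False)
    f, Pxx = np.fft.fftshift(f), np.fft.fftshift(Pxx)
    P_db = 10 * np.log10(Pxx)
    floor_db = np.median(P_db)            # crude noise-floor estimate
    hot = P_db > floor_db + thresh_db
    # Merge contiguous above-threshold bins into (f_start, f_stop) intervals.
    bands, start = [], None
    for i, h in enumerate(hot):
        if h and start is None:
            start = i
        elif not h and start is not None:
            bands.append((f[start], f[i - 1]))
            start = None
    if start is not None:
        bands.append((f[start], f[-1]))
    return bands
```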
We are all susceptible to using bad mathematics to get us where we want to go. Here is an example.
I recently came across the 2014 paper in the title of this post. I mentioned it briefly in the post on the periodogram. But I’m going to talk about it a bit more here because this is the kind of paper that makes learning about cyclostationarity harder, which eventually leads to the need for something like the CSP Blog as a corrective.
The idea behind the paper is that it would be nice to avoid the need for prior knowledge of cycle frequencies when using cycle detectors or the like. If you could just compute the entire spectral correlation function, then collapse it by integrating (summing) over spectral frequency $f$, then you’d have a one-dimensional function of cycle frequency $\alpha$, and you could then process that function inexpensively to perform detection and classification tasks.
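In code, the proposed collapse is almost trivially simple, which is part of its appeal. The surface below is just a placeholder array; whether the collapsed function is a statistically sound detection statistic is exactly what is at issue:

```python
import numpy as np

# Placeholder spectral correlation surface S[alpha_index, f_index].
S = np.abs(np.random.default_rng(2).normal(size=(128, 256)))

# Collapse over spectral frequency f: one number per cycle frequency alpha.
collapsed = S.sum(axis=1)
```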
Higher-order statistics in the frequency domain for cyclostationary signals. As complicated as it gets at the CSP Blog.
In this post we take a first look at the spectral parameters of higher-order cyclostationarity (HOCS). In previous posts, I have introduced the topic of HOCS and have looked at the temporal parameters, such as cyclic cumulants and cyclic moments. Those temporal parameters have proven useful in modulation classification and parameter estimation settings, and will likely be an important part of my ultimate radio-frequency scene analyzer.
The spectral parameters of HOCS have not proven to be as useful as the temporal parameters unless you include the trivial case where the moment/cumulant order is equal to two. In that case, the spectral parameters reduce to the spectral correlation function, which is extremely useful in CSP (see the TDOA and signal-detection posts for examples).
I came across a paper by Cohen and Eldar, researchers at the Technion in Israel. You can get the paper on the arXiv site here. The title is “Sub-Nyquist Cyclostationary Detection for Cognitive Radio,” and the setting is spectrum sensing for cognitive radio. I have a question about the paper that I’ll ask below.
How does the cyclostationarity of a signal change when it is subjected to common signal-processing operations like addition, multiplication, and convolution?
It is often useful to know how a signal-processing operation affects the probabilistic parameters of a random signal. For example, if I know the power spectral density (PSD) $S_x(f)$ of some signal $x(t)$, and I filter it using a linear time-invariant transformation with impulse-response function $h(t)$, producing the output $y(t)$, then what is the PSD $S_y(f)$ of $y(t)$? This input-output relationship is well known and quite useful. The relationship is

$$S_y(f) = |H(f)|^2 S_x(f), \tag{1}$$

where $H(f)$ is the filter transfer function: the Fourier transform of the impulse response $h(t)$.
Because the mathematical models of real-world communication signals can be constructed by subjecting idealized textbook signals to various signal-processing operations, such as filtering, it is of interest to us here at the CSP Blog to know how the spectral correlation function of the output of a signal processor is related to the spectral correlation function for the input. Similarly, we’d like to know such input-output relationships for the cyclic cumulants and the cyclic polyspectra.
Another benefit of knowing these CSP input-output relationships is that they tend to build insight into the meaning of the probabilistic parameters. For example, in the PSD input-output relationship (1), we already know that the transfer function evaluated at $f = f_0$ scales the input frequency component at $f_0$ by the complex number $H(f_0)$. So it makes sense that the PSD at $f_0$ is scaled by the squared magnitude $|H(f_0)|^2$. If the filter transfer function is zero at $f_0$, then the density of averaged power at $f_0$ should vanish too.
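Relationship (1) is easy to verify numerically. Here is a short sketch that filters white noise with an arbitrary FIR filter of my choosing (an assumption, not a filter from the post) and compares the measured output PSD to $|H(f)|^2$ times the input PSD:

```python
import numpy as np
from scipy.signal import welch, lfilter, freqz

rng = np.random.default_rng(3)
fs, N = 1.0, 2**18
x = rng.normal(size=N)           # white input, so S_x(f) is flat

b = np.array([1.0, -0.5, 0.25])  # an arbitrary illustrative FIR filter
y = lfilter(b, [1.0], x)

f, Sx = welch(x, fs=fs, nperseg=4096)
_, Sy = welch(y, fs=fs, nperseg=4096)
_, H = freqz(b, worN=f, fs=fs)   # transfer function at the Welch frequencies

# Sy should track |H(f)|^2 * Sx(f) up to estimation error.
print(np.max(np.abs(Sy - np.abs(H)**2 * Sx)) / Sy.max())
```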
So, let’s look at this kind of relationship for CSP parameters. All of these results can be found, usually with more mathematical detail, in My Papers [6, 13].