SPTK: The Analytic Signal and Complex Envelope

In signal processing, and in CSP, we often have to convert real-valued data into complex-valued data and vice versa. Real-valued data is what we acquire from the real world, but complex-valued data is often easier to process because it can be represented using a substantially lower sampling rate.

Previous SPTK Post: The Moving-Average Filter    Next SPTK Post: Random Variables

In this Signal-Processing Toolkit post, we review the signal-processing steps needed to convert a real-valued sampled-data bandpass signal to a complex-valued sampled-data lowpass signal. The former can arise from sampling a signal that has been downconverted from its radio-frequency spectral band to a much lower intermediate-frequency spectral band. So we want to convert such data to complex samples at zero frequency (‘complex baseband’) so we can decimate them and thereby match the sample rate to the signal’s baseband bandwidth. Subsequent signal-processing algorithms (including CSP of course) can then operate on the relatively low-rate complex-envelope data, which is beneficial because the same number of seconds of data can be processed using fewer samples, and computational cost is determined by the number of samples, not the number of seconds.
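As a rough illustration of that processing chain, here is a minimal Python sketch (not code from the post; the intermediate frequency, sample rate, bandwidth, decimation factor, and the use of scipy's hilbert routine are all assumptions made just for this example):

import numpy as np
from scipy.signal import hilbert, firwin, lfilter

fs = 40e3          # original sampling rate in Hz (assumed)
f_if = 5e3         # intermediate frequency in Hz (assumed)
bw = 2e3           # approximate signal bandwidth in Hz (assumed)
D = 8              # decimation factor, chosen so fs/D still exceeds the baseband bandwidth

t = np.arange(0, 0.1, 1/fs)
msg = np.cos(2*np.pi*300*t)                         # toy message signal
x_real = msg * np.cos(2*np.pi*f_if*t)               # real-valued bandpass (IF) data

x_analytic = hilbert(x_real)                        # analytic signal: x + j*H{x}
x_shifted = x_analytic * np.exp(-2j*np.pi*f_if*t)   # shift the IF band down to 0 Hz

h = firwin(129, bw / (fs/2))                        # lowpass filter to isolate the signal band
x_filtered = lfilter(h, 1.0, x_shifted)             # FIR filtering works on complex data
x_envelope = x_filtered[::D]                        # complex envelope at the reduced rate fs/D

The final array x_envelope is the low-rate complex-envelope data that downstream algorithms would operate on.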

Continue reading “SPTK: The Analytic Signal and Complex Envelope”

SPTK: The Moving-Average Filter

A simple and useful example of a linear time-invariant system. Good for smoothing and discovering trends by averaging away noise.

Previous SPTK Post: Ideal Filters             Next SPTK Post: The Complex Envelope

We continue our basic signal-processing posts with one on the moving-average, or smoothing, filter. The moving-average filter is a linear time-invariant operation that is widely used to mitigate the effects of additive noise and other random disturbances on a presumably well-behaved signal. For example, a physical phenomenon may be producing a signal that increases monotonically over time, but our measurement of that signal is corrupted by noise, interference, or flaws in the measurement process. The moving-average filter can reveal the sought-after trend by suppressing the effects of the unwanted disturbances.
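To make the trend-recovery idea concrete, here is a minimal Python sketch (my own illustration, not code from the post; the ramp, noise level, and window length are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
n = np.arange(1000)
trend = 0.01 * n                                   # monotonically increasing underlying signal
x = trend + rng.normal(scale=1.0, size=n.size)     # noisy measurement of that signal

N = 51                                             # moving-average window length (odd for symmetry)
h = np.ones(N) / N                                 # impulse response of the moving-average filter
y = np.convolve(x, h, mode='same')                 # smoothed output that reveals the trend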

Continue reading “SPTK: The Moving-Average Filter”

SPTK: Ideal Filters

Ideal filters have rectangular or unit-step-like transfer functions and so are not physical. But they permit much insight into the analysis and design of real-world linear systems.

Previous SPTK Post: Convolution       Next SPTK Post: The Moving-Average Filter

We continue with our non-CSP signal-processing tool-kit series with this post on ideal filtering. Ideal filters are those filters with transfer functions that are rectangular, step-function-like, or combinations of rectangles and step functions.
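One quick way to see an ideal filter in action is to apply a rectangular transfer function directly to the DFT of a two-tone signal, as in the following Python sketch (an illustration of mine, not from the post; the cutoff and tone frequencies are arbitrary, and the circular nature of the DFT is glossed over):

import numpy as np

fs = 1000.0
t = np.arange(0, 1, 1/fs)
x = np.sin(2*np.pi*50*t) + np.sin(2*np.pi*200*t)   # 50-Hz wanted tone plus 200-Hz unwanted tone

X = np.fft.fft(x)
f = np.fft.fftfreq(x.size, 1/fs)
H = (np.abs(f) <= 100.0).astype(float)             # ideal rectangular lowpass, cutoff 100 Hz
y = np.fft.ifft(X * H).real                        # output retains only the 50-Hz component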

Continue reading “SPTK: Ideal Filters”

Spectral Correlation and Cyclic Correlation Plots for Real-Valued Signals

Spectral correlation surfaces for real-valued and complex-valued versions of the same signal look quite different.

In the real world, the electromagnetic field is a multi-dimensional time-varying real-valued function (volts/meter or newtons/coulomb). But in mathematical physics and signal processing, we often use complex-valued representations of the field, or of quantities derived from it, to facilitate our mathematics or make the signal processing more compact and efficient.

So throughout the CSP Blog I’ve focused almost exclusively on complex-valued signals and data. However, there is a considerable older literature that uses real-valued signals, such as The Literature [R1, R151]. You can use either real-valued or complex-valued signal representations and data, as you prefer, but there are advantages and disadvantages to each choice. Moreover, an author might not be perfectly clear about which one is used, especially when presenting a spectral correlation surface (as opposed to a sequence of equations, where things are often more clear).

Continue reading “Spectral Correlation and Cyclic Correlation Plots for Real-Valued Signals”

SPTK: Convolution and the Convolution Theorem

Convolution is an essential element in everyone’s signal-processing toolkit. We’ll look at it in detail in this post.

Previous SPTK Post: Interconnection of Linear Systems      Next SPTK Post: Ideal Filters

This installment of the Signal Processing Toolkit series of CSP Blog posts deals with the ubiquitous signal-processing operation known as convolution. We originally came across it in the context of linear time-invariant systems. In this post, we focus on the mechanics of computing convolutions and discuss their utility in signal processing and CSP.
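Here is a minimal numerical check of the convolution theorem (an illustration, not code from the post; the signal lengths are arbitrary): linear convolution in the time domain matches the inverse FFT of the product of zero-padded FFTs.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=64)
h = rng.normal(size=16)

y_time = np.convolve(x, h)                         # direct linear convolution, length 64 + 16 - 1

L = x.size + h.size - 1                            # zero-pad so circular convolution equals linear
y_freq = np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)).real

print(np.allclose(y_time, y_freq))                 # True: the two computations agree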

Continue reading “SPTK: Convolution and the Convolution Theorem”

SPTK: Frequency Response of LTI Systems

The frequency response of a filter tells you how it scales each and every input sine wave or spectral component.

Previous SPTK Post: LTI Systems             Next SPTK Post: Interconnection of LTI Systems

We continue our progression of Signal-Processing ToolKit posts by looking at the frequency-domain behavior of linear time-invariant (LTI) systems. In the previous post, we established that the time-domain output of an LTI system is completely determined by the input and by the response of the system to an impulse input applied at time zero. This response is called the impulse response and is typically denoted by h(t).
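To connect the impulse response to the frequency response numerically, consider this minimal Python sketch (my own illustration, not from the post; the filter, test frequency, and use of scipy.signal.freqz are assumptions for the example): a sine wave at frequency f0 emerges from the filter scaled by |H(f0)| and shifted by the phase of H(f0).

import numpy as np
from scipy.signal import firwin, lfilter, freqz

fs = 1000.0
h = firwin(64, 0.2)                      # some LTI system: a lowpass FIR with cutoff near 100 Hz
f0 = 50.0                                # test frequency in Hz (inside the passband)

w, H = freqz(h, worN=[f0], fs=fs)        # frequency response evaluated at f0
t = np.arange(0, 1, 1/fs)
x = np.cos(2*np.pi*f0*t)
y = lfilter(h, 1.0, x)                   # after the startup transient, y is approximately
                                         # |H[0]| * cos(2*pi*f0*t + angle(H[0]))
print(np.abs(H[0]), np.angle(H[0]))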

Continue reading “SPTK: Frequency Response of LTI Systems”

SPTK: Linear Time-Invariant Systems

LTI systems, or filters, are everywhere in signal processing. They allow us to adjust the amplitudes and phases of spectral components of the input.

Previous SPTK Post: The Fourier Transform         Next SPTK Post: Frequency Response

In this Signal Processing Toolkit post, we’ll take a first look at arguably the most important class of system models: linear time-invariant (LTI) systems.

What do signal processors and engineers mean by system? Most generally, a system is a rule or mapping that associates one or more input signals to one or more output signals. As we did with signals, we discuss here various useful dichotomies that break up the set of all systems into different subsets with important properties: important to mathematical analysis as well as to design and implementation. Then we’ll look at time-domain input/output relationships for linear systems. In a future post we’ll look at the properties of linear systems in the frequency domain.
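Although the formal definitions come in the post itself, the two defining properties can be checked numerically, as in this minimal sketch (my own, using an arbitrary three-point averaging system and arbitrary test signals):

import numpy as np

def system(x):
    """A three-point moving average: linear and time-invariant."""
    return np.convolve(x, np.ones(3) / 3, mode='full')

rng = np.random.default_rng(2)
x1, x2 = rng.normal(size=100), rng.normal(size=100)
a, b = 2.0, -0.5

# Linearity: T{a*x1 + b*x2} equals a*T{x1} + b*T{x2}
print(np.allclose(system(a*x1 + b*x2), a*system(x1) + b*system(x2)))

# Time invariance: delaying the input by d samples delays the output by d samples
d = 7
x1_delayed = np.concatenate([np.zeros(d), x1])
print(np.allclose(system(x1_delayed)[d:], system(x1)))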

Continue reading “SPTK: Linear Time-Invariant Systems”

SPTK: The Fourier Transform

An indispensable tool in CSP and all of signal processing!

Previous SPTK Post: The Fourier Series      Next SPTK Post: Linear Systems

This post in the Signal Processing Toolkit series deals with a key mathematical tool in CSP: The Fourier transform. Let’s try to see how the Fourier transform arises from a limiting version of the Fourier series.
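One way to make the connection concrete is to compare a crude numerical approximation of a Fourier transform with a known closed form, as in this Python sketch (mine, not the post's; the pulse width, sampling rate, and record length are arbitrary): the DFT of a sampled rectangular pulse, scaled by the sample spacing, approximates the magnitude of the analytic transform T·sinc(fT).

import numpy as np

fs = 1000.0                                   # sampling rate used for the approximation (Hz)
T = 0.1                                       # width of the rectangular pulse (s)
t = np.arange(-1.0, 1.0, 1/fs)
x = (np.abs(t) <= T/2).astype(float)          # rectangular pulse centered at t = 0

# Riemann-sum approximation of |X(f)|, where X(f) = integral of x(t) exp(-j 2 pi f t) dt
X_mag = np.abs(np.fft.fft(x)) / fs
f = np.fft.fftfreq(t.size, 1/fs)

X_analytic = np.abs(T * np.sinc(f * T))       # known transform magnitude |T sinc(fT)|
print(np.max(np.abs(X_mag - X_analytic)))     # small discretization error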

Continue reading “SPTK: The Fourier Transform”

SPTK: The Fourier Series

A crucial tool for developing the temporal parameters of CSP.

Previous SPTK Post: Signal Representations            Next SPTK Post: The Fourier Transform

This installment of the Signal Processing Toolkit shows how the Fourier series arises from a consideration of representing arbitrary signals as vectors in a signal space. We also provide several examples of Fourier series calculations, interpret the Fourier series, and discuss its relevance to cyclostationary signal processing.
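As one small worked example (my own sketch, not a calculation from the post), the complex Fourier-series coefficients of a unit-amplitude square wave can be estimated numerically and compared with the well-known analytic values, which are 2/(jπk) for odd k and zero otherwise:

import numpy as np

T0 = 1.0                            # period in seconds
N = 1000                            # samples per period
t = np.arange(N) / N * T0
x = np.where(t < T0/2, 1.0, -1.0)   # square wave: +1 over the first half period, -1 over the second

# c_k = (1/T0) * integral over one period of x(t) exp(-j 2 pi k t / T0) dt
k = np.arange(-5, 6)
c_numeric = np.array([np.mean(x * np.exp(-2j*np.pi*kk*t/T0)) for kk in k])

# Analytic coefficients; the "+ (k == 0)" just avoids a divide-by-zero warning at k = 0
c_analytic = np.where(k % 2 != 0, 2 / (1j*np.pi*k + (k == 0)), 0)
print(np.max(np.abs(c_numeric - c_analytic)))   # small discretization error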

Continue reading “SPTK: The Fourier Series”

Signal Processing Toolkit: Signals

Introducing the SPTK on the CSP Blog. Basic signal-processing tools with discussions of their connections to and uses in CSP.

Next SPTK Post: Signal Representations

This is the inaugural post of a new series of posts I’m calling the Signal Processing Toolkit (SPTK).  The SPTK posts will cover relatively simple topics in signal processing that are useful in the practice of cyclostationary signal processing. So, they are not CSP posts, but CSP practitioners need to know this material to be successful in CSP. The CSP Blog is branching out! (But don’t worry, there are more CSP posts coming too.)

Continue reading “Signal Processing Toolkit: Signals”

The Ambiguity Function and the Cyclic Autocorrelation Function: Are They the Same Thing?

To-may-to, to-mah-to?

Let’s talk about ambiguity and correlation. The ambiguity function is a core component of radar signal processing practice and theory. The autocorrelation function and the cyclic autocorrelation function are key elements of generic signal processing and cyclostationary signal processing, respectively. Ambiguity and correlation both apply a quadratic functional to the data or signal of interest, and they both weight that quadratic functional by a complex exponential (sine wave) prior to integration or summation.

Are they the same thing? Well, my answer is both yes and no.
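The structural kinship can be seen in a few lines of Python (a sketch of my own, not from the post; the toy BPSK-like signal, the fixed asymmetric lag, and the frequency grid are all illustrative assumptions): for a given lag, both quantities correlate the same quadratic lag product against a complex exponential, and the cyclic-autocorrelation estimate here differs only by a time normalization.

import numpy as np

rng = np.random.default_rng(3)
fs, sps = 1000.0, 8
bits = rng.integers(0, 2, 512) * 2 - 1
x = np.repeat(bits, sps).astype(complex)         # toy BPSK-like signal, 8 samples per bit

tau = 4                                          # a fixed (asymmetric) lag, in samples
lag_product = x[tau:] * np.conj(x[:-tau])        # the common quadratic functional of the data
t = np.arange(lag_product.size) / fs

nu = np.linspace(-fs/2, fs/2, 401)               # Doppler / cycle-frequency variable
E = np.exp(-2j*np.pi*np.outer(nu, t))            # bank of complex exponentials
ambiguity = E @ lag_product                      # ambiguity-function slice at this lag (a sum)
caf = ambiguity / lag_product.size               # cyclic-autocorrelation estimate (a time average)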

Continue reading “The Ambiguity Function and the Cyclic Autocorrelation Function: Are They the Same Thing?”

Can a Machine Learn a Power Spectrum Estimator?

Learning machine learning for radio-frequency signal-processing problems, continued.

I continue with my foray into machine learning (ML) by considering whether we can use widely available ML tools to create a machine that can output accurate power spectrum estimates. Previously we considered the perhaps simpler problem of learning the Fourier transform. See here and here.

Along the way I’ll expose my ignorance of the intricacies of machine learning and my apparent inability to find the correct hyperparameter settings for any problem I look at. But, that’s where you come in, dear reader. Let me know what to do!

Continue reading “Can a Machine Learn a Power Spectrum Estimator?”

A Challenge for the Machine Learners

The machine-learning modulation-recognition community consistently claims vastly superior performance to anything that has come before. Let’s test that.

Update February 2023: A third dataset has been posted here. This new dataset, CSPB.ML.2023, features cochannel signals.

Update April 2022: I’ve also posted a second dataset here. This new dataset is similar to the original ML Challenge dataset except the random variable representing the carrier frequency offset has a slightly different distribution.

If you refer to any of the posted datasets in a published paper, please use the following designators, which I am also using in papers I’m attempting to publish:

Original ML Challenge Dataset: CSPB.ML.2018.

Shifted ML Challenge Dataset: CSPB.ML.2022.

Cochannel ML Dataset: CSPB.ML.2023.

Update February 2019: I’ve decided to post the data set I discuss here to the CSP Blog for all interested parties to use. See the new post on the Data Set. If you do use it, please let me and the CSP Blog readers know how you fared with your experiments in the Comments section of either post. Thanks!

Continue reading “A Challenge for the Machine Learners”

Conjugation Configurations

Using complex-valued signal representations is convenient but also has complications: You have to consider all possible choices for conjugating different factors in a moment.

When we considered complex-valued signals and second-order statistics, we ended up with two kinds of parameters: non-conjugate and conjugate. So we have the non-conjugate autocorrelation, which is the expected value of the second-order lag product in which exactly one of the factors is conjugated (consistent with the usual definition of variance for complex-valued random variables),

\displaystyle R_x(t, \boldsymbol{\tau}) = E \left[ x(t+\tau_1)x^*(t+\tau_2) \right] \hfill (1)

and the conjugate autocorrelation, which is the expected value of the second-order lag product in which neither factor is conjugated

\displaystyle R_{x^*}(t, \boldsymbol{\tau}) = E \left[ x(t+\tau_1)x(t+\tau_2) \right]. \hfill (2)

The complex-valued Fourier-series amplitudes of these functions of time t are the non-conjugate and conjugate cyclic autocorrelation functions, respectively.

The Fourier transforms of the non-conjugate and conjugate cyclic autocorrelation functions are the non-conjugate and conjugate spectral correlation functions, respectively.
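Here is a minimal numerical sketch of my own (not from the post) contrasting estimates of the two lag products in (1) and (2) for a toy rectangular-pulse BPSK signal with a small carrier-frequency offset; all parameter values, and the particular cycle frequencies tested, are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)
fs, sps, f0 = 1000.0, 10, 37.0                    # sample rate (Hz), samples per bit, carrier offset (Hz)
bits = rng.integers(0, 2, 2000) * 2 - 1
t = np.arange(bits.size * sps) / fs
x = np.repeat(bits, sps) * np.exp(2j*np.pi*f0*t)  # toy rectangular-pulse BPSK complex envelope

tau = sps // 2                                    # a nonzero lag, in samples
xp, xm = x[tau:], x[:-tau]
tt = t[:xm.size]
bit_rate = fs / sps

test_alphas = {'alpha = 0': 0.0, 'bit rate': bit_rate, '2*f0': 2*f0,
               '2*f0 + bit rate': 2*f0 + bit_rate, 'not a cycle freq': 133.3}

for name, alpha in test_alphas.items():
    e = np.exp(-2j*np.pi*alpha*tt)
    R_nc = np.mean(xp * np.conj(xm) * e)          # non-conjugate form, as in (1): one factor conjugated
    R_c = np.mean(xp * xm * e)                    # conjugate form, as in (2): no factor conjugated
    print(f'{name:>18s}: |R_nc| = {abs(R_nc):.3f}   |R_c| = {abs(R_c):.3f}')

The non-conjugate estimates are large at zero and at multiples of the bit rate, while the conjugate estimates are large near twice the carrier offset plus multiples of the bit rate, so the two parameter sets capture different features of the same signal.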

I never explained the fundamental reason why both the non-conjugate and conjugate functions are needed. In this post, I rectify that omission. The reason for the many different choices of conjugated factors in higher-order cyclic moments and cumulants is also provided. These choices of conjugation configurations, or conjugation patterns, also appear in the more conventional theory of higher-order statistics as applied to stationary signals.

Continue reading “Conjugation Configurations”