SPTK: Frequency Response of LTI Systems

Previous SPTK Post: LTI Systems | Next SPTK Post: Interconnection of LTI Systems

We continue our progression of Signal-Processing ToolKit posts by looking at the frequency-domain behavior of linear time-invariant (LTI) systems. In the previous post, we established that the time-domain output of an LTI system is completely determined by the input and by the response of the system to an impulse input applied at time zero. This response is called the impulse response and is typically denoted by h(t).
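For quick reference, the time-domain input/output relation established in that post, and the frequency-domain counterpart developed here, are

\displaystyle y(t) = \int h(u)\, x(t-u)\, du \qquad \Longleftrightarrow \qquad Y(f) = H(f)\, X(f),

where \displaystyle H(f) = \int h(t)\, e^{-i 2 \pi f t}\, dt, the Fourier transform of the impulse response, is called the frequency response of the system.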

Continue reading “SPTK: Frequency Response of LTI Systems”

Symmetries of Higher-Order Temporal Probabilistic Parameters in CSP

In this post, we continue our study of the symmetries of CSP parameters. The second-order parameters (spectral correlation and cyclic correlation) are covered in detail in the companion post, including the symmetries for ‘auto’ and ‘cross’ versions of those parameters.

Here we tackle the generalizations of cyclic correlation: cyclic temporal moments and cumulants. We’ll deal with the generalization of the spectral correlation function, the cyclic polyspectra, in a subsequent post. It makes sense to me to focus first on the higher-order temporal parameters, because I consider the temporal parameters to be much more useful in practice than the spectral parameters.

This topic is somewhat harder and more abstract than the second-order topic, but perhaps there are bigger payoffs in algorithm development for exploiting symmetries in higher-order parameters than in second-order parameters because the parameters are multidimensional. So it could be worthwhile to sally forth.

Continue reading “Symmetries of Higher-Order Temporal Probabilistic Parameters in CSP”

CSP Resources: The Ultimate Guides to Cyclostationary Random Processes by Professor Napolitano

My friend and colleague Antonio Napolitano has just published a new book on cyclostationary signals and cyclostationary signal processing:

Cyclostationary Processes and Time Series: Theory, Applications, and Generalizations, Academic Press/Elsevier, 2020, ISBN: 978-0-08-102708-0. The book is a comprehensive guide to the structure of cyclostationary random processes and signals, and it also provides pointers to the literature on many different applications. The book is mathematical in nature; use it to deepen your understanding of the underlying mathematics that make CSP possible.

You can check out the book on amazon.com using the following link:

Cyclostationary Processes and Time Series

Continue reading “CSP Resources: The Ultimate Guides to Cyclostationary Random Processes by Professor Napolitano”

On Impulsive Noise, CSP, and Correntropy

I’ve seen several published and pre-published (arXiv.org) technical papers over the past couple of years on the topic of cyclic correntropy (The Literature [R123-R127]). I first criticized such a paper ([R123]) here, but the substance of that review was about my problems with the presented mathematics, not impulsive noise and its effects on CSP. Since the papers keep coming, apparently, I’m going to put down some thoughts on impulsive noise and some evidence regarding simple means of mitigation in the context of CSP. Preview: I don’t think we need to go to the trouble of investigating cyclic correntropy as a means of salvaging CSP from the clutches of impulsive noise.

Continue reading “On Impulsive Noise, CSP, and Correntropy”

Sponsoring the CSP Blog

I’ve decided to solicit donations to the CSP Blog through PayPal. For the past four years, I’ve been writing blog posts and doing my best to answer comments at no cost to my readers. And it has turned out very well indeed, thanks to all the people that stop by to read and contribute.

Continue reading “Sponsoring the CSP Blog”

For the Beginner at CSP

Here is a list of links to CSP Blog posts that I think are suitable for a beginner: read them in the order given.

How to Obtain Help from the CSP Blog

Introduction to CSP

How to Create a Simple Cyclostationary Signal: Rectangular-Pulse BPSK

The Cyclic Autocorrelation Function

The Spectral Correlation Function

The Cyclic Autocorrelation for BPSK

Continue reading “For the Beginner at CSP”

A Gallery of Cyclic Correlations

There are some situations in which the spectral correlation function is not the preferred measure of (second-order) cyclostationarity. In these situations, the cyclic autocorrelation (non-conjugate and conjugate versions) may be much simpler to estimate and work with in terms of detector, classifier, and estimator structures. So in this post, I’m going to provide plots of the cyclic autocorrelation for each of the signals in the spectral correlation gallery post. The exceptions are the signals I called feature-rich in that gallery post, such as LTE and radar. Recall that such signals possess a large number of cycle frequencies, and plotting their three-dimensional spectral correlation surface is not helpful because it is difficult to interpret by eye. So for the cycle-frequency patterns of feature-rich signals, we’ll rely on the stem-style (cyclic-domain profile) plots in the gallery post.

Continue reading “A Gallery of Cyclic Correlations”

On The Shoulders

What modest academic success I’ve had in the area of cyclostationary signal theory and cyclostationary signal processing is largely due to the patient mentorship of my doctoral adviser, William (Bill) Gardner, and the fact that I was able to build on an excellent foundation put in place by Gardner, his advisor Lewis Franks, and key Gardner students such as William (Bill) Brown.

Continue reading “On The Shoulders”

Simple Synchronization Using CSP

In this post I discuss the use of cyclostationary signal processing applied to communication-signal synchronization problems. First, just what are synchronization problems? Synchronize and synchronization have multiple meanings, but the meaning of synchronize that is relevant here is something like:

syn·chro·nize: To cause to occur or operate with exact coincidence in time or rate

If we have an analog amplitude-modulated (AM) signal (such as voice AM used in the AM broadcast bands) at a receiver, we want to remove the effects of the carrier sine wave, resulting in an output that is only the original voice or music message. If we have a digital signal such as binary phase-shift keying (BPSK), we want to remove the effects of the carrier but also sample the message signal at the correct instants to optimally recover the transmitted bit sequence.

Continue reading “Simple Synchronization Using CSP”

100,000 Page Views!

The CSP Blog has reached 100,000 page views! Also, a while back it passed the “20,000 visitors” milestone. All of this for 53 posts and 10 pages. More to come!

[Figure: yearly page-view totals for the CSP Blog]

I started the CSP Blog in late 2015, so it has taken a bit over three years to get to 100,000 views. I don’t know if that should be considered fast or slow. But I like it anyway.

I want to thank each and every one of the visitors to the CSP Blog. It has reached so many more people than I thought it ever would when I started it.

Thank you for all your clicks, comments, emails, and downloads! If you’d like, leave a comment to this post if you have an idea for a post you’d like to see.

Below the fold, some graphics that show the vital statistics of the CSP Blog as of the 100,000 page-view milestone.

Continue reading “100,000 Page Views!”

Can a Machine Learn a Power Spectrum Estimator?

I continue with my foray into machine learning (ML) by considering whether we can use widely available ML tools to create a machine that can output accurate power spectrum estimates. Previously we considered the perhaps simpler problem of learning the Fourier transform. See here and here.

Along the way I’ll expose my ignorance of the intricacies of machine learning and my apparent inability to find the correct hyperparameter settings for any problem I look at. But, that’s where you come in, dear reader. Let me know what to do!

Continue reading “Can a Machine Learn a Power Spectrum Estimator?”

Data Set for the Machine-Learning Challenge

Update September 2020. I made a mistake when I created the signal-parameter “truth” files signal_record.txt and signal_record_first_20000.txt. As with the DeepSig RML data sets that I analyzed on the CSP Blog here and here, the SNR parameter in the truth files did not match the actual SNR of the signals in the data files. I’ve updated the truth files and the links below. You can still use the original files for all other signal parameters, but the SNR parameter was in error.

Update July 2020. I originally posted 20,000 signals in the posted data set. I’ve now added another 92,000 for a total of 112,000 signals. The original signals are contained in Batches 1-5, the additional signals in Batches 6-28. I’ve placed these additional Batches at the end of the post to preserve the original post’s content.

I’ve posted 20,000 PSK/QAM signals to the CSP Blog. These are the signals I refer to in the post I wrote challenging the machine-learners. In this brief post, I provide links to the data and describe how to interpret the text file containing the signal-type labels and signal parameters.

Overview of Data Set

The 20,000 signals are stored in five zip files, each containing 4000 individual signal files:

Batch 1

Batch 2

Batch 3

Batch 4

Batch 5

The zip files are each about 1 GB in size.

The modulation-type labels for the signals, such as “BPSK” or “MSK,” are contained in the text file:

signal_record_first_20000.txt

Each signal file is stored in a binary format with interleaved real and imaginary sample values; I call these ‘.tim’ files. You can read a .tim file into MATLAB using read_binary.m. Or use the code inside read_binary.m to write your own data-reader; the format is quite simple.
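If you want to roll your own reader, here is a minimal sketch consistent with the interleaved real/imaginary format described above. The function name read_tim_sketch is mine, and the 32-bit floating-point precision and native byte order are assumptions on my part, not documented facts; confirm both against read_binary.m.

function x = read_tim_sketch(filename)
% Minimal '.tim' reader sketch: interleaved real/imaginary samples in a
% flat binary file. ASSUMPTIONS: 32-bit floats, native byte order.
fid = fopen(filename, 'rb');
if fid < 0
    error('read_tim_sketch:open', 'Could not open %s', filename);
end
raw = fread(fid, Inf, 'float32');     % assumed precision
fclose(fid);
x = raw(1:2:end) + 1i*raw(2:2:end);   % de-interleave: real, imag, real, imag, ...
end

If those assumptions hold, x = read_tim_sketch('signal_1.tim'); x(1:10) should reproduce the ten sample values listed further below.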

The Label and Parameter File

Let’s look at the format of the truth/label file. The first line of signal_record_first_20000.txt is

1 bpsk  11  -7.4433467080e-04  9.8977795076e-01  10  9  5.4532617590e+00  0.0

which comprises 9 fields. All temporal and spectral parameters (times and frequencies) are normalized with respect to the sampling rate. In other words, the sampling rate can be taken to be unity in this data set. These fields are described in the following list:

  1. Signal index. In the case above, this is ‘1’, which means the file containing the signal is called signal_1.tim. In general, the nth signal is contained in the file signal_n.tim. The Batch 1 zip file contains signal_1.tim through signal_4000.tim.
  2. Signal type. A string indicating the modulation format of the signal in the file. For this data set, I’ve only got eight modulation types: BPSK, QPSK, 8PSK, \pi/4-DQPSK, 16QAM, 64QAM, 256QAM, and MSK. These are denoted by the strings bpsk, qpsk, 8psk, dqpsk, 16qam, 64qam, 256qam, and msk, respectively.
  3. Base symbol period. In the example above (line one of the truth file), the base symbol period is T_0 = 11.
  4. Carrier offset. In this case, it is -7.4433467080\times 10^{-4}.
  5. Excess bandwidth. The excess bandwidth parameter, or square-root raised-cosine roll-off parameter, applies to all of the signal types except MSK. Here it is 9.8977795076\times 10^{-1}. It can be any real number between 0.1 and 1.0.
  6. Upsample factor. The sixth field is an upsampling parameter U.
  7. Downsample factor. The seventh field is a downsampling parameter D. The actual symbol rate of the signal in the file is computed from the base symbol period, upsample factor, and downsample factor: \displaystyle f_{sym} = (1/T_0)*(D/U). So the BPSK signal in signal_1.tim has rate 0.08181818. If the downsample factor is zero in the truth-parameters file, no resampling was done to the signal. (A small parsing sketch follows this list.)
  8. Inband SNR (dB). The ratio of the signal power to the noise power within the signal’s bandwidth, taking into account the signal type and the excess bandwidth parameter.
  9. Noise spectral density (dB). It is always 0 dB. So the various SNRs are generated by varying the signal power.
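Here is the parsing sketch promised in item 7: a few lines of base MATLAB (my own illustration, not part of the posted data set) that pull the fields out of the example truth-file line and compute the actual symbol rate.

truthline = '1 bpsk  11  -7.4433467080e-04  9.8977795076e-01  10  9  5.4532617590e+00  0.0';
f = strsplit(strtrim(truthline));  % nine whitespace-separated fields
idx   = str2double(f{1});          % signal index -> signal_1.tim
mtype = f{2};                      % modulation label, e.g., 'bpsk'
T0    = str2double(f{3});          % base symbol period
fc    = str2double(f{4});          % carrier offset (normalized)
ebw   = str2double(f{5});          % excess bandwidth (SRRC roll-off)
U     = str2double(f{6});          % upsample factor
D     = str2double(f{7});          % downsample factor
snr   = str2double(f{8});          % inband SNR (dB)
if D == 0
    fsym = 1/T0;                   % zero downsample factor: no resampling
else
    fsym = (1/T0)*(D/U);           % here: (1/11)*(9/10) = 0.08181818...
end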

To ensure that you have correctly downloaded and interpreted my data files, I’m going to provide some PSD plots and a few of the actual sample values for a couple of the files.

signal_1.tim

The line from the truth file is:

1 bpsk  11  -7.4433467080e-04  9.8977795076e-01  10  9  5.4532617590e+00  0.0

The first ten samples of the file are:

-5.703014e-02   -6.163056e-01
-1.285231e-01   -6.318392e-01
6.664069e-01    -7.007506e-02
7.731103e-01    -1.164615e+00
3.502680e-01    -1.097872e+00
7.825349e-01    -3.721564e-01
1.094809e+00    -3.123962e-01
4.146149e-01    -5.890701e-01
1.444665e+00    7.358724e-01
-2.217039e-01   -1.305001e+00

An FSM-based PSD estimate for signal_1.tim is:

[Figure: FSM-based PSD estimate for signal_1.tim]

And the blindly estimated cycle frequencies (using the SSCA) are:

[Figure: blindly estimated cycle frequencies (SSCA) for signal_1.tim]

The previous plot corresponds to the numerical values:

Non-conjugate (\alpha, C, S):

8.181762695e-02  7.480e-01  5.406e+00

Conjugate (\alpha, C, S):

8.032470942e-02  7.800e-01  4.978e+00
-1.493096002e-03  8.576e-01  1.098e+01
-8.331298083e-02  7.090e-01  5.039e+00

signal_4000.tim

The line from the truth file is

4000 256qam  9  8.3914849139e-04  7.2367959637e-01  9  8  1.0566301192e+01  0.0

which means the symbol rate is given by (1/9)*(8/9) = 0.09876543209. The carrier offset is 0.000839 and the excess bandwidth is 0.723. Because the signal type is 256QAM, it has a single (non-zero) non-conjugate cycle frequency of 0.098765 and no conjugate cycle frequencies. But the square of the signal has cycle frequencies related to the quadrupled carrier:
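A quick sketch of why squaring exposes the quadrupled carrier: writing the signal as x(t) = a(t) e^{i 2\pi f_c t}, where a(t) is the complex 256QAM baseband waveform, we have

\displaystyle x^2(t+\tau/2)\, x^2(t-\tau/2) = a^2(t+\tau/2)\, a^2(t-\tau/2)\, e^{i 2\pi (4 f_c) t},

so the conjugate second-order statistics of the squared signal are fourth-order statistics of x(t), with cycle frequencies of the form 4 f_c + k f_{sym}.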

[Figure: estimated cycle frequencies for signal_4000.tim]

Final Thoughts

Is 20,000 waveforms a large enough data set? Maybe not. I have generated tens of thousands more, but will not post them until there is a good reason to do so. And that, my friends, is up to you!

That’s about it. I think that gives you enough information to ensure that you’ve interpreted the data and the labels correctly. What remains is experimentation, machine-learning or otherwise, I suppose. Please get back to me and the readers of the CSP Blog with any interesting results using the Comments section of this post or the Challenge post.

For my analysis of a commonly used machine-learning modulation-recognition data set (RML), see the All BPSK Signals post.

Additional Batches of Signals:

Batch 6

Batch 7

Batch 8

Batch 9

Batch 10

Batch 11

Batch 12

Batch 13

Batch 14

Batch 15

Batch 16

Batch 17

Batch 18

Batch 19

Batch 20

Batch 21

Batch 22

Batch 23

Batch 24

Batch 25

Batch 26

Batch 27

Batch 28

Signal parameters text file

MATLAB’s SSCA: commP25ssca.m

In this short post, I describe some errors that are produced by MATLAB’s strip spectral correlation analyzer function commP25ssca.m. I don’t recommend that you use it; far better to create your own function.

Continue reading “MATLAB’s SSCA: commP25ssca.m”

How we Learned CSP

This post is just a blog post. Just some guy on the internet thinking out loud. If you have relevant thoughts or arguments you’d like to advance, please leave them in the Comments section at the end of the post.

How did we, as people and not machines, learn to do cyclostationary signal processing? We’ve successfully applied it to many real-world problems, such as weak-signal detection, interference-tolerant detection, interference-tolerant time-delay estimation, modulation recognition, joint multiple-cochannel-signal modulation recognition (My Papers [25,26,28,38,43]), synchronization (The Literature [R7]), beamforming (The Literature [R102,R103]), direction-finding (The Literature [R104-R106]), detection of imminent mechanical failures (The Literature [R107-R109]), linear time-invariant system identification (The Literature [R110-R115]), and linear periodically time-variant filtering for cochannel signal separation (FRESH filtering) (My Papers [45], The Literature [R6]).

How did this come about? Is it even interesting to ask the question? Well, it is to me. I ask it because of the current hot topic in signal processing: machine learning. And in particular, machine learning applied to modulation recognition (see here and here). The machine learners want to capitalize on the success of machine learning applied to image recognition by directly applying the same sorts of image-recognition techniques to the problem of automatic type-recognition for human-made electromagnetic waves.

Continue reading “How we Learned CSP”

Useful Signal Processing Blogs or Websites?

Update November 1, 2018: A site called feedspot (blog.feedspot.com) contacted me to tell me I made their “Top 10 Digital Signal Processing Blogs, Websites & Newsletters in 2018” list. Weirdly, there are only eight blogs in the list. What’s most important for this post are the other signal-processing blogs on the list. So check it out if you are looking for other sources of online signal-processing information. Enjoy!  blog.feedspot.com/digital_signal_processing_blogs

***             ***             ***

Some of my CSP posts get a lot of comments asking for help, and that’s a good thing. I continue to try to help readers to help themselves. Throughout my posts, I link terms and methods to webpages that provide tutorial or advanced information, and most of the time that means Wikipedia.

But I’d like to be able to refer readers to good websites that discuss related aspects of signal processing and communication signals, such as filtering, spectrum estimation, mathematical models, Fourier analysis, etc. I’ve had little success with the Google searches I’ve tried.

Continue reading “Useful Signal Processing Blogs or Websites?”

Comments on “Detection of Almost-Cyclostationarity: An Approach Based on a Multiple Hypothesis Test” by S. Horstmann et al

I recently came across the conference paper in the post title (The Literature [R101]). Let’s take a look.

The paper is concerned with “detect[ing] the presence of ACS signals with unknown cycle period.” In other words, blind cyclostationary-signal detection and cycle-frequency estimation. Of particular importance to the authors is the case in which the “period of cyclostationarity” is not equal to an integer number of samples. They seem to think this is a new and difficult problem. By my lights, it isn’t. But maybe I’m missing something. Let me know in the Comments.

Continue reading “Comments on “Detection of Almost-Cyclostationarity: An Approach Based on a Multiple Hypothesis Test” by S. Horstmann et al”

A Challenge for the Machine Learners

UPDATE

I’ve decided to post the data set I discuss here to the CSP Blog for all interested parties to use. See the new post on the Data Set. If you do use it, please let me and the CSP Blog readers know how you fared with your experiments in the Comments section of either post. Thanks!

Continue reading “A Challenge for the Machine Learners”

CSP Estimators: The FFT Accumulation Method

Let’s look at another spectral correlation function estimator: the FFT Accumulation Method (FAM). This estimator is in the time-smoothing category, is exhaustive in that it is designed to compute estimates of the spectral correlation function over its entire principal domain, and is efficient, making it a competitor to the Strip Spectral Correlation Analyzer (SSCA) method. I implemented my version of the FAM by using the paper by Roberts et al (The Literature [R4]). If you follow the equations closely, you can successfully implement the estimator from that paper. The tricky part, as with the SSCA, is correctly associating the outputs of the coded equations with their proper \displaystyle (f, \alpha) values.
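For what it’s worth, here is the output-to-(f, \alpha) association I use; the indexing conventions are my own, so verify them against your own channelizer and second-FFT bookkeeping rather than taking them as gospel. For channelizer bins with center frequencies f_k and f_l, channelizer hop (decimation) factor L, and second-FFT bin q out of P total hops,

\displaystyle f = \frac{f_k + f_l}{2}, \qquad \alpha = (f_k - f_l) + \frac{q}{PL},

with q ranging over [-P/2, P/2) and all frequencies normalized to the sampling rate.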

Continue reading “CSP Estimators: The FFT Accumulation Method”

‘Can a Machine Learn the Fourier Transform?’ Redux, Plus Relevant Comments on a Machine-Learning Paper by M. Kulin et al.

I first considered whether a machine (neural network) could learn the (64-point, complex-valued) Fourier transform in this post. I used MATLAB’s Neural Network Toolbox and I failed to get good learning results because I did not properly set the machine’s hyperparameters. A kind reader named Vito Dantona provided a comment to that original post that contained good hyperparameter selections, and I’m going to report the new results here in this post.

Since the Fourier transform is linear, the machine should be set up to do linear processing. It can’t just figure that out for itself. Once I used Vito’s suggested hyperparameters to force the machine to be linear, the results became much better:
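As a quick aside before the results, here is a bare-bones way to see that a purely linear machine suffices. This sketch is mine: it uses base MATLAB and fits the linear map by ordinary least squares rather than by training a network, since the 64-point DFT is just multiplication by a fixed matrix W.

N = 64;
M = 4096;                                  % number of training pairs
X = randn(N, M) + 1i*randn(N, M);          % random complex training inputs
Y = fft(X);                                % targets: columnwise 64-point DFTs
W_hat = Y / X;                             % least-squares fit of W_hat*X ~ Y
W_true = exp(-2i*pi*(0:N-1).'*(0:N-1)/N);  % the DFT matrix, for comparison
max(abs(W_hat(:) - W_true(:)))             % error at floating-point round-off level

A linear neural network (no hidden-layer nonlinearity) can represent the same matrix exactly, which is why forcing the machine to be linear matters so much here.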

Continue reading “‘Can a Machine Learn the Fourier Transform?’ Redux, Plus Relevant Comments on a Machine-Learning Paper by M. Kulin et al.”

Computational Costs for Spectral Correlation Estimators

Let’s look at the computational costs for spectral-correlation analysis using the three main estimators I’ve previously described on the CSP Blog: the frequency-smoothing method (FSM), the time-smoothing method (TSM), and the strip spectral correlation analyzer (SSCA).

We’ll see that the FSM and TSM are the low-cost options when estimating the spectral correlation function for a few cycle frequencies and that the SSCA is the low-cost option when estimating the spectral correlation function for many cycle frequencies. That is, the TSM and FSM are good options for directed analysis using prior information (values of cycle frequencies) and the SSCA is a good option for exhaustive blind analysis, for which there is no prior information available.

Continue reading “Computational Costs for Spectral Correlation Estimators”