In this post I introduce the spectral coherence function, or just coherence. It deserves its own post because the coherence is a useful detection statistic for blindly determining significant cycle frequencies of arbitrary data records. See the posts on the strip spectral correlation analyzer and the FFT accumulation method for examples.

Let’s start by reviewing the standard correlation coefficient defined for two random variables $X$ and $Y$,

$$\rho_{XY} = \frac{E\left[ (X - m_X)(Y - m_Y)^* \right]}{\sigma_X \sigma_Y},$$

where $m_X$ and $m_Y$ are the mean values of $X$ and $Y$, and $\sigma_X$ and $\sigma_Y$ are the standard deviations of $X$ and $Y$. That is,

$$\sigma_X^2 = E\left[ |X - m_X|^2 \right], \qquad \sigma_Y^2 = E\left[ |Y - m_Y|^2 \right].$$

So the correlation coefficient is the covariance between $X$ and $Y$ divided by the geometric mean of the variances of $X$ and $Y$.

Now consider the spectral correlation function,

$$S_x^\alpha(f) = \lim_{T\rightarrow\infty} \frac{1}{T}\, E\left[ X_T(t, f+\alpha/2)\, X_T^*(t, f-\alpha/2) \right],$$

where the expectation operator $E[\cdot]$ can be either a time average or the ensemble average used in a stochastic-process formulation. For each finite $T$, we can consider the quantity

$$S_{x,T}^\alpha(t, f) = \frac{1}{T}\, E\left[ X_T(t, f+\alpha/2)\, X_T^*(t, f-\alpha/2) \right],$$

which is proportional to the correlation between the two random variables $X_T(t, f+\alpha/2)$ and $X_T(t, f-\alpha/2)$. To form the correlation coefficient, we need the variances of these two random variables,

$$\sigma_+^2 = \frac{1}{T}\, E\left[ \left| X_T(t, f+\alpha/2) \right|^2 \right]$$

and

$$\sigma_-^2 = \frac{1}{T}\, E\left[ \left| X_T(t, f-\alpha/2) \right|^2 \right].$$

Here we are assuming that the two variables have zero means, which is always true when there is no additive sine-wave component in the data with frequency $f + \alpha/2$ or $f - \alpha/2$.

For finite $T$, then, we can form the correlation coefficient

$$\rho_T = \frac{S_{x,T}^\alpha(t, f)}{\left[ \sigma_+^2\, \sigma_-^2 \right]^{1/2}}.$$

Now, as $T \rightarrow \infty$, the numerator of $\rho_T$ converges to the spectral correlation function $S_x^\alpha(f)$, and

$$\sigma_+^2 \rightarrow S_x^0(f + \alpha/2)$$

and

$$\sigma_-^2 \rightarrow S_x^0(f - \alpha/2).$$

If the limit of the quotient exists, then the correlation coefficient is given by

$$C_x^\alpha(f) = \frac{S_x^\alpha(f)}{\left[ S_x^0(f+\alpha/2)\, S_x^0(f-\alpha/2) \right]^{1/2}}.$$

We call this function the *spectral coherence*. A similar argument holds for the *conjugate spectral coherence*,

$$C_{x^*}^\alpha(f) = \frac{S_{x^*}^\alpha(f)}{\left[ S_x^0(f+\alpha/2)\, S_x^0(\alpha/2 - f) \right]^{1/2}},$$

which is a normalization of the conjugate spectral correlation function. Since the coherence is a valid correlation coefficient, its magnitude is less than or equal to one, and since the involved random variables are, in general, complex-valued, so are the spectral correlation function and the coherence. Therefore, the coherence lies in the closed unit disk in the complex plane.

Here is an FSM-based estimate of the coherence for our rectangular-pulse BPSK signal:

The frequency resolution (the width of the spectral smoothing window in the FSM) is set to one percent of the sampling rate for both the numerator spectral correlation function and the denominator PSDs.

In practice, of course, the numerator and denominator of the coherence are estimates corresponding to finite-duration data blocks. The tricky part of estimating the coherence is good selection of the estimator for the denominator PSDs $S_x^0(f \pm \alpha/2)$. If the PSDs are not well resolved, or contain zeros, the coherence quotient can be numerically unstable or erroneous.
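To make the estimation steps concrete, here is a minimal block-averaging (TSM-style) sketch in Python/NumPy. It is not code from the post; the function name, block length, and the zero-denominator guard are my own illustrative choices.

```python
import numpy as np

def coherence_estimate(x, alpha, nfft=256):
    """Estimate the spectral coherence of x at cycle frequency alpha
    (normalized units: sampling rate = 1) by block averaging."""
    n = np.arange(len(x))
    # Shift the spectrum down/up by alpha/2 so the two spectral
    # components of interest land in the same FFT bin.
    u = x * np.exp(-2j * np.pi * (alpha / 2.0) * n)   # U(f) = X(f + alpha/2)
    v = x * np.exp(+2j * np.pi * (alpha / 2.0) * n)   # V(f) = X(f - alpha/2)
    num_blocks = len(x) // nfft
    U = np.fft.fft(u[:num_blocks * nfft].reshape(num_blocks, nfft), axis=1)
    V = np.fft.fft(v[:num_blocks * nfft].reshape(num_blocks, nfft), axis=1)
    scf = np.mean(U * np.conj(V), axis=0)          # numerator: SCF estimate
    psd_plus = np.mean(np.abs(U) ** 2, axis=0)     # PSD estimate at f + alpha/2
    psd_minus = np.mean(np.abs(V) ** 2, axis=0)    # PSD estimate at f - alpha/2
    denom = np.sqrt(psd_plus * psd_minus)
    # Never output a nonzero coherence where the denominator is zero.
    coh = np.where(denom > 0, scf / np.where(denom > 0, denom, 1.0), 0.0)
    return np.fft.fftshift(coh)
```

By the Cauchy-Schwarz inequality, the magnitude of this estimate cannot exceed one, which is a handy sanity check on any coherence implementation.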

I cannot help asking how to estimate the denominator PSDs. I suppose the FAM- or SSCA-based SCF estimates are obtained first. Then I think an FFT algorithm is a must to avoid excessive computational complexity when estimating the PSDs. I have tried a lot, but I cannot work out a detailed procedure for estimating the PSDs.

So could you please give some advice?

Did you look at the discussion of coherence and PSD estimates in the FAM post? Also be sure to look through the comments of that post and the SSCA post; there is relevant material there too.

Hi again Chad,

I’m just wondering what your thoughts are on how to automatically decide whether a PSD is “well resolved” or not. I’m guessing that it depends on the application requirements, but do you have any rules of thumb?

If I’m analyzing recorded signals of various lengths, how do I decide if they’re too short or otherwise might lead to low-confidence analysis results? On the flip side, if I’m recording a signal to be analyzed, how do I decide between a long recording that uses all my storage space (just to be safe) or a shorter recording that would be sufficient?

Thanks for any thoughts!

Clint

I don’t think it will be possible to automatically decide whether a PSD estimate is well-resolved. You’d have to know the true PSD of the signals in the processed data in advance. But there are spectrum features that are inherently unresolvable, such as unit-step functions and additive sine-wave components. They possess features that have spectral widths of zero, and so any spectrum estimate that possesses a finite effective spectral resolution cannot resolve them. For a step-function-like spectrum (such as a square-root raised-cosine QAM signal with zero roll-off), the edges of the spectrum cannot be perfectly estimated. For a sine-wave component, the spectrum for the frequencies near the sine-wave frequency will inevitably reflect the tone’s energy, not just the noise or other continuous-spectrum signal components that lie there.

You might consider building into a power spectrum estimator the ability to detect spectral edges or corners or the like, perhaps using a derivative or similar. Derivatives are tricky when dealing with noisy or random signals though. If you could detect such occurrences in your spectrum, you could then discount high coherence estimates that involve the spectral components near the detected edges/corners.

For a noiseless random signal with cycle frequency $\alpha$, you’ll want to process a data block whose length is many multiples of $1/\alpha$ to ensure you can see that periodicity. For a noisy signal, though, you also need to process enough data to overcome the effects of noise on the spectral correlation estimate. So there is no easy answer. I capture many wideband signals and RF scenes, and am supplied with such by others as well, and I have plenty of storage, since it is cheap and readily available. So I generally try to store long scenes, where the length in seconds greatly exceeds the reciprocal of the smallest non-zero cycle frequency of interest. That turns out to be a small number of seconds, typically.
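As a back-of-the-envelope aid for this sizing question, here is a tiny Python helper. It is my own illustration, not from the comment; the default of 1000 "multiples" is an arbitrary placeholder for "many multiples of $1/\alpha$".

```python
def min_record_length(alpha_min_hz, fs_hz, multiples=1000):
    """Record length (seconds, samples) spanning `multiples` periods of the
    smallest non-zero cycle frequency of interest."""
    seconds = multiples / alpha_min_hz
    return seconds, int(round(seconds * fs_hz))
```

For example, a smallest cycle frequency of 5 kHz sampled at 1 MHz gives 0.2 seconds, or 200,000 samples, consistent with "a small number of seconds" for typical symbol rates.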

Hi Chad!

I’m trying to implement the spectral coherence for the task of estimating the symbol rate of a low roll-off-factor signal.

If the pulse shape is bandlimited (e.g., SRRC with roll-off 0 and bandwidth B), there is a 0/0 problem when computing the coherence function, especially at frequencies where the denominator PSDs vanish.

What do you do to avoid 0/0? For in-band singularity points, I just set the NaN points to 0. For out-of-band zeros, I padded the PSD with a small value (1e-10). But this amplifies spurious candidate cycle frequencies, since dividing a small but nonzero numerator by 1e-10 produces huge coherence values.

(This was my first time using LaTeX in a comment; the equations above did not render correctly.)

The coherence is undefined whenever the denominator is zero, which can happen only when you process a noiseless signal. Are you sure you want to process noiseless signals?

Quoting the post: “Here we are assuming that the two variables have zero means, which is true when there is no additive sine-wave component in the data with frequency $f \pm \alpha/2$.”

What are your thoughts for when you do have additive sine wave components (e.g. pilot tones)? Of course they can make it easier to detect the signal, but it seems that they would confound the detection of cycle frequencies.

I seem to remember reading somewhere that it may be desirable to remove the additive sinusoid components from the PSD (by filtering the computed PSD?), but I’m not seeing it now.

Thanks!

I don’t think it is a good idea to process data containing finite-strength additive sine-wave components with a CSP estimator such as the SSCA, FAM, FSM, etc. Detecting them and removing them will help avoid false alarms due to the unresolvability of the spectrum when the signal contains sine-wave components. There is no advantage to such CSP processing anyway; if you can detect the sine wave, you can estimate its parameters, and nothing that the SCF or cyclic cumulants will tell you about the sine wave is more information than that. I probably said as much, as you suggest, in other comments or posts, but I can’t find the spot at the moment.

You’ll need to do more than process the PSD. You should try to detect the sine-wave components and remove them from the data. There are many ways to detect sine waves, but a good non-parametric way is to just use the Fourier transform and some kind of thresholding. But that’s not really CSP, so I don’t talk about it much on the Blog. It would be a reasonable topic for a Signal Processing ToolKit post, though.
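A crude illustration of the FFT-plus-thresholding idea in Python/NumPy (my own sketch, not from the post; the 15 dB margin over the median bin power is an arbitrary choice, and a real detector would be more careful about windowing and leakage):

```python
import numpy as np

def detect_tones(x, thresh_db=15.0):
    """Return FFT bin indices whose power stands thresh_db above the
    median bin power (the median is a robust stand-in for the noise floor)."""
    spec = np.abs(np.fft.fft(x)) ** 2
    floor = np.median(spec)
    return np.flatnonzero(spec > floor * 10.0 ** (thresh_db / 10.0))
```

Detected bins could then be notched out, or the fitted sinusoids subtracted from the data, before the data reaches the SSCA/FAM/FSM.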

Thanks for sharing your CSP knowledge, Chad. It seems I’m overlooking an important detail or two in this topic. First, must expected values be taken at any point?

Second, in the denominators of Eqns. 13 & 14, shifting by $\alpha/2$ and zero filling guarantees division by 0. Currently, I do not take expected values; I simply divide the BPSK signal’s SCF by the shifted PSDs and replace any division-by-zero results with 0.

My non-conjugate spectral coherence graph looks mostly like yours but doesn’t roll off nearly as much as alpha approaches 1. At $\alpha = 1$, your non-conjugate spectral coherence value appears to be about 0.4; mine is about 0.8. (I don’t see a way to attach pictures or I would do that.) Thanks for any pointers to my misunderstandings!

[Added by Chad]

Thanks for checking out the CSP Blog, Mike, and the comments!

No. Expectations can only be taken when there is an ensemble. This is almost always a fictitious mathematical ensemble. In rare cases one might have an actual ensemble at hand. The CSP Blog is about signal processing, and so any averaging we do in the context of estimation is time (e.g., TSM) or frequency (e.g., FSM) averaging. Here we are concerned with estimating the spectral correlation function, coherence function, cyclic autocorrelation function, cyclic cumulants, etc., using only a single sampled-data signal (“time-series”).

Yes, indeed. In DSP, we only have the Fourier transform over the normalized frequency interval $[-0.5, 0.5)$, and so we just have to live with the fact that the two shifted transforms $X(f+\alpha/2)$ and $X(f-\alpha/2)$ have increasingly small overlap as $|\alpha|$ increases, and similarly for a PSD estimate. And the dividing-by-zero problem is even worse than that, for it is perfectly possible for a PSD to have zeros. So you must avoid outputting a non-zero coherence estimate for any $(f, \alpha)$ pair for which the coherence denominator is zero.

That explains the appearance of my coherence plot in the post. As the cycle frequency increases in magnitude, the overlap between the two shifted transforms decreases. In continuous-time mathematical analysis, this doesn’t happen. The coherence is perfectly well defined for all $f$ and $\alpha$, except where the denominator PSDs have a zero.

As for why your coherence is not decaying as rapidly as mine, what are the signal and noise power levels?

Thanks, Chad, for your helpful reply; it seems I’m in better shape than I thought! I converted your BPSK sample code to Python as a starting point. Barring bugs (!) I create the BPSK signal from random ±1 symbols (each repeated to the samples-per-symbol count), mix to the carrier, then add noise to achieve 15 dB SNR.

And I now see that if I drop to 10 dB SNR it falls much more in line with yours! I’ll get a grasp of the basics yet. 🙂

As a short follow up to my questions, for my results described in the previous note I used the SCF FSM subroutine that I used to generate correct looking results for previous blog lessons.

Dear Chad,

I am trying to use the SCohF to blindly detect the symbol rate of a set of signals.

I am using the code from J. Antoni here. I do have my own code for the symbol-rate estimation, and I have read your critiques of the algorithm, but I wanted to be sure that what I observe is not due to a bug in my code (yeah, pretty low self-confidence ;)).

The window size for the STFT is 16 -> frequency resolution = 18.75 kHz in the example below.

For clean signals it seems to work pretty well, take for instance the SCohF for a 4-FSK signal (20ksym/s, sampling rate = 200 ksps, generated with a signal generator, SNR = 60 dB):

https://ibb.co/1R11CXr

In the picture I show, for each alpha value on the X-axis, the maximum of the mean of the absolute values returned by Antoni’s code (from the ‘S’ matrix in his code).

The peak is crystal clear, and gives the correct result.

Given the properties of CSP techniques, I would expect the SCohF to be pretty insensitive to AWGN, therefore I’ve taken the signal and lowered the SNR to 10 dB (using Matlab’s awgn() function, so there should be no issues there). In some of your posts you’ve worked with far, far worse SNRs, e.g., I recall a ROC curve where you do signal detection at -11 dB.

However, running again the code with the same parameters on the corrupted signal, this is what I obtain:

https://ibb.co/xLD9n8w

All the peaks are gone… 😦

Am I missing anything?

Thanks in advance for your help!

Best,

Rob

The dominant peak is at 20 kHz, which is good, but the maximum coherence is tiny. This implies that either (1) the SNR is not 60 dB, (2) something is going wrong in your estimator, or (3) the particular kind of FSK signal you are using has very weak cyclostationarity. There are many varieties of FSK, and their statistical properties depend on how the symbol-clock phase and carrier phase are related, and vary, over time. I suggest you use a simpler, more well-understood signal, such as my usual rectangular-pulse BPSK signal (and set its power to unity [0 dB]). That way you have lots of solid reference points to compare to what comes out of your code.

Since the peaks have completely disappeared, I would check this step. What was the exact command you used? Did you look at a PSD?

Dear Chad,

Thanks for the quick reply!

(1) I haven’t recorded the data myself, I have taken this value from the metadata that came with the signal dataset.

(2) the estimator is the one from Antoni’s paper. I simply take his code, load my signal, and run it as:

[S, alpha, ~, ~] = Fast_SC(x, Nw, alpha_max, Fs, opt);

where:

– x = my signal

– Nw = 16

– alpha_max = 100e6

– Fs = 200e6

– opt.coh = 1 (to compute the SCohF)

Then I plot mean(abs(S(:, 2 : end)))

My code does the same, but in a non-optimized way and on selected alpha values (to reduce the computational load), and the result is virtually the same (in the AWGN case, I get garbage out).

(3) This might be the answer. In my dataset I have multiple xFSK signals, and everything’s fine for all of them except this one. (I do make a few other mistakes, but those happen because I pick the wrong peak, either a harmonic or a subharmonic, so they are failures of my detection algorithm and not an issue with the CSP output.)

(4) the signal actually disappeared from the PSD…

https://ibb.co/bbXjB5B

which is weird. The SNR reduction is simply made by invoking the Matlab function:

x = awgn(x, 10, 'measured');

Thanks again for your help and have a nice day!

Rob

Good Morning Chad,

I have a question I may not be able to act on as I move to a different job after next week. However, maybe it will interest others (including my current coworkers). This article seems like a reasonable place for the question.

I appreciate the approach (e.g., Eq. 1, above) of connecting the spectral correlation and coherence functions to the more familiar (to me at least) concepts of off-diagonal elements in a covariance matrix and their underlying correlation coefficients. As we’ve been implementing the SSCA, I’ve wondered whether anyone has taken that connection farther by treating SSCA point estimates as elements of a matrix. It’s only a vague idea in the back of my CSP-novice mind, but linear algebra and probability theory have a great relationship (e.g., Kalman filtering, or factor graphs). Maybe the machinery of matrix manipulation would lead to some efficient and/or insightful approaches to interpreting SSCA outputs. Are there any research threads along these lines?

I’m not familiar with any published research along those lines. But I have been working, slowly, with others on the idea of treating the output of the SSCA as an image, and then trying to apply proven machine-learning image-recognition tools (CNNs) to do multiple-signal modulation recognition.

The SSCA output is pretty huge if you don’t do (1) some kind of thresholding and (2) cycle-frequency binning or equivalent cluster analysis.

So when you process a short record of, say, 16384 samples with an SSCA having, say, 32 strips, you get a matrix of point estimates that has 16384*32 = 524288 elements (two of these, one for the non-conjugate and one for the conjugate). There are only 32 frequencies but many more (16384) cycle frequencies, so we end up with a skinny sparse matrix I guess. I suppose some kind of sophisticated sparse-matrix tool might replace a straightforward coherence-based thresholding operation. But in any case, I strongly believe that the handful of significant cycle frequencies (said handful arrived at through thresholding and binning, for example) are the gold nuggets in the full-point-estimate pan. I would think a CNN operating on the full SSCA output would somehow focus in on those nuggets as well, but I have no good results yet.
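The thresholding-and-binning step described above could be sketched as follows in Python/NumPy (my own illustration, not code from the blog; the input is assumed to be an array of coherence magnitudes indexed by cycle frequency and spectral frequency):

```python
import numpy as np

def significant_cycle_freqs(coh_mag, alphas, thresh):
    """coh_mag: coherence magnitudes, shape (num_alphas, num_freqs).
    Returns one representative cycle frequency per contiguous cluster
    of threshold crossings (a crude form of cycle-frequency binning)."""
    peak = coh_mag.max(axis=1)                  # best coherence per cycle freq
    hits = np.flatnonzero(peak > thresh)        # threshold crossings
    # Split the crossing indices wherever they are not adjacent.
    clusters = np.split(hits, np.flatnonzero(np.diff(hits) > 1) + 1)
    # Report the strongest cycle frequency within each cluster.
    return [alphas[c[np.argmax(peak[c])]] for c in clusters if c.size]
```

The returned handful of cycle frequencies is the "gold nuggets" list; everything else in the huge point-estimate array is discarded.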

I like the idea of treating the SSCA’s output as an image. It seems well motivated based on our limited efforts thus far to process collected data through the SSCA. A heat map, of sorts, looking down on the full region of support has an image-like feel to it.

Without question, the SSCA output is gargantuan. However, the sparsity of a matrix representation gives me hope that it may still produce useful results in reasonable time. I find there are far fewer unique cycle-frequency values among all the strips than there are total point estimates, and I get the same number of unique spectral frequencies. As the total number of points grows, the number of unique values quickly becomes a pittance of the total (only about 6.2% using your above values of 16384 samples and 32 strips). A square matrix would be truly massive, but it just ‘feels’ like the sparsity and the structured relationship between the frequency values indexing the strips could lead to something useful.

I’m too far removed from linear algebra proficiency and far too new to cyclostationarity to go beyond this sparse description of my intuition. There is probably a host of very good reasons nobody conducts research along these lines. Thanks for humoring me.

Thank you for a correct characterization of the numbers of unique cycle frequencies and spectral frequencies that are output by the SSCA! My comment above is incorrect; I must have been mixing up TSM ideas with SSCA ideas, or was just plain mixed up. I hope your new position doesn’t prevent further commenting and, crucially, error-detection and correction.

I’ll return to this comment thread if I come across any serious attempts at treating the SSCA output (or FAM or whatever) as a matrix that can be successfully analyzed by linear algebraic means to some good signal-processing end.

I appreciate the sentiments; the blog has taught me a lot. While I’m not aware of a CSP connection to my new job, ya never know. Even history tends to be cyclic.

As a maybe-useful footnote to this exchange, I reworked the indexing and notation for coordinates in the bi-frequency plane to be easier on my brain. For starters, zero-based indices make me happy: it’s easier for me to see and say that sample $q$ of strip $k$ has a cycle frequency determined directly by the pair $(k, q)$. I also think of the sampling rate as a unit-conversion factor; I look at each equation’s units as Hz = (Hz/sample) * sample. Packing all the sample variables together relates to this vague sense that linear algebra might be lurking in the background, waiting to provide some insight.

Hoping for more CSP fun in the future,

John

Hello Dr. Spooner. I’m trying to compute the spectral coherence function from an already-functional spectral correlation function just by normalizing it, as you mentioned in this post. For this purpose I compute Salpha./sqrt(Salphaplus.*Sminus), where Salpha is the SCF. The question is: when I’m computing the SCF for a specific alpha, should I already apply the division by sqrt(Salphaplus.*Sminus)? I have tried almost everything but I am not able to get the same results as you. Thank you for your answer and your patience. Have a nice day.

Here is my code:

```matlab
clc;
clear all
close all
%%%%%%%%%%%%
M = 2;
Fc = 200;
Fs = Fc*10;
n_bit = 4000;
Fb = 50;
Td = n_bit/Fb; % Time duration, sec
spb = Fs/Fb;
fSpan = 2;
N_psd = 1024;
%%%%%%%%%%%%
data = randi([0 M-1],n_bit,1);
dmodData = (pskmod(data,M));
dmodData = awgn(dmodData,9,'measured');
txFilter = comm.RaisedCosineTransmitFilter("Shape","Square root","RolloffFactor",0.3,"FilterSpanInSymbols",fSpan,"OutputSamplesPerSymbol",spb);
txfilterOut = txFilter(dmodData)';
time = 0:1/Fs:Td-1/Fs;
freq = 0:1/(Td-1/Fs):Fs;
carrier = exp(1j*2*pi*Fc.*time);
txfilterOut = txfilterOut.*carrier;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
num_blocks = floor(length(txfilterOut) / N_psd);
alpha = 0:0.005:0.5;
for i = 1:length(alpha)
    txfilterOutte = txfilterOut(1, 1:(N_psd*num_blocks));
    modulplu = txfilterOutte.*exp(1j*2*pi*(Fs+alpha(1,i))/2.*[1:(N_psd*num_blocks)]);
    modulminu = txfilterOutte.*exp(1j*2*pi*(Fs-alpha(1,i))/2.*[1:(N_psd*num_blocks)]);
    modulplu = reshape(modulplu, N_psd, num_blocks);
    modulminu = reshape(modulminu, N_psd, num_blocks);
    x1 = fft(modulplu);
    x2 = fft(modulminu);
    I = x1 .* conj(x2);
    Isum = sum(I.');
    Ipsum = x1.^2;
    Imsum = (x2).^2;
    Ipsum = sum(Ipsum.');
    Imsum = sum(Imsum.');
    Iabsft(i,:) = fftshift(Isum./sqrt(Ipsum.*Imsum));
end
freq_c = ([0:(N_psd-1)]/N_psd - 0.5)*Fs;
[alko,tauco] = meshgrid(alpha,freq_c);
figure(4)
surf(tauco,alko,(abs(Iabsft))')
shading flat
xlabel('f')
ylabel('\alpha')
zlabel('S(f)\alpha [dB]')
title('SCohF')
```


There is a lot of confusion for me in this code because of the way that physical and normalized times and frequencies are mixed together and incorrectly defined. I suggest the following. Try to do all of the estimator code development using a sampling rate of one. Once you get that working correctly, you can introduce modifications for the case of a non-unity sampling rate. Those modifications should not change the calculations that you do, but instead scale the times, frequencies, and cycle frequencies that you are dealing with at the conclusion of the processing. That is, no matter the physical sampling rate, you can always do your signal processing on samples assuming that the sampling rate is one. You have to take into account the physical situation at the beginning, when you choose the data-block length to process, the number of strips in the SSCA, etc., and at the end, when you want to report the cycle frequencies or plot an SCF. Agree?

Moreover, you don’t want to mix things like the normalized cycle frequencies in your loop over i with physical sampling rates like Fs:

modulplu = txfilterOutte.*exp(1j*2*pi*(Fs+alpha(1,i))/2.*[1:(N_psd*num_blocks)]);

Here you are trying to frequency shift the data up by $\alpha/2$, but alpha here is normalized while Fs = 2000. So you are adding 2000 to tiny numbers like 0.01. You’re introducing Fs in the argument of the exponential, but then you’re also using normalized times.

When you get to the coherence calculation, I see you are trying to estimate the two required PSD values, but you are doing that by averaging the square of the FFT of x1 and of x2. Those squares are complex-valued!
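To underline the PSD point in Python/NumPy terms (a sketch under my own naming, not code from the post): a PSD estimate averages squared magnitudes $|X_k(f)|^2$, which are real and non-negative, whereas averaging $X_k(f)^2$ leaves a complex quantity whose block-to-block phases largely cancel.

```python
import numpy as np

def psd_bartlett(x, nfft=256):
    """Bartlett PSD estimate: average the squared *magnitudes* of the
    block FFTs (compare with x1.^2 in the MATLAB code above, which
    squares the complex values and keeps the phase)."""
    blocks = x[: (len(x) // nfft) * nfft].reshape(-1, nfft)
    X = np.fft.fft(blocks, axis=1)
    return np.mean(np.abs(X) ** 2, axis=0) / nfft  # real, non-negative
```

For unit-variance white noise this estimate hovers near one at every frequency, as a PSD should, while the complex-square version collapses toward zero.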

I would be a lot more careful about the quantities I’m trying to estimate/calculate and a lot less concerned, at this point, with doing things efficiently with matrices. Your use of matrix math to try to get at, essentially, the time-smoothing method, is obscuring what you are doing, so it is harder for you to see errors.

Another thing you have to watch out for: when you apply your time-domain frequency shifts, the resulting shifts of the spectrum are circular. So you will have wrap-around edge effects for larger values of alpha.