# Infinity, Periodicity, and Frequency: Comments on a Recent Signal-Processing Perspectives Paper ([R195])

If a tool isn’t appropriate for your problem, don’t blame the tool. Find another one.

Let’s take a look at a recent perspectives-style paper published in the IEEE Signal Processing Magazine called “On the Concept of Frequency in Signal Processing: A Discussion [Perspectives]” (The Literature [R195]). While I criticize the paper directly, I’m hoping to use this post to provide my own perspective, and perhaps a bit of a tutorial, on the interrelated concepts of frequency, infinity, sine waves, and signal representations.

I appreciate tutorial papers in the signal-processing literature (see, for example, my post on Candan’s article about the Dirac delta [impulse] function), because my jaundiced view of the field is such that I think the basics, both of mathematics and communication-related signal-processing, are neglected in favor of fawning over the research flavor of the month. Over time, everybody–students, researchers, professors–is diminished because of this lack of attention to foundations.

### Prelude

Before we can grapple with the authors’ complaints about the inadequacies of the frequency concept in signal processing, we might just refresh ourselves on the basics of frequency–what is it and where does it come from mathematically?

The overwhelmingly common interpretation of frequency is that it measures the periodic oscillations of a sine wave. But, then, what is a sine wave? Here at the CSP Blog, now eight years old, we have made heavy use of sine waves, also called sinusoids or in the case of complex numbers, complex exponentials. It feels awkward to have to reach this far back after intertwining CSP concepts, methods, and results with sinusoids for so many posts and published papers, but then again, Soto-Bajo’s paper is new. So I’m just now realizing that I probably should have built sinusoids from the ground up early on in the Signal Processing ToolKit series. I’ll rectify that now.

A sine wave is a function of time or space that has, as we’ll see, a strong connection to the trigonometric function known as sine. Closely related functions are cosine and tangent. We’ll build them up from basic planar (Euclidean) geometry here in order to define the sine wave itself.

#### Trigonometric Functions

Consider the unit circle in the $(x,y)$ plane, as in Figure 1. This circle is the set of points that satisfy the relation

$\displaystyle x^2 + y^2 = 1. \hfill (1)$

Looking at a point $P = (x_p, y_p)$ on the circle, I can connect that point to the origin with a straight line called H, and I can connect $P$ to the point $(x_p, 0)$ with the line $O$. Note that $O$ is at right angles to the x-axis. Finally, I can connect the origin to the point $(x_p, 0)$ using the line $A$.

We have thereby created the right triangle with sides $H, A,$ and $O$. This can be done for any point on the circle. The angle $\theta$ is of primary interest here–it is the angle between $H$ and $A$.

The trigonometric functions (the name fuses the ancient Greek words for ‘triangle’ and ‘measure’) are defined using the $HAO$ triangle. They are the ratios of the lengths of $H, A,$ and $O$, and those ratios vary as $P$ varies, which in turn causes $\theta$ to vary.

We can express the angle $\theta$ in degrees, which is common, or in radians. Considering some circle with radius $R$, an angle of one radian subtends an arc length on that circle of $R$.

Since we know that the circumference of that circle is $2\pi R$, it follows that $2\pi$ radians is equal to 360 degrees. That is, an angle of $2\pi$ rad (360 degrees) subtends an arc length on the circle equal to the arc length of the entire circle. We’ll continue with radians as we build up sinusoids from scratch, but eventually we will shift our focus away from the angles and toward frequency, getting back to our beloved Hertz at the end.

Referring to Figure 1, the trigonometric functions are nothing more than the following ratios of the sides of the defined right triangle:

$\displaystyle \sin(\theta) = \frac{O}{H} \hfill (2)$

$\displaystyle \cos(\theta) = \frac{A}{H} \hfill (3)$

$\displaystyle \tan(\theta) = \frac{O}{A} \hfill (4)$

Since $H$ can never have zero length, $\sin$ and $\cos$ are well-defined and well-behaved functions (not so $\tan$). Definitions (2)-(4) hold for any angle of a right triangle, but for the right triangle defined in Figure 1, the hypotenuse $H$ always has unit length, so in that case the sine is just $O$ and the cosine is just $A$.

So that’s the definition of the sine function. You can see how it is intimately related to any physical process that can be characterized as moving in a circle or oscillating as a function of some angle or other. The definition doesn’t, however, give a clear picture of the graph of the function $\sin(\theta)$. So I made a video that shows how the triangle and $\theta$ evolve with the location of $P$, and how $\sin(\theta)$ evolves accordingly–see Video 1.
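If you prefer code to videos, here is a quick numerical check of the definition (a Python sketch of my own, not from any reference): place a point on the unit circle, build the $HAO$ triangle of Figure 1, and confirm that the ratio $O/H$ reproduces the library sine. I use the library cosine and sine only to place the point $P$; the triangle ratios do the rest.

```python
import math

def sine_from_triangle(theta):
    """Compute sin(theta) from the HAO right triangle of Figure 1."""
    # Point P on the unit circle at angle theta
    x_p, y_p = math.cos(theta), math.sin(theta)
    # Side lengths: hypotenuse H (origin to P), opposite O ((x_p, 0) up to P)
    H = math.hypot(x_p, y_p)   # always 1 on the unit circle
    O = abs(y_p)
    # The ratio O/H is |sin(theta)|; the sign comes from which half-plane P is in
    return math.copysign(O / H, y_p)

# The triangle ratio agrees with the sine function at arbitrary angles
for theta in [0.3, 1.2, 2.5, 4.0, 5.9]:
    assert abs(sine_from_triangle(theta) - math.sin(theta)) < 1e-12
```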

#### Sine Waves: Trigonometric Signals

Now we have the function $\sin(\theta)$. We know from physics that entities like electromagnetic waves (inching closer to communication signals now) result from the solutions of differential equations, and these waves involve the trigonometric functions in both time and space. As usual, our concern here at the CSP Blog is functions of time (signals). So to arrive at the sinusoid, we have to introduce time somehow.

To introduce time, thereby connecting the sine function to the sine wave, let’s imagine again that point P on the unit circle as it moves, with constant angular speed, from $(1,0)$ to $(0,1)$ to $(-1,0)$ to $(0, -1)$. Only now we ask how fast is that point moving?

Let’s suppose it takes one second. Then the time interval $[0, 1]$ maps to the angle $\theta$ by

$\displaystyle \theta(t) = 2\pi t, \ \ \ t \in [0, 1]. \hfill (5)$

Our sine function, now as a function of time, is given by

$\displaystyle \sin(\theta(t)) = \sin(2\pi t), \ \ \ t \in [0, 1]. \hfill (6)$

Imagine traversing the unit circle indefinitely in this way, one second per traversal, or period, so that we map the entire real line $-\infty < t < \infty$ to radians. We can easily do this because the $\sin(\theta)$ function is periodic. The angles $\theta$ and $\theta + 2\pi K$ correspond to the same point $P$ on the circle, and therefore to the same right triangle, and therefore to the same sine value, since that value depends solely on the triangle. That is,

$\displaystyle \sin(\theta + 2\pi K) = \sin(\theta) \hfill (7)$

for all integers $K$, and so the sine function is periodic with period $2\pi$.
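A two-line numerical sanity check of the periodicity property (7), in Python (my own sketch):

```python
import math

# sin is periodic with period 2*pi: the angles theta and theta + 2*pi*K land on
# the same point of the unit circle, hence the same triangle, hence the same sine
theta = 0.7
for K in range(-5, 6):
    assert abs(math.sin(theta + 2 * math.pi * K) - math.sin(theta)) < 1e-9
```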

Our first sine wave is then

$x(t) = \sin(2\pi t), \ \ \ \forall t \hfill (8)$

Now we can consider what happens to the sine wave when we speed up or slow down our traversals around the unit circle. The first sine wave (8) corresponds to one second per traversal. If we want to change that to $T > 0$ seconds per traversal we need to modify the mapping (5) to

$\displaystyle \theta(t) = 2\pi t/T \hfill (9)$

which we can check by noting that $\theta(t)$ runs from 0 to $2\pi$ over the time interval $[0, T]$. Now we have the more general sine wave

$\displaystyle x(t) = \sin(2\pi t/T) \hfill (10)$

Since $\sin(\theta)$ is periodic with period $2\pi$, $\sin(2\pi t/T) = \sin(2\pi(t + KT)/T)$, which means $\sin(2\pi t/T)$ is periodic with period $T$. If $T = 1$, then the period of the sine-wave function is one.
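And a quick check that (10) really has period $T$ (a Python sketch of my own, with an arbitrarily chosen $T$):

```python
import math

T = 0.25  # seconds per unit-circle traversal

def x(t):
    # The general sine wave of Eq. (10)
    return math.sin(2 * math.pi * t / T)

# x(t + K*T) = x(t) for every integer K: the signal is periodic with period T
for t in [0.0, 0.1, 0.33, 1.7]:
    for K in (1, 2, 3):
        assert abs(x(t + K * T) - x(t)) < 1e-9
```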

#### Enter Frequency At Long Last

The frequency of the sine wave is simply the number of periods of the periodic sine-wave signal that span one second. By construction, the frequency of the sine wave (10) is the reciprocal of the period,

$\displaystyle f = 1/T. \hfill (11)$

The period measures the number of seconds per cycle (unit-circle traversal) and the frequency, therefore, measures the number of cycles per second. This is why the units of frequency are reciprocal seconds, $\mbox{\rm s}^{-1}$, which in the SI unit system we rename as Hertz. That’s what Soto-Bajo et al are gonna reflect on. Just wanted to get the ‘sine-wave’ part straight.
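To make ‘cycles per second’ concrete, here is a NumPy sketch (my own illustration) that literally counts the cycles of a 5-Hz sine wave over one second by counting its upward zero crossings; the small phase offset is only there to keep the crossings away from the endpoints of the interval.

```python
import numpy as np

f = 5.0                                   # frequency in Hz, so the period is T = 1/f = 0.2 s
t = np.linspace(0, 1, 100000, endpoint=False)
x = np.sin(2 * np.pi * f * t + 0.1)       # phase offset keeps crossings off the endpoints

# Each cycle contains exactly one upward zero crossing, so counting them over
# one second counts cycles per second, i.e., the frequency in Hz
upward = int(np.sum((x[:-1] < 0) & (x[1:] >= 0)))
assert upward == int(f)
```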

#### Infinity

Implicit in the development above (which is mine, so go ahead and throw rocks at it here at the CSPB rather than bugging some innocent geometer) is an infinity or two. When we drew the unit circle, created our right triangles, and defined the trigonometric functions, we steered clear of any mathematical difficulties like infinities or dividing by zero (except for that glancing reference to the $\tan(\theta)$ function). Once I said “imagine traversing the circle indefinitely” I smuggled in an infinity. Two, really, because we traverse the circle indefinitely far into the past AND indefinitely far into the future. And once we declared that $\sin(2\pi t)$ is a periodic function, then we are admitting the function is defined for all time, which is infinite.

So the very notion of the sine wave involves infinities: there aren’t $f$ cycles in some one-second stretches of time, there are $f$ cycles in each and every one-second interval of time.

#### Connection to Fourier

The complex version of the sine wave is the complex exponential

$\displaystyle s(t) = e^{i 2 \pi f t} = \cos(2\pi f t) + i\sin(2\pi f t) \hfill (12)$

and we’ve made great use of these complex sine waves in developing the Fourier series and Fourier transform in the Signal Processing ToolKit series. In both cases infinite numbers of infinite-duration sine waves are used to represent (write down an expression for) arbitrary signals.

With the Fourier series, we can represent a signal on a finite interval, say $[0, T]$ (but the particular interval doesn’t matter), as the possibly infinite sum of windowed complex sine waves,

$\displaystyle x(t) = \sum_{k=-\infty}^\infty c_k e^{i2 \pi (k/T) t}, \ \ \ t \in [0, T]. \hfill (13)$

I say “windowed” here because the restriction on $t$ means we don’t need access to the sine waves $e^{i 2 \pi (k/T)t}$ for all $t$, just those values of $t$ within the representation interval. On the other hand, in the special case where the signal to be represented is in fact periodic, the representation is valid for all $t$, and therefore the signal is equal to a possibly infinite sum of infinite-duration sine waves.
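The coefficients in (13) are not abstract hand-waving; they can be computed numerically. Here is a NumPy sketch (mine, with a made-up two-component test signal) that evaluates $c_k = \frac{1}{T} \int_0^T x(t) e^{-i 2 \pi (k/T) t}\, dt$ as a Riemann sum and recovers exactly the coefficients the signal was built from.

```python
import numpy as np

T = 1.0
t = np.linspace(0, T, 200000, endpoint=False)

# Test signal: coefficient 2.0 on harmonic k = 3 and 0.5 on harmonic k = -1
x = 2.0 * np.exp(1j * 2 * np.pi * 3 * t / T) + 0.5 * np.exp(-1j * 2 * np.pi * 1 * t / T)

def fourier_coeff(x, t, k, T):
    # Riemann-sum approximation of (1/T) * integral of x(t) exp(-i 2 pi (k/T) t) over [0, T]
    return np.mean(x * np.exp(-1j * 2 * np.pi * k * t / T))

assert abs(fourier_coeff(x, t, 3, T) - 2.0) < 1e-6    # recovers the k = 3 weight
assert abs(fourier_coeff(x, t, -1, T) - 0.5) < 1e-6   # recovers the k = -1 weight
assert abs(fourier_coeff(x, t, 2, T)) < 1e-6          # absent harmonics come back as zero
```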

With the Fourier transform, we analyze any integrable signal (all practical signals for instance) into an infinite collection of infinite-duration sine waves with infinitesimal amplitudes (leaving aside the special case of the Fourier transform of a periodic signal, such as ${\cal{F}}\left[e^{i 2 \pi f_0 t}\right] = \delta(f-f_0)$ ),

$\displaystyle x(t) = \int_{-\infty}^\infty e^{i2\pi f t} \left(X(f) df\right) \hfill (14)$

So when we go to apply Fourier-based tools to finite segments of real observed data, we have to be aware of the limitations of the tools–they are steeped in infinities and infinite-duration sine waves. This means that if we analyze, say, a windowed sine wave (a chunk of some sine wave in time), we shouldn’t expect to see an impulse function (transform) or one non-zero coefficient (series) and, moreover, we should not declare the tools incoherent if those expectations are not met. The rest of this post is essentially an elaboration on that idea.
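Here is that point in miniature (a NumPy sketch of my own): the FFT of a one-second chunk of a complex sine wave whose frequency is a harmonic of $1/T$ yields a single nonzero bin, while a frequency halfway between harmonics spreads its energy over every bin. The second result is not the tool being incoherent; it is the tool faithfully representing a windowed tone.

```python
import numpy as np

N = 1024                      # samples in a one-second window, so bin spacing is 1 Hz
t = np.arange(N) / N

# 5 Hz is a harmonic of 1/T: all the energy lands in one bin
on_bin = np.fft.fft(np.exp(2j * np.pi * 5.0 * t))
# 5.5 Hz is not a harmonic: the energy is spread across all bins ("leakage")
off_bin = np.fft.fft(np.exp(2j * np.pi * 5.5 * t))

nonzero_on = int(np.sum(np.abs(on_bin) > 1e-8))
nonzero_off = int(np.sum(np.abs(off_bin) > 1e-8))
assert nonzero_on == 1        # a single nonzero coefficient
assert nonzero_off == N       # every coefficient is nonzero
```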

### The Paper

Early on, in the introduction, we encounter this:

> In this regard, it is interesting to retrieve the discussion in [11] about the modeling of “real-world” signals. Concepts such as frequency, time limited, or band limited are helpful mathematical abstractions, but they are actually meaningless in practice.
>
> What do we exactly mean/understand by “frequency”? How is it related to oscillations? Is it appropriate for answering the key questions that arise in any discipline?

[R195]

‘Shots fired!’ as the kids say these days. Meaningless? Really? Or maybe, just maybe, the concepts are approximately true for data, more accurate in some signal-processing situations, less in others? I guess that kind of Perspective might not be worth publishing though.

Now I know that Soto-Bajo et al are biomedical researchers, and they definitely focus on EEG signals in the paper and in their work. I have significant experience with processing EEG signals as well (My Papers [36]). Yes, those signals are messy and don’t conform to chunks of exactly periodic signals. But I assert that the data from all disciplines are messy. That’s the nature of nature, and it holds even for the data in my primary field, which consists of signals created by, and captured by, manmade electronic devices. However, those signals are still perfectly natural electromagnetic waves–they are as much a part of the natural world as human speech. That is, fully a part thereof.

So when I see the question about ‘any discipline’ I think, well, yeah, frequency is extremely valuable for answering fundamental questions and creating highly useful and accurate data models (oh, just off the top of my head, how about cyclostationary signals?) in my discipline. And my discipline has “signal processing” right in the name, so it should be included in Soto-Bajo’s Perspective (review their title). But it isn’t, and this is my core problem with this paper.

The parochial and myopic Perspective of the authors is incorrectly inflated to cover all signal processing, and that is a fundamental error. The concept of frequency, the formalism of sine waves, and the Fourier-based tools are absolutely indispensable in all of my signal-processing work. And probably yours too, dear reader.

Also, how can concepts be simultaneously ‘helpful’ and ‘meaningless?’ Just ponder that one for a minute or two.

> Brain rhythms are characterized by frequency and amplitude ranges. They are oscillatory signals with a close-to-uniform oscillation rate, which fluctuates within a specific range. However, there is not a consensus about the limits of frequency bands (see Figure 1).
>
> This is not a rigorous definition, but a “naive” description. Despite being central, the use of frequency is merely intuitive.

[R195]

What could “merely intuitive” mean here? From the Fourier series development, we know that a signal on an interval with length $T$ can be represented (written as) a sum of weighted windowed sine waves whose frequencies are harmonics of the fundamental frequency $1/T$. That is rigorous. It is not intuitive. It is universally applicable–not just to EEG signals, but all manner of physical signals. Why do the authors push so hard to claim that a mathematical tool like the Fourier series is just some whimsical thought? I believe it is because they are simply unsatisfied with its operation on their data. But that is no good reason to denigrate the tool; it is a reason to reach for a different tool (I suggest wavelets here, which is what I applied to EEG data to good effect.)

Let’s take a look at Figure 1. I would expect a Perspectives paper to make great use of some excellent figures to really drive home the Perspective. The caption says that the upper plot is an EEG signal. Then the other plots are “two possible decompositions into brain rhythms.” Well, there are three plots after that EEG plot. In the second one down, the y-axis is labeled $\delta, \theta$, and yet there are four traces in the plot. What are they? How do they relate to the denigrated concept of frequency? We aren’t told. They look pretty good to me. What is wrong with them? Why are we to be suspicious of this “decomposition”? No one knows.

> The spectral representation of the signal is another function that relates to each frequency the amplitude corresponding to the basic oscillation at this frequency, that is, the strength with which this harmonic takes part in the composition of the signal (see Figure 2).

[R195]

So let’s take a look at Figure 2. We see the same EEG signal as in Figure 1, followed by something called a “Spectral Amplitude.” This middle plot should have been on a logarithmic scale, because you can’t see much of the function. But what is the point? Is the “Spectral Amplitude” wrong in some alarming sense? What are we to make of this? Are things good or bad? And what the \$%! is a Spectral Amplitude?

> The Fourier series theory deals with periodic signals and provides an effective way of analyzing and synthesizing them.

[R195]

Well, the Fourier series is a way to represent (write down; express) any function on a finite interval as a weighted sum of complex sine waves. If the signal happens to be periodic, then the representation is valid for all time; if not, it is valid only on the considered interval. So the theory does not just “deal with periodic signals.”

The authors then go on to complain about the fact that if you have a windowed sine wave (that is, a sine wave that is defined only on some finite interval, and the function is zero outside of that interval), then the Fourier series or Fourier transform for that windowed sine wave is not a single nonzero coefficient (series) or an impulse (transform) (their Eq. (5)).

“leaves a spurious trail …” I don’t think so. Look, if you have a sine wave on some interval $[t_1, t_2]$, $T = t_2-t_1$, and the frequency of that sine wave is not equal to a harmonic of $1/T$, then you should NOT expect that your Fourier series will have a single non-zero coefficient! (What I interpret “condense” to mean above.) The only time you should expect that (and they use “expect” above) is when the sine wave on the interval is equal to one of the Fourier-series basis (building-block) functions. In that case, the sine wave on the interval will be orthogonal to all Fourier sine waves except one. Otherwise, it won’t be, and in general you’ll need an infinite number of harmonics to properly represent the tone. This just comes from a basic understanding of orthogonal representations.

And “Fourier series scatters spectrum energy, giving rise to a completely spurious spectrum with no physical meaning at all (see Figure 3)” OMG! They aren’t mildly peeved here, they are shouting from the rooftops that the Fourier analysis is an impostor! It has no meaning! Fully spurious! What could all of us signal processors possibly be thinking using this tool for all these decades? Fools, all of us, to spew forth so much spurious spectra.

Alright, let me wipe the spittle from my lips, take a breath, and try to continue in a measured, reasonable voice.

So let’s look at Figure 3, then. It should drive home their idea that the Fourier spectrum (series or transform) has NO PHYSICAL MEANING. Now the signal is a square wave (blue line in the top plot) with period equal to one second. So we know that it can be represented by a sum of weighted sine waves whose frequencies are harmonics of 1 Hz. So, physically, we expect to see energy in the spectrum concentrated around 1, 2, 3, … Hz. [I don’t know why there are three curves in the second plot and only two signals to be analyzed.] But, yeah, that’s exactly what we see in the plot–energy concentrated at the expected harmonics. You can go back and review the Fourier series for various square waves (remember those odd and even square waves we looked at?) to see when the odd harmonics might be zero, and when the even ones might be zero.

How’s this for a stab at the physical meaning of the non-zero Fourier components of a square wave: I can generate the square wave in gnuradio-companion, transmit it out the transmit port of a physical SDR and route that to a bank of physical narrowband bandpass filters with center frequencies at odd multiples of one Hz, using several physical metal cables. Adding the outputs of these parallel bandpass filters, I will get a high-fidelity copy of the original square wave. The square wave is composed of nothing more or less than its harmonically related sine-wave components.
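You can run that filter-bank experiment in software too (a NumPy sketch of my own, standing in for the SDR, the filters, and the metal cables; the $4/(\pi k)$ amplitudes are the standard square-wave Fourier-series weights): sum the odd harmonics and the square wave reappears, with the usual Gibbs ringing confined to the neighborhoods of the jumps.

```python
import numpy as np

t = np.linspace(0, 2, 4000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * t))        # period-1 square wave

# Sum the odd harmonics with the known Fourier-series amplitudes 4/(pi*k)
partial = np.zeros_like(t)
for k in range(1, 200, 2):                     # k = 1, 3, 5, ..., 199 Hz
    partial += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)

# Away from the jump discontinuities the partial sum converges to the square wave
interior = np.abs(t % 0.5 - 0.25) < 0.2        # mask out the neighborhoods of the jumps
err = float(np.max(np.abs(partial[interior] - square[interior])))
assert err < 0.1
```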

At the end of the discussion in (my) Figure 5, we see this reference to the authors’ Figure 4 (my Figure 6):

> Furthermore, a recurrent compound waveform could have an intrinsic “frequency” related to that of its components to a greater or lesser extent (see Figure 4).

[R195]

The idea is that the authors add a low-level sine wave to a square wave and look at the “Spectral Amplitude” of the result in their Figure 4. The added sine wave has frequency 6 Hz and the square wave has period 1 second. We’ve already established that the spectral lines for the square wave occur at odd harmonics of the 1-Hz fundamental, so 1, 3, 5, … Hz (see my Figure 6). So if the “spectral amplitude” has any physical meaning, we should expect to see new energy at a frequency of 6 Hz, and indeed we do. So what is the problem? (However, once again the authors’ figures leave a lot to be desired. The blue line in the second plot down is barely distinguishable, and there are three lines, not two, which is mysterious to me.)

The authors often refer to kinds of frequencies: “consonant,” “dissonant,” and “discordant.” Hard to say exactly what they are. But I get the impression that the authors don’t really like any of them.

> But, why should we worry about dissonant frequencies? Does it make sense at all to distinguish between real frequencies and spurious ones? How to define independent or cooperative components and how to classify and/or cluster them? If one is only interested in decomposing and synthesizing signals, there is no major problem. But if the purpose is to get meaningful components, explanatory of the underlying constitutive mechanisms of the phenomenon, the previous concerns are justified.
>
> Discordant frequencies are not superfluous at all. In the EEG case, they can occur naturally, as the superposition of independent, perhaps asynchronous and uncoordinated, but concurrent, contributor oscillations. Orthogonality enables energy additivity, but it does not necessarily mean independence or uncorrelation. Frequency analysis can definitely distort the phenomenological nature of the signal. It is mathematically correct, but misinformative.

[R195]

This last quote is gibberish to me. “Orthogonality … does not necessarily mean … uncorrelation.” “cooperative components” “Discordant frequencies are not superfluous.” What are the good doctors trying to say? “Fourier analysis sometimes gives me results I don’t like,” maybe. “Fourier analysis sometimes gives me results I can’t understand,” probably.

> At first sight it seems reasonable that sudden jumps produce respective manifestations at high frequencies. But consider the square signal given by $\mbox{\rm sign}(\sin(2\pi t))$. How would you describe its periodicity, its oscillation, in terms of frequency? How do you think a neurophysiologist would do it? Now compute its Fourier transform and compare.

[R195]

For the first question, well, $\mbox{\rm sign}\left(\sin(2\pi t)\right)$ is a square wave with period one, and the authors have already plotted the “Spectral Amplitude” of this signal in several of their figures: It has energy at the odd harmonics of the 1-Hz fundamental, or 1, 3, 5, … Hz. So I would describe its periodicity as “periodic with period one,” I would describe it as oscillating between the constants +1 and -1 every half second, I would say it is made up of an infinite number of sine-wave components with harmonically related frequencies $k$ Hz for odd $k$, and I would say it is very well approximated by the sum of just a handful of those components. That is, it is composed of a countable number of sine waves with harmonically related frequencies, the fundamental being one Hz.
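And that answer can be checked numerically (another NumPy sketch of my own; the half-sample offset just avoids evaluating $\mbox{\rm sign}(0)$ exactly at the jumps): the spectrum of $\mbox{\rm sign}(\sin(2\pi t))$ computed over an integer number of periods has lines at 1, 3, 5, … Hz with amplitudes $4/(\pi k)$, and nothing at the even harmonics.

```python
import numpy as np

fs = 1000                                    # samples per second
t = (np.arange(8 * fs) + 0.5) / fs           # 8 whole periods; offset avoids sign(0)
x = np.sign(np.sin(2 * np.pi * t))

X = np.fft.fft(x) / len(x)                   # two-sided Fourier coefficients
freqs = np.fft.fftfreq(len(x), 1 / fs)

def one_sided_amp(f0):
    # Double the two-sided coefficient magnitude at the bin nearest f0
    return 2 * np.abs(X[np.argmin(np.abs(freqs - f0))])

assert abs(one_sided_amp(1.0) - 4 / np.pi) < 0.02         # fundamental: amplitude 4/pi
assert one_sided_amp(2.0) < 1e-6                          # even harmonics vanish
assert abs(one_sided_amp(3.0) - 4 / (3 * np.pi)) < 0.02   # third harmonic: 4/(3*pi)
```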

As for the second question in the quote, judging by this paper, my answer would be “Not very well at all.”

There’s more in the Perspective, but I think you get the idea.

My usual question applies. How the hell did this paper get past reviewers and an editor?