Fourier - Are sinusoids strictly required?

Solution 1:

This question has a very general answer, given by the Stone–Weierstrass theorem. In the setting where Fourier series apply, it says:

Let $C[a,b]$ be the space of continuous functions on the interval $[a,b]$ and let $A \subset C[a,b]$ be a set of functions closed under addition, multiplication, and scaling (that is, a subalgebra), with the following two properties:

  • For every $x \in [a,b]$, there is some $f \in A$ such that $f(x) \neq 0$, and
  • For every pair of distinct points $x, y \in [a,b]$ with $x \neq y$, there is some $f \in A$ such that $f(x) \neq f(y)$.

Then every function in $C[a,b]$ is the uniform limit of elements of $A$.

The connection with Fourier series is that we can take $A$ to be the set of functions generated by $\sin(x)$ and $\cos(x)$ on (in this case) $[-\pi, \pi]$ under addition, multiplication (including the $0$th power, which contributes the constant function $1$), and scaling. Since $\sin(x)$, $\cos(x)$, and $1$ never simultaneously vanish, and since $\sin(x)$ and $\cos(x)$ together distinguish any two points of $[-\pi, \pi)$, the set $A$ satisfies the conditions of the theorem. (One caveat: every trigonometric polynomial takes the same value at $-\pi$ and $\pi$, so one should identify these endpoints, i.e., work on the circle.) The theorem then shows that every continuous function on $[-\pi, \pi]$ with $f(-\pi) = f(\pi)$ is the uniform limit of "trigonometric polynomials": literally, polynomials in $\sin(x)$ and $\cos(x)$, which by trig identities (or complex exponentials, which is the same thing but easier) are the same as expressions of the form $$\sum_{n = 0}^N (a_n \cos(nx) + b_n \sin(nx)).$$
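
To see concretely that a polynomial in $\sin(x)$ and $\cos(x)$ really collapses to this standard form, here is a minimal numerical check in Python/NumPy (the particular product $\sin^2(x)\cos(x)$ is just an arbitrary example): the product-to-sum identities turn it into $\frac{1}{4}\cos(x) - \frac{1}{4}\cos(3x)$.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 1001)

# An arbitrary "polynomial in sin and cos"
poly = np.sin(x) ** 2 * np.cos(x)

# The same function in the standard Fourier form sum_n (a_n cos(nx) + b_n sin(nx)):
# product-to-sum identities give sin^2(x) cos(x) = cos(x)/4 - cos(3x)/4
fourier_form = 0.25 * np.cos(x) - 0.25 * np.cos(3 * x)

print(np.max(np.abs(poly - fourier_form)))  # ~1e-16, i.e. equal to machine precision
```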

Starting from other basic functions, the theorem gives a simple criterion for checking whether their "polynomials" can be used to approximate other functions. The advantage of Fourier series here is that the approximating polynomials are partial sums of a single infinite series, whose coefficients can be computed by an inner product (an integral). One can, for example, also approximate continuous functions with Bernstein polynomials, but as the degree of the polynomials approximating any one function increases, all the coefficients change, not just the highest ones. This is simply because the Bernstein polynomials do not form an orthogonal set with respect to an inner product. Plenty of other families of "orthogonal polynomials" exist, such as the Legendre polynomials, and they do not have this problem.
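
To illustrate the contrast, here is a small Python sketch using NumPy/SciPy (the target function $e^x$ on $[-1,1]$ is an arbitrary choice): each Legendre coefficient is a single inner product that stays fixed as the degree grows, while raising the degree of a Bernstein approximation replaces the entire list of coefficients.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

f = np.exp  # the function on [-1, 1] to approximate (an arbitrary choice)

def legendre_coeff(n):
    # c_n = (2n+1)/2 * integral of f * P_n over [-1, 1]: one inner product,
    # computed once, valid for every truncation degree
    val, _ = quad(lambda t: f(t) * eval_legendre(n, t), -1.0, 1.0)
    return (2 * n + 1) / 2.0 * val

def bernstein_coeffs(n):
    # Bernstein coefficients are samples of f at n+1 equally spaced nodes,
    # so the entire list changes whenever the degree n changes
    return [f(2.0 * k / n - 1.0) for k in range(n + 1)]

print([round(legendre_coeff(n), 4) for n in range(4)])  # fixed once and for all
print([round(c, 4) for c in bernstein_coeffs(3)])
print([round(c, 4) for c in bernstein_coeffs(4)])       # every coefficient moved
```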

In short, the scheme that makes Fourier series work can be generalized partially by replacing the trigonometric functions with an arbitrary orthonormal basis for some inner product on $C[a,b]$, and generalized even further by using any subalgebra of $C[a,b]$ that "separates points and vanishes nowhere"; but in the latter case the approximants are not as well behaved individually.
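
The finite-dimensional version of this generalized scheme is easy to see end to end. In the following NumPy sketch, a random orthonormal basis built from a QR factorization stands in for the trigonometric system: the coefficients are inner products, and truncating the expansion just drops tail terms without disturbing the earlier ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# Columns of Q form an arbitrary orthonormal basis of R^n
# (QR of a random matrix), playing the role of the trigonometric system.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

x = rng.standard_normal(n)
c = Q.T @ x  # coefficient k is the inner product <x, q_k>

# Truncating the expansion only drops tail terms; the earlier
# coefficients are untouched, just as with Fourier partial sums.
for m in (4, 8, 16):
    x_m = Q[:, :m] @ c[:m]
    print(m, np.linalg.norm(x - x_m))  # error shrinks to ~0 at m = n
```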

Solution 2:

That's a good question, and the answer can run deep and open up large areas of math. The Fourier transform (in its various incarnations) takes a function and writes it as a linear combination of basis vectors -- and the basis it uses is a special one, the Fourier basis, which is a basis of eigenvectors of the shift operator. It's possible to use different bases (perhaps a basis of eigenvectors of a different operator), and this will give you different transforms. This is related to the spectral theorem in linear algebra and in functional analysis.
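
To make the eigenvector statement concrete, here is a small NumPy sketch for the discrete case: the cyclic shift on $\mathbb{C}^N$ is a permutation matrix, and every DFT basis vector is an eigenvector of it, with an $N$-th root of unity as eigenvalue.

```python
import numpy as np

N = 8
# Cyclic shift (delay) operator: (S x)[n] = x[(n - 1) mod N]
S = np.roll(np.eye(N), 1, axis=0)

n = np.arange(N)
for k in range(N):
    f_k = np.exp(2j * np.pi * k * n / N) / np.sqrt(N)  # k-th DFT basis vector
    lam = np.exp(-2j * np.pi * k / N)                  # an N-th root of unity
    assert np.allclose(S @ f_k, lam * f_k)             # S f_k = lambda_k f_k

print("each DFT basis vector is an eigenvector of the cyclic shift")
```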

In applied math, the discrete Fourier transform isn't always the one we want. For example, to compress a signal we may prefer to use a wavelet transform or some other transform.

Solution 3:

Can there be another transform like Fourier, somewhere in the universe, that can express all signals as a sum of rectangular or triangular (or other periodic) shapes?

Yes, there are several popular transforms similar to the Fourier transform that use basis functions other than sines and cosines.

Every day, people watch digitized video or listen to digitized audio that was encoded using such transforms, usually without even realizing it.

For example,

  • The Fourier transform decomposes a signal into a sum of both infinitely-long sine waves and infinitely-long cosine waves.
  • The discrete cosine transform (DCT) is used in many image and video compression systems, including JPEG, MPEG, and Theora. The DCT decomposes a signal into a sum of cosine waves; a numerical sketch of the resulting energy compaction follows this list.
  • The modified discrete cosine transform (MDCT) is used in a few more recent image and video compression systems and in many audio compression systems, such as MP3, AAC, and Vorbis, since it reduces the block-boundary artifacts typical of DCT-based systems. The MDCT also decomposes a signal into a sum of cosine waves.
  • The discrete sine transform decomposes a signal into a sum of sine waves. I am unaware of any practical use.
  • The Hartley transform and the discrete Hartley transform (DHT) decompose a signal into a sum of cas waves, where $\operatorname{cas}(t) = \cos(t) + \sin(t)$. They seem to have several advantages over the Fourier transform, but I am unaware of any practical use.
  • The Walsh transform -- aka the Hadamard transform -- decomposes a signal into a sum of infinitely-long even and odd square waves. It is used in JPEG XR and H.264, and it also seems to be useful for quantum computing.
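
As a quick illustration of why the DCT appears in so many compression systems (the sketch promised in the DCT item above; the ramp test signal is an arbitrary choice), compare how the DFT and the DCT spread a signal's energy across coefficients:

```python
import numpy as np
from scipy.fft import dct, fft

# A smooth but non-periodic test signal: a simple ramp. The DFT implicitly
# repeats it periodically and sees a jump at the wrap-around point; the
# DCT's implicit even extension has no such jump.
x = np.linspace(0.0, 1.0, 64)

def energy_in_top(coeffs, k):
    e = np.sort(np.abs(coeffs) ** 2)[::-1]
    return e[:k].sum() / e.sum()

print(f"DFT: top 4 coefficients hold {energy_in_top(fft(x), 4):.4f} of the energy")
print(f"DCT: top 4 coefficients hold {energy_in_top(dct(x, norm='ortho'), 4):.4f} of the energy")
```

On this ramp the DCT packs essentially all of the energy into a few coefficients, which is exactly what a compressor wants to quantize and store.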

Time-frequency transforms are useful for making waterfall plots and spectrograms, which are used in a variety of fields: speech training, studies of animal sounds, radar, and seismology; ham radio operators use them to discover what frequencies and protocols people are using and to decode very slow Morse code "by eye"; and so on.

  • The short-time Fourier transform is a time-frequency transform that decomposes a signal into a sum of short pieces of sine and cosine waves, each piece localized in time.
  • Various wavelet transforms are an even more popular family of time-frequency transforms. The simplest is the Haar transform, whose mother wavelet is composed of two square pulses; a hand-rolled sketch follows this list.
  • The chirplet transform is another time-frequency transform.
  • The fractional Fourier transform (FRFT) is yet another; I find it quite surprising that such a thing is even possible, but I am unaware of any practical use.
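
Since the Haar transform is so simple, it is worth writing out by hand. Below is a minimal NumPy sketch (not a production wavelet library; the single-spike signal is an arbitrary example): each level keeps pairwise averages and pairwise differences, and only the spans containing the spike get nonzero detail coefficients -- precisely the time localization that infinitely-long sinusoids lack.

```python
import numpy as np

def haar_step(x):
    # One level: pairwise averages (coarse part) and pairwise differences
    # (detail part), scaled by 1/sqrt(2) to keep the transform orthonormal.
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    diff = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, diff

def haar(x):
    # Full decomposition of a length-2^k signal: recurse on the averages,
    # collecting detail coefficients from fine to coarse.
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) > 1:
        x, d = haar_step(x)
        details.append(d)
    return x, details[::-1]  # overall (scaled) average, details coarse-to-fine

signal = np.array([0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0])  # a single spike
approx, details = haar(signal)
print(approx)
for d in details:
    print(d)  # nonzero details appear only on spans containing the spike
```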