What does it mean when two functions are "orthogonal", why is it important?

I have often come across the concept of orthogonality and orthogonal functions, e.g. in Fourier series the basis functions are cosine and sine, and they are orthogonal. For vectors, being orthogonal means that they are actually perpendicular, such that their dot product is zero. However, I am not sure how sine and cosine are actually orthogonal. They are 90° out of phase, but there must be a different reason why they are considered orthogonal. What is that reason? Does being orthogonal really have something to do with geometry, i.e. 90-degree angles?

Why do we want to have orthogonal things so often in maths? Especially with transforms like the Fourier transform, we want to have an orthogonal basis. What does that even mean? Is there something magical about things being orthogonal?


The concept of orthogonality for functions is a more general version of orthogonality for vectors. Orthogonal vectors are geometrically perpendicular because their dot product is equal to zero. When you take the dot product of two vectors you multiply their entries and add them together; but if you wanted to take the "dot" or inner product of two functions, you would treat them as though they were vectors with infinitely many entries, so taking the dot product becomes multiplying the functions together and then integrating over some interval. It turns out that for the inner product (for an arbitrary positive real number $L$) $$\langle f,g\rangle = \frac{1}{L}\int_{-L}^Lf(x)g(x)\,dx$$ the functions $\sin(\frac{n\pi x}{L})$ and $\cos(\frac{n\pi x}{L})$ with natural numbers $n$ form an orthogonal basis. That is, $\langle \sin(\frac{n\pi x}{L}),\sin(\frac{m\pi x}{L})\rangle = 0$ if $m \neq n$ and equals $1$ otherwise (the same goes for cosine, and every sine is orthogonal to every cosine). So when you express a function as a Fourier series, you are really performing an orthogonal projection: you project the function onto the basis of sine and cosine functions, and the inner products give you the coefficients. I hope this answers your question!
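If you want to check this numerically, here is a minimal SciPy sketch of that inner product (the half-period $L = 2$ and the small indices $n, m$ are just illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

L = 2.0  # any positive half-period works; 2.0 is an arbitrary choice

def inner(f, g, L):
    """Inner product <f, g> = (1/L) * integral of f(x) g(x) over [-L, L]."""
    val, _ = quad(lambda x: f(x) * g(x), -L, L)
    return val / L

def sin_basis(n, L):
    return lambda x: np.sin(n * np.pi * x / L)

for n in (1, 2, 3):
    for m in (1, 2, 3):
        print(n, m, round(inner(sin_basis(n, L), sin_basis(m, L), L), 6))
# Prints approximately 1.0 when n == m and 0.0 when n != m.
```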


Vectors are not orthogonal because they have a $90$ degree angle between them; that is just a special case. Orthogonality is really defined with respect to an inner product: two elements are orthogonal if their inner product is zero. It just happens that for the standard inner product on $\mathbb{R}^3$, orthogonal vectors have a $90$ degree angle between them. We can define lots of other inner products, and each one gives its own notion of orthogonality. In the case of Fourier series the inner product is:

$$ \langle \, f ,g\rangle = \int_{-\pi}^{\pi} f(x) g(x)^* dx$$

and indeed $\langle \sin ,\cos\rangle = \int_{-\pi}^{\pi} \sin(x) \cos(x) dx = 0 $ as the integrand is odd.
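A small aside: you can also verify this symbolically. A minimal SymPy sketch, where the particular integrands are just illustrative and the conjugate is dropped because everything here is real-valued:

```python
import sympy as sp

x = sp.symbols('x')

# Inner product on [-pi, pi] for real-valued functions (conjugation is trivial here).
def inner(f, g):
    return sp.integrate(f * g, (x, -sp.pi, sp.pi))

print(inner(sp.sin(x), sp.cos(x)))    # 0  -- odd integrand, as stated above
print(inner(sp.sin(x), sp.sin(2*x)))  # 0  -- different frequencies are orthogonal
print(inner(sp.sin(x), sp.sin(x)))    # pi -- a function is not orthogonal to itself
```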

And yes, there is something special about things being orthogonal. In the case of Fourier series we have an orthonormal basis $e_0(x), \dots, e_n(x),\dots$ (orthogonal, with each $e_k$ normalized to have norm $1$) of the space of $2\pi$-periodic functions. Given any function $f$, if we want to write $f$ in this basis, we can compute the coefficient of each basis element simply by calculating an inner product. Since:

$$ f(x) = \sum_{k= 0}^{\infty} a_k e_k(x)$$

$$ \langle \, f ,e_i\rangle = \int_{-\pi}^{\pi} f(x) e_i(x)^* dx = \int_{-\pi}^{\pi} \sum_{k= 0}^{\infty} a_k e_k(x) e_i(x)^* dx $$

$$ = \sum_{k= 0}^{\infty} a_k \int_{-\pi}^{\pi} e_k(x) e_i(x)^* dx$$

And now, magically, by orthonormality:

$$ = \sum_{k= 0}^{\infty} a_k \delta_{i,k} = a_i$$

So we can write any function directly in terms of this orthonormal basis:

$$ f(x) = \sum_{k= 0}^{\infty} \langle \, f ,e_k\rangle e_k(x)$$
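To make this recipe concrete, here is a small numerical sketch, assuming the orthonormal basis $\frac{1}{\sqrt{2\pi}}, \frac{\cos kx}{\sqrt{\pi}}, \frac{\sin kx}{\sqrt{\pi}}$ of $L^2[-\pi,\pi]$, an arbitrary test function $f(x) = x$, and only five harmonics:

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    """Inner product <f, g> = integral of f(x) g(x) over [-pi, pi] (real-valued case)."""
    val, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi)
    return val

f = lambda x: x  # arbitrary square-integrable test function

# Orthonormal basis of L^2[-pi, pi]: 1/sqrt(2*pi), cos(kx)/sqrt(pi), sin(kx)/sqrt(pi).
basis = [lambda x: 1.0 / np.sqrt(2 * np.pi)]
for k in range(1, 6):
    basis.append(lambda x, k=k: np.cos(k * x) / np.sqrt(np.pi))
    basis.append(lambda x, k=k: np.sin(k * x) / np.sqrt(np.pi))

coeffs = [inner(f, e) for e in basis]            # a_k = <f, e_k>
partial = lambda x: sum(a * e(x) for a, e in zip(coeffs, basis))

# With only 5 harmonics (and a jump at +-pi) the match is rough but already close.
print(round(partial(1.0), 4), "vs f(1.0) =", f(1.0))
```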


Orthogonality, as you seem to be aware, comes originally from geometry. We talk about two vectors (by which I mean directed line segments) being orthogonal when they form a right angle with each other. When you have orthogonal vectors, you can apply things like Pythagoras's Theorem, which is quite a neat theorem when you think about it, and should hint at some of the power of orthogonality.

The dot product allows us to talk about orthogonality a little more algebraically. Rather than considering directed line segments, we can consider elements of $\mathbb{R}^n$ instead. Orthogonality translates into the dot product equaling zero.
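For instance, with a pair of orthogonal vectors picked arbitrarily for illustration, the dot product is zero and Pythagoras's theorem shows up as an equality of squared lengths:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 1.0, -2.0])  # chosen so that u . v = 1*2 + 2*1 + 2*(-2) = 0

print(np.dot(u, v))                                       # 0.0 -> orthogonal
print(np.dot(u + v, u + v), np.dot(u, u) + np.dot(v, v))  # 18.0 18.0 -> Pythagoras
```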

Now, the orthogonality you see when studying Fourier series is a different type again. There is a very common, widely-used concept of a vector space, which is an abstract set with some operations on it, that satisfies something like $9$ axioms, which ensures it works a lot like $\mathbb{R}^n$ in many respects. We can do things like add the "vectors" and scale the "vectors" by some constant, and it all behaves very naturally. The set of real-valued functions on any given set is an example of a vector space. It means we can treat functions much like vectors.

Now, if we can treat functions like vectors, perhaps we can also do some geometry with them, and define an equivalent concept of a dot product? As it turns out, on certain vector spaces of functions, we can define an equivalent notion to a dot product, where we can "multiply" two "vectors" (read: functions) to give back a scalar (a real number). Such a product is called an "inner product", and it too is defined by a handful of axioms, to make sure it behaves how we'd expect. We define two "vectors" to be orthogonal if their inner product is equal to $0$.

When studying Fourier series, you're specifically looking at the space $L^2[-\pi, \pi]$ of square-(Lebesgue)-integrable functions on $[-\pi, \pi]$, which has an inner product $$\langle f, g \rangle := \int_{-\pi}^{\pi} f(x)g(x) \,\mathrm{d}x.$$ To say functions $f$ and $g$ are orthogonal means to say the above integral is $0$. A Fourier series is just a way of expressing a function in $L^2[-\pi, \pi]$ as an infinite sum of orthogonal functions.

Now, we use orthogonality of functions because it actually produces really nice results. Fourier series are a very efficient way of approximating functions, and very easy to work with in terms of calculation. Things like Pythagoras's theorem still hold, and turn out to be quite useful! If you want to know more, I suggest studying Fourier series and/or Hilbert Spaces.
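For example, here is a minimal numerical sketch, using $\sin$ and $\cos$ on $[-\pi, \pi]$ as the two orthogonal "vectors", showing that the Pythagorean identity $\|f+g\|^2 = \|f\|^2 + \|g\|^2$ carries over to $L^2$:

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    """L^2 inner product on [-pi, pi] for real-valued functions."""
    val, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi)
    return val

f, g = np.sin, np.cos
h = lambda x: f(x) + g(x)

print(round(inner(f, g), 6))                                       # ~0: orthogonal
print(round(inner(h, h), 6), round(inner(f, f) + inner(g, g), 6))  # both ~2*pi
```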