What should we understand from the definition of orthogonality in inner product spaces other than $\mathbb R^n$?
At the beginning of a linear algebra course, we meet vectors in $\mathbb R^n$ and the dot product is introduced. We learn that if the dot product of two vectors is zero, the vectors are called orthogonal, and there is a right angle between them. So we come to understand orthogonality as perpendicularity.
In later chapters, these notions are generalized: abstract vector spaces and inner products are introduced, and we learn that vectors $\vec u$ and $\vec v$ in an inner product space are orthogonal if $\langle \vec u, \vec v \rangle=0$. Our earlier experience leads us to look for a right angle between them.
Let $P$ be the vector space of polynomials of degree at most one. Let $p(x)=ax+b$ and $q(x)=cx+d$. Define $\langle p(x), q(x) \rangle$ as
$$\langle p(x), q(x) \rangle=ac+bd$$
Then for $p(x)=x-2$ and $q(x)=4x+2$ we find that $\langle p(x), q(x) \rangle=0$. But if we draw their graphs, we see an angle of about $30.96$ degrees between the two lines. So orthogonality in an inner product space does not necessarily mean perpendicularity. If not perpendicularity, what should we understand from orthogonality in inner product spaces?
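A quick numerical check makes both computations concrete (a minimal Python sketch; encoding $ax+b$ as the pair $(a,b)$ is just my convenience):

```python
import math

# Polynomials p(x) = a*x + b are encoded as coefficient pairs (a, b);
# the inner product from the question is <p, q> = a*c + b*d.
def inner(p, q):
    (a, b), (c, d) = p, q
    return a * c + b * d

p = (1, -2)   # p(x) = x - 2
q = (4, 2)    # q(x) = 4x + 2

print(inner(p, q))            # 0 -> orthogonal in this inner product

# The angle between the *graphs* depends only on the slopes of the lines.
theta = abs(math.atan(q[0]) - math.atan(p[0]))
print(math.degrees(theta))    # ~30.96 degrees, not 90
```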
Solution 1:
The evolution leading to the definition of an inner product was a long process that took over a century. It started around 1760 with the observation that the functions $$ 1,\cos x,\sin x,\cos 2x,\sin 2x,\cos 3x,\sin 3x,\cdots $$ had the property that if you integrated any two different ones against each other over $[-\pi,\pi]$, you got $0$ for the answer. In that way, mathematicians figured out that if a function $f$ could be written as $$ f = a_0 1 + a_1 \cos x + b_1 \sin x + a_2 \cos 2x + b_2 \sin 2x + \cdots, $$ then you could multiply both sides by one of the functions, integrate, and all of the terms on the right would drop out except one. For example, multiplying by $\sin 3x$ gives $$ \int_{-\pi}^{\pi}f(x)\sin 3x\,dx= b_3\int_{-\pi}^{\pi}\sin 3x\sin 3x\,dx, $$ which would then tell you what $b_3$ would have to be: $$ b_3=\frac{\int_{-\pi}^{\pi}f(x)\sin 3x\,dx}{\int_{-\pi}^{\pi}\sin 3x\sin 3x\,dx}. $$ That was an amazing observation: it took a seemingly intractable equation in infinitely many variables and allowed one to isolate each variable. All of that happened in the late 1700s and early 1800s.
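These orthogonality relations and the coefficient-extraction trick are easy to verify numerically. Here is a small sketch (the quadrature grid and the test function are my own choices, not part of the original argument):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]

def integrate(values):
    # composite trapezoid rule over [-pi, pi]
    return float(np.sum(0.5 * (values[1:] + values[:-1])) * dx)

# Any two *different* functions from 1, cos x, sin x, cos 2x, ... integrate to ~0:
print(integrate(np.cos(2 * x) * np.sin(3 * x)))   # ~0
print(integrate(np.cos(x) * np.cos(4 * x)))       # ~0

# Build f with a known coefficient b3 = 2, then recover it by the formula above.
f = 2.0 * np.sin(3 * x) + 0.5 * np.cos(x) - 1.0
b3 = integrate(f * np.sin(3 * x)) / integrate(np.sin(3 * x) ** 2)
print(b3)                                          # ~2.0
```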
It was later discovered that such sequences of "orthogonal" functions were not so unusual, which was surprising in itself. It was realized that the operation of isolating coefficients was analogous to what happens in Euclidean space with the dot product. Mathematicians took the incredible leap of beginning to view functions as points in an infinite-dimensional space, with geometry and distance defined on them. That started when Hilbert gave an axiomatic definition of an inner product space in the early 1900s, about 140 to 150 years after the first observations.
You can think in terms of orthogonal projection, treat functions as points in a space, and interpret convergence of functions as convergence in the distance between points of that space (the points just happen to be functions). Everything works out. For a sequence of orthogonal functions, each function corresponds to a vector of coefficients, $f\sim (a_0,a_1,b_1,a_2,b_2,\cdots)$ and $g\sim (a_0',a_1',b_1',a_2',b_2',\cdots)$, and you can think of yourself as working in $\mathbb{R}^{\infty}$. Dot products work the same way: $$ \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)g(x)\,dx=2a_0a_0'+a_1a_1'+b_1b_1'+a_2a_2'+b_2b_2'+\cdots, $$ and so do norms: $$ \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)^{2}\,dx = 2a_0^{2}+a_1^{2}+b_1^{2}+a_2^{2}+b_2^{2}+\cdots. $$ Orthogonal projection and closest-point approximation work just as they do in Euclidean space; you simply deal with infinite sums of coordinates instead of finite-dimensional vectors. So this becomes the study of the geometry of an infinite-dimensional space, and you soon forget that the points are functions or other complicated objects, because the mathematics is the same. The inner product is an abstraction of the Euclidean dot product; it does not correspond in any simple way to the graphs of the functions, or to any other obvious geometric relation between them. There is an integral that works like a dot product, and you proceed by analogy: an abstract inner product defined on the points of an abstract vector space, which looks like Euclidean space in infinite dimensions.
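As a sanity check on the coefficient formula, here is a small numerical sketch for trigonometric polynomials with only finitely many nonzero coefficients (the particular coefficient vectors are my own illustrative choices):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]

def integrate(values):
    # composite trapezoid rule over [-pi, pi]
    return float(np.sum(0.5 * (values[1:] + values[:-1])) * dx)

# f ~ (a0, a1, b1) = (1, 2, -1),  g ~ (a0', a1', b1') = (3, 0, 5)
f = 1 + 2 * np.cos(x) - np.sin(x)
g = 3 + 0 * np.cos(x) + 5 * np.sin(x)

lhs = integrate(f * g) / np.pi
rhs = 2 * (1 * 3) + (2 * 0) + (-1 * 5)   # 2*a0*a0' + a1*a1' + b1*b1'
print(lhs, rhs)                           # both ~1.0
```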
Solution 2:
The angle between the graphs is not the angle between the polynomials. That definition of the inner product is telling you how, in this case, the angle between first-degree polynomials is to be defined.
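To make that concrete (this is the standard angle induced by any inner product, not something special to this example): the angle $\theta$ between $p$ and $q$ is defined by $$\cos\theta=\frac{\langle p,q\rangle}{\|p\|\,\|q\|},$$ so for $p(x)=x-2$ and $q(x)=4x+2$ we get $\cos\theta = 0/(\sqrt{5}\cdot\sqrt{20})=0$, that is, $\theta=90^\circ$. In the geometry this inner product puts on $P$, the two polynomials really are perpendicular; it is only their graphs in the plane that meet at about $30.96^\circ$.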
You will probably see later in the course situations where vector spaces other than $\mathbb R^n$ can be given an inner product (that is, can have "angle" defined for them) in such a way that it helps in solving some problem or other.
One advantage of abstract mathematics is that a general definition can be made once and then specialized to solve many different problems. But this kind of abstraction takes time and effort for beginning mathematics students to absorb.
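For one concrete instance of an inner product earning its keep (a hedged sketch; the function $e^x$, the interval $[0,1]$, and the basis are all my own choices): under $\langle f,g\rangle=\int_0^1 f(x)g(x)\,dx$, finding the best straight-line approximation of $e^x$ in the least-squares sense is just an orthogonal projection.

```python
import numpy as np

x = np.linspace(0, 1, 100001)
dx = x[1] - x[0]

def inner(f, g):
    # <f, g> = integral of f*g over [0, 1], composite trapezoid rule
    v = f * g
    return float(np.sum(0.5 * (v[1:] + v[:-1])) * dx)

f = np.exp(x)
e1 = np.ones_like(x)   # e1 = 1 and e2 = x - 1/2 are orthogonal:
e2 = x - 0.5           # <e1, e2> = 0 under this inner product

# Orthogonal projection of f onto span{e1, e2} = the least-squares line.
p = inner(f, e1) / inner(e1, e1) * e1 + inner(f, e2) / inner(e2, e2) * e2
print(p[0], p[-1] - p[0])   # the line is roughly 0.873 + 1.690*x
```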