What really is "orthogonality"?
I know that we can define two vectors to be orthogonal only if they are elements of a vector space with an inner product.
So, if $\vec x$ and $\vec y$ are elements of $\mathbb{R}^n$ (as a real vector space), we can say that they are orthogonal iff $\langle \vec x,\vec y\rangle=0$, where $\langle \vec x,\vec y\rangle $ is an inner product.
Usually the inner product is defined with respect to the standard basis $E=\{\hat e_1,\hat e_2 \}$ (taking $n=2$ to simplify notation); the standard definition is: $$ \langle \vec x,\vec y\rangle_E=x_1y_1+x_2y_2 $$ where $$ \begin{bmatrix} x_1\\x_2 \end{bmatrix} =[\vec x]_E \qquad \begin{bmatrix} y_1\\y_2 \end{bmatrix} =[\vec y]_E $$ are the components of the two vectors in the standard basis and, by definition of the inner product, $\hat e_1$ and $\hat e_2$ are orthonormal.
Now, if $\vec v_1$ and $\vec v_2$ are linearly independent, the set $V=\{\vec v_1,\vec v_2\}$ is a basis and we can express any vector in this basis with a pair of components: $$ \begin{bmatrix} x'_1\\x'_2 \end{bmatrix} =[\vec x]_V \qquad \begin{bmatrix} y'_1\\y'_2 \end{bmatrix} =[\vec y]_V $$ from which we can define an inner product: $$ \langle \vec x,\vec y\rangle_V=x'_1y'_1+x'_2y'_2 $$
Obviously we have: $$ [\vec v_1]_V= \begin{bmatrix} 1\\0 \end{bmatrix} \qquad [\vec v_2]_V= \begin{bmatrix} 0\\1 \end{bmatrix} $$ and $\{\vec v_1,\vec v_2\}$ are orthogonal (and normal) for the inner product $\langle \cdot,\cdot\rangle_V$.
This means that any two linearly independent vectors are orthogonal with respect to a suitable inner product defined by a suitable basis. So orthogonality seems to be a "coordinate-dependent" concept.
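To see this concretely, here is a minimal numerical sketch (assuming NumPy; `coords` and `basis_inner` are hypothetical helper names chosen for illustration). It computes the components of a vector in the basis $V$ by solving a linear system, then takes the standard dot product of the component vectors, which is exactly the $\langle\cdot,\cdot\rangle_V$ defined above:

```python
import numpy as np

def coords(B, x):
    """Coordinates of x in the basis given by the columns of B."""
    return np.linalg.solve(B, x)

def basis_inner(B, x, y):
    """Inner product <x, y>_V induced by the basis B: the standard
    dot product of the coordinate vectors [x]_V and [y]_V."""
    return coords(B, x) @ coords(B, y)

v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])        # linearly independent, but not orthogonal for the dot product
B = np.column_stack([v1, v2])

print(v1 @ v2)                   # 1.0: not orthogonal in the standard sense
print(basis_inner(B, v1, v2))    # 0.0: orthogonal for <.,.>_V
print(basis_inner(B, v1, v1))    # 1.0: v1 is even normalized for <.,.>_V
```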
The question is: is my reasoning correct? And, if so, what makes the usual standard basis so special that we choose it for the usual definition of orthogonality?
Let me add something to better illustrate my question.
If my reasoning is correct then, for any basis of a vector space, there is an inner product such that the vectors of the basis are orthogonal. If we think of vectors as oriented segments (in a purely geometrical sense), this seems to contradict our intuition of what "orthogonal" means, and also a geometric definition of orthogonality. So why does what we call the "standard basis" seem to be in accord with intuition while other bases are not?
Solution 1:
To expand a bit on Daniel Fischer’s comment, coming at this from a different direction might be fruitful. There are, as you’ve seen, many possible inner products. Each one determines a different notion of length and angle—and so orthogonality—via the formulas with which you’re familiar. There’s nothing inherently coordinate-dependent here. Indeed, it’s often possible to define inner products in a coordinate-free way. For example, for vector spaces of functions on the reals, $\int_0^1 f(t)g(t)\,dt$ and $\int_{-1}^1 f(t)g(t)\,dt$ are commonly-used inner products. The fact that there are many different inner products is quite useful. There is, for instance, a method of solving a large class of interesting problems that involves orthogonal projection relative to one of these “non-standard” inner products.
Now, when you try to express an inner product in terms of vector coordinates the resulting formula is clearly going to depend on the choice of basis. It turns out that for any inner product one can find a basis for which the formula looks just like the familiar dot product.
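As a sketch of that last claim (assuming NumPy, and assuming the inner product is represented by a symmetric positive-definite matrix $G$ as $\langle x,y\rangle = x^TGy$): a Cholesky factorization $G=LL^T$ exhibits a change of coordinates $x\mapsto L^Tx$ in which the inner product becomes the familiar dot product.

```python
import numpy as np

# An arbitrary symmetric positive-definite matrix defines an inner
# product <x, y> = x^T G y on R^2.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])
L = np.linalg.cholesky(G)          # lower-triangular L with G = L @ L.T

x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])

lhs = x @ G @ y                    # the inner product defined by G
rhs = (L.T @ x) @ (L.T @ y)        # plain dot product in the new coordinates
print(lhs, rhs)                    # equal up to rounding
```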
You might also want to ask yourself what makes the standard basis so “standard?” If your vector space consists of ordered tuples of reals, then there’s a natural choice of basis, but what about other vector spaces? Even in the Euclidean plane, there’s no particular choice of basis that stands out a priori. Indeed, one often chooses an origin and coordinate axes so that a problem takes on a particularly simple form. Once you’ve made that choice, then you can speak of a “standard” basis for that space.
Solution 2:
Your reasoning is good, and, as Daniel Fischer said, the relation "is orthogonal to" depends only on your inner product.
What makes the standard basis $$\mathcal{C}=(e_1,\dots,e_n) \hspace{0.5cm}\text{ with }\hspace{0.5cm} e_i=(0,\dots,0,\underset{i\text{-th place}}{\underbrace{1}},0,\dots,0)$$ special when you are working in $\mathbb{R}^n$ with an inner product $\langle\cdot,\cdot\rangle$ is that the following two points hold:
1. $\forall x\in\mathbb{R}^n,$ $[x]_\mathcal{C}={}^tx$, i.e. the coordinates of $x$ in $\mathcal{C}$ are just the entries of $x$ itself;
2. If $\mathcal{F}=(f_1,\dots,f_n)$ is an orthonormal basis of $(\mathbb{R}^n,\langle\cdot,\cdot\rangle)$ (which always exists by the Gram-Schmidt process), then: $$\forall u,v\in\mathbb{R}^n,\quad \langle u,v\rangle={}^t[u]_\mathcal{F}\,[v]_\mathcal{F}=\big\langle [u]_\mathcal{F},[v]_\mathcal{F}\big\rangle_\mathcal{C}.$$
In other words: the coordinates of your vectors in this basis are easy to compute, and, once you have chosen a "good" basis, your inner product can always be written as the standard inner product of the coordinate vectors, as the sketch below illustrates.
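Here is a minimal sketch of that second point (assuming NumPy; `inner` and `gram_schmidt` are illustrative helper names, not a library API). Gram-Schmidt, run with respect to an inner product $\langle x,y\rangle = x^TGy$ for a symmetric positive-definite $G$, produces an orthonormal basis $\mathcal{F}$, and in $\mathcal{F}$-coordinates the inner product becomes the standard dot product:

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])         # defines <x, y> = x^T G y

def inner(x, y):
    return x @ G @ y

def gram_schmidt(vectors):
    """Orthonormalize `vectors` with respect to `inner`."""
    basis = []
    for v in vectors:
        for b in basis:
            v = v - inner(v, b) * b           # remove the component along b
        basis.append(v / np.sqrt(inner(v, v)))  # normalize for <.,.>
    return np.column_stack(basis)

F = gram_schmidt([np.array([1.0, 0.0]), np.array([0.0, 1.0])])

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])
u_F = np.linalg.solve(F, u)        # coordinates [u]_F
v_F = np.linalg.solve(F, v)
print(inner(u, v), u_F @ v_F)      # equal up to rounding
```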
Edit: For the geometric part, let me expand on my comment: when you represent a vector in the plane, what you really do is choose a basis and draw the (vector of) coordinates in the plane. (You appreciate this process better when you consider vector spaces other than $\mathbb{R}^n$, where you cannot immediately see the vectors as $n$-tuples and have to choose a basis to view things in coordinates at all.)
Your question then becomes:
Which basis should I choose so that orthogonality looks the way I am used to seeing it, i.e. so that my vectors form a "right angle"?
(The proper definition of angle comes from Euclidean geometry; here I assume we agree on what we want to see in the drawing.) The answer is the second point noted above: we are in the usual situation of $\mathbb{R}^n$ with its usual inner product exactly when we use an orthonormal basis for your inner product.
For example, consider $(v_1,v_2)=\big((1,0),(1,1)\big)$ and the inner product $\langle x,y\rangle_V={}^t[x]_V[y]_V$. Since $v_1$ and $v_2$ are not orthogonal in $\mathbb{R}^2$ for the usual inner product, they will not appear as orthogonal vectors when drawn in the standard basis (the "angle" between the two being $45^\circ$); but if you draw them in the basis $(v_1,v_2)$, which is orthonormal for your inner product, they will appear as orthogonal vectors.
Solution 3:
The unit basis vectors $e_1,e_2,\dots,e_n$ form an orthonormal basis.
Let us take a look at what this requires:
- all elements of $M:=\{e_1,e_2,\dots,e_n\}$ are linearly independent
- the vectors $e_1,e_2,\dots,e_n$ are pairwise orthogonal, so their dot product satisfies $\langle e_i,e_k \rangle=0$ for all $i,k \leq n$ with $i\neq k$
- they are normalized, so $||e_i||=1$ for all $i$
You are right, there are other bases that fulfill these criteria; in fact, there are uncountably many of them. But what makes the standard basis so appealing?
- They are easy to calculate
For each vector $e_i$ you set all but one position to zero; the only requirement on the single nonzero entry is that its position differs from that of the other unit vectors.
If you tried to create a basis starting from a non-unit vector $v$, you would have to ensure by hand that the resulting vectors are linearly independent, pairwise orthogonal, and normalized.
- They have good numerical properties. You will not run into rounding errors while calculating $||e_i||$, which is a mighty advantage (see the sketch below).
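As a small illustration of that numerical point (assuming NumPy): the norm of a standard unit vector is exactly $1$ in floating-point arithmetic, while a vector normalized by hand typically misses $1$ by a rounding error.

```python
import numpy as np

e1 = np.array([1.0, 0.0])
print(np.linalg.norm(e1) == 1.0)   # True: no rounding error possible

v = np.array([1.0, 1.0])
u = v / np.linalg.norm(v)          # normalize a non-unit vector by hand
print(np.linalg.norm(u))           # typically off from 1.0 by ~1e-16
```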
Furthermore, you ask whether the dot product is a coordinate-dependent concept. Remember that a vector $v=\left( \begin{array}{c} a\\ b\\ \end{array} \right)$ has a magnitude and a direction, so the entries $a,b$ directly determine the direction in which $v$ points. In that sense, yes, the dot product is as coordinate-dependent as the overall appearance of the vector is.
But what happens if you convert your vector $v$ to polar coordinates, so that $v=(\text{angle}, \text{length})$? What would "coordinate dependent" mean now?
So instead of saying the dot product is coordinate dependent, say it is magnitude and angle dependent. That is, after all, how the dot product is defined, because
$$\langle v,u \rangle=||u||\,||v||\cos(\alpha)$$
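As a quick numerical sanity check (assuming NumPy), the coordinate formula and the magnitude-and-angle formula agree; here the angle is obtained independently via `arctan2`:

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

coordinate_form = u @ v            # x1*y1 + x2*y2 = 1
alpha = np.arctan2(v[1], v[0]) - np.arctan2(u[1], u[0])  # angle between u and v
geometric_form = np.linalg.norm(u) * np.linalg.norm(v) * np.cos(alpha)

print(coordinate_form, geometric_form)  # both 1.0, up to rounding
```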