I can't understand a definition and a theorem about vector spaces and linear (in)dependence of a set of vectors

Solution 1:

The definition of linear independence that you gave is the classical one, but it doesn’t help much with intuition, in my view.

I think it’s better to start by defining linear dependence: a set of vectors is linearly dependent if one of them can be written as a linear combination of the others. That should make sense, because it closely matches the meaning of “linearly dependent” in ordinary (non-mathematical) English. Then, of course, a set of vectors is said to be linearly independent if it is not linearly dependent.

This definition is equivalent to the classical one you cited, but it makes a lot more sense (to me, anyway).
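To see the equivalence, here is the standard one-line argument in each direction. If $c_1v_1 + c_2v_2 + \cdots + c_nv_n = 0$ with some coefficient $c_k \ne 0$, then dividing by $c_k$ and rearranging gives
$$v_k = -\sum_{i \ne k} \frac{c_i}{c_k}\, v_i,$$
so $v_k$ is a linear combination of the others. Conversely, if $v_k = \sum_{i \ne k} d_i v_i$, then $\sum_{i \ne k} d_i v_i - v_k = 0$ is a linear combination equal to zero whose coefficient on $v_k$ is $-1 \ne 0$.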

So, think of the three vectors $a = (1,0,0)$, $b=(0,1,0)$, and $c=(2,5,0)$ in $\mathbb R^3$. Obviously $c = 2a + 5b$. So these vectors are linearly dependent. It’s also true that $2a + 5b - c = 0$, so we have a (non-silly) linear combination that gives us zero, as required by the classical definition, but this is rather less intuitive, in my view.
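If you like to sanity-check such things numerically, here is a minimal sketch using NumPy (not part of the original question, just an illustration of the example above):

```python
import numpy as np

# The three vectors from the example above.
a = np.array([1, 0, 0])
b = np.array([0, 1, 0])
c = np.array([2, 5, 0])

# Stack them as the rows of a matrix. The set is linearly dependent
# exactly when the rank is smaller than the number of vectors.
M = np.vstack([a, b, c])
print(np.linalg.matrix_rank(M))   # 2 < 3, so a, b, c are dependent

# Check the specific relation c = 2a + 5b, i.e. 2a + 5b - c = 0.
print(2*a + 5*b - c)              # [0 0 0]
```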

In fact, in $\mathbb R^3$, three vectors are linearly dependent if and only if they are coplanar. This should make sense: if $c$ lies in the same plane as $a$ and $b$, then $c$ can be written as a linear combination of $a$ and $b$. My example above is just a rather trivial example of three coplanar vectors. I expect you can invent more interesting ones.
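A handy concrete test (standard linear algebra, not something stated in the question): three vectors in $\mathbb R^3$ are coplanar, hence linearly dependent, exactly when the determinant of the $3 \times 3$ matrix with those vectors as rows is zero. For the example above,
$$\det\begin{pmatrix}1&0&0\\0&1&0\\2&5&0\end{pmatrix}=0,$$
since the third column is entirely zero.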

If two nonzero vectors are orthogonal, then it’s pretty obvious that neither can be written as a multiple of the other, so they’re certainly linearly independent. (The “nonzero” matters: the zero vector is orthogonal to everything, yet any set containing it is dependent.) But, as you point out, this is a very special case, and it’s easy to find examples of pairs of vectors that are linearly independent without being orthogonal.
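For the record, the quick argument: if $u$ and $v$ are nonzero and orthogonal and $\alpha u + \beta v = 0$, then taking the dot product with $u$ gives
$$\alpha\,\|u\|^2 + \beta\,(u \cdot v) = \alpha\,\|u\|^2 = 0,$$
so $\alpha = 0$, and similarly $\beta = 0$. And for an independent pair that is not orthogonal, take $u = (1,0)$ and $v = (1,1)$ in $\mathbb R^2$: neither is a multiple of the other, yet $u \cdot v = 1 \ne 0$.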

Solution 2:

You ask in particular:

"If we have a set of linearly independent vectors then we can't get 0 vector after their linear combination, isn't it true?"

This is not true, but it's not entirely wrong either. For simplicity consider a set of two vectors $\{u,v\}$. We certainly can write the $0$-vector, which I'll call "$\overline{0}$" to distinguish it from the $0$-scalar, as a linear combination of $u$ and $v$ even if they are linearly independent: just take $$0u+0v.$$

And of course this works for any set of vectors whatsoever. A set of vectors is linearly independent iff there is no non-silly way of producing $\overline{0}$ as a linear combination.
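In symbols: $\{v_1, \dots, v_n\}$ is linearly independent iff
$$c_1v_1 + \cdots + c_nv_n = \overline{0} \ \implies\ c_1 = c_2 = \cdots = c_n = 0,$$
i.e. the only linear combination producing $\overline{0}$ is the silly one in which every coefficient is the $0$-scalar.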

Separately, regarding the end of your question: you are right that that theorem covers only a narrow case, but $(i)$ narrow cases are sometimes useful, and $(ii)$ the idea of orthogonality is important in its own right.