Polynomials as vector spaces?
Can someone please explain how polynomials are vector spaces? I don't understand this at all. Vectors are straight, so how could a polynomial be a vector? Is it just that the coefficients form a vector space? I'm really not seeing how polynomials tie in to linear algebra at all. I can do all the problems with them, because they are done the same way as any other vector problem, but I can't understand intuitively what I'm doing at all.
Solution 1:
A (real) vector space is, by definition, any set $V$ together with operations ${+_V}: V\times V\to V$ and ${\times_V}: \mathbb R\times V \to V$, such that
- $+_V$ is associative and commutative, has a neutral element, and every element of $V$ has an inverse.
- $\times_V$ is compatible with real multiplication: $a\times_V(b\times_V v)=(ab)\times_V v$ for all $a,b\in \mathbb R$ and $v\in V$, and $1\times_V v=v$ for all $v\in V$.
- $\times_V$ distributes over $+_V$ in the sense that $a\times_V(v+_Vw)=(a\times_V v)+_V(a\times_V w)$ and $(a+b)\times_V v=(a\times_V v)+_V(b\times_V v)$ for all $a,b\in \mathbb R$ and $v,w\in V$.
It happens that if you take the set of all polynomials together with addition of polynomials and multiplication of a polynomial with a number, the resulting structure satisfies these conditions. Therefore it is a vector space -- that is all there is to it.
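This can be made concrete in code. Below is a minimal sketch (my own, not from any library) that stores a polynomial as its list of coefficients, constant term first, and spot-checks two of the axioms on particular polynomials; the function names `poly_add` and `poly_scale` are made up for illustration:

```python
def poly_add(p, q):
    """Add two polynomials coefficient-wise, padding the shorter one with zeros."""
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(s, p):
    """Multiply a polynomial by the real scalar s."""
    return [s * c for c in p]

# p(x) = 1 + 2x, q(x) = 3x^2
p = [1.0, 2.0]
q = [0.0, 0.0, 3.0]

# Commutativity of addition: p + q == q + p
assert poly_add(p, q) == poly_add(q, p)

# Distributivity: s*(p + q) == s*p + s*q
s = 2.0
assert poly_scale(s, poly_add(p, q)) == poly_add(poly_scale(s, p), poly_scale(s, q))
```

Of course, passing on a few examples is not a proof; the point is only that the familiar operations on polynomials are exactly the kind of operations the axioms talk about.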
It can be useful intuitively to visualize vectors as little arrows (or however you're used to thinking about geometric vectors), but this is just intuition which may or may not be helpful -- it is deliberately not part of the linear-algebra concept of a vector space.
Solution 2:
When we imagine vectors, we all see the arrows from our undergraduate physics lessons, and that's fine. But what is a vector space? It is any set whose elements you can add together and scale by proportionality factors. And you can definitely add polynomials and multiply them by real numbers.
I think you just need to accept that the graphical representation you like is only a way to remember the properties that define a vector space: you can add its elements together, and you can "extend" them by multiplication with a real number (or a complex number, of course).
I've always thought that "vector space" is too imposing a name for something this simple: a set whose elements can be added together and "extended".
Solution 3:
A vector space has two kinds of things: vectors and scalars. It must be possible to add two vectors, and it must be possible to multiply a vector by a scalar. There are also some laws that the addition and the multiplication must obey, such as $s\cdot(\mathbf{a} + \mathbf{b}) = s\cdot\mathbf{a} + s\cdot\mathbf{b}$.
One example of this is to take the vectors to be $4$-tuples of real numbers, say $\langle a,b,c,d\rangle$, and the scalars to be single real numbers. We can add two vectors (by adding them component-wise) and we can multiply a vector by a scalar, multiplying each of the components of the vector by the scalar.
Another example of this is to take the vectors to be $2\times 2$ matrices of real numbers $$\begin{pmatrix}a&b\\c&d\end{pmatrix}$$ and the scalars to be single real numbers. We can add two of these vectors using ordinary matrix addition, and we can multiply a matrix by a scalar, multiplying each of its entries by the single number.
Of course this example is exactly the same as the one in the previous paragraph. The vectors behave exactly the same way whether we write them as $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ or as $\langle a,b,c,d\rangle$. It doesn't matter whether the brackets are curvy or angled, or whether we write the numbers piled up or not.
Here is another example: take the vectors to be polynomials of degree at most three, say $ax^3 +bx^2+cx+d$ (we need "at most" so that sums and scalar multiples stay in the set), and the scalars to be real numbers. We can add two of these vectors using ordinary polynomial addition, and we can multiply a vector by a scalar by multiplying the coefficients of the polynomial by the single number.
This example is just like the previous two. We could abbreviate $ax^3+bx^2+cx+d$ as $\langle a,b,c,d\rangle$ (since the parts involving $x$ are always the same, we could just agree not to write them down) or as $\begin{pmatrix}a&b\\c&d\end{pmatrix}$. It doesn't matter if we write the four numbers in a line, or in a pile, or with $x^3$es and plus signs in between; the vectors are still behaving the same way.
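The correspondence can be spelled out in a short sketch (my own hypothetical encoding, not a standard library): a cubic $ax^3+bx^2+cx+d$ is stored as the 4-tuple $(a,b,c,d)$, and then adding tuples component-wise is literally the same operation as adding the polynomials pointwise:

```python
def add_tuples(u, v):
    """Component-wise addition of coefficient tuples."""
    return tuple(a + b for a, b in zip(u, v))

def scale_tuple(s, u):
    """Multiply every component by the scalar s."""
    return tuple(s * a for a in u)

def eval_cubic(t, x):
    """Evaluate the cubic encoded by t = (a, b, c, d) at the point x."""
    a, b, c, d = t
    return a * x**3 + b * x**2 + c * x + d

# p(x) = x^3 + 2x + 1  ->  (1, 0, 2, 1)
# q(x) = 3x^2 - x      ->  (0, 3, -1, 0)
p = (1, 0, 2, 1)
q = (0, 3, -1, 0)

# p + q = x^3 + 3x^2 + x + 1  ->  (1, 3, 1, 1)
assert add_tuples(p, q) == (1, 3, 1, 1)

# Adding the tuples and adding the polynomials give the same function:
for x in (-1.0, 0.0, 2.5):
    assert eval_cubic(add_tuples(p, q), x) == eval_cubic(p, x) + eval_cubic(q, x)
```

The last loop is the isomorphism in action: whether you add the symbols $p+q$ or add the 4-tuples, you get the same polynomial.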
So why bother to do it? This is one of the great strengths of mathematics, to see when two different-seeming kinds of objects are actually the same, and to develop an abstract theory that applies in many different situations. Once we develop a theory of vector spaces, we can apply that theory to all sorts of things that behave like vectors—such as polynomials—even if we don't normally think of them as vectors. In short, by understanding polynomials as vectors, we can use the theory of vector spaces to help us solve problems that are about polynomials.
For example, in Constructing a degree 4 rational polynomial satisfying $f(\sqrt{2}+\sqrt{3}) = 0$ I used the theory of vector spaces to show that the required polynomial actually exists, and my method for calculating that polynomial draws heavily on the theory of vector spaces; it boils down to the problem of finding the coordinates of a certain vector in a different basis.
Solution 4:
The ring of polynomials with coefficients in a field is a vector space with basis $1, x, x^2, x^3, \ldots$: every polynomial is a finite linear combination of the powers of $x$, and if a linear combination of powers of $x$ is the zero polynomial, then all of its coefficients are zero (assuming $x$ is an indeterminate, not a number).
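As a small illustrative check (not a proof), the hypothetical snippet below evaluates one nontrivial linear combination of the basis elements $1, x, x^2$ and confirms it is not the zero polynomial, i.e. it takes a nonzero value somewhere:

```python
def combo(coeffs, x):
    """Evaluate c0*1 + c1*x + ... + cn*x^n at the point x."""
    return sum(c * x**k for k, c in enumerate(coeffs))

# 1 - x + x^2 is a nontrivial combination of the basis vectors 1, x, x^2 ...
coeffs = [1, -1, 1]

# ... and it is not the zero polynomial: it is nonzero at some sample point.
assert any(combo(coeffs, x) != 0 for x in range(4))
```

The full independence claim is stronger: a nonzero polynomial of degree $n$ has at most $n$ roots, so no nontrivial combination can vanish everywhere.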