Importance and Intuition of Polynomial Rings
Solution 1:
Great question! Polynomial rings really do get glossed over, in my opinion, when they are actually quite complicated objects.
Super formally, we define the polynomial ring $R[x]$ as follows. Let $S$ be the set of all sequences $(r_0, r_1, r_2,\ldots)$ where the $r_i$ are elements of $R$ and only finitely many are nonzero. We explicitly define operations $+$ and $\cdot$ on this set by $$ (a_0, a_1, a_2,\ldots) + (b_0, b_1, b_2, \ldots) = (a_0 + b_0, a_1 + b_1, a_2 + b_2, \ldots) $$ and $$ (a_0, a_1, a_2, \ldots) \cdot (b_0, b_1, b_2, \ldots) = (a_0b_0, a_1b_0 + a_0b_1, a_2b_0 + a_1b_1 + a_0b_2, \ldots). $$ That is, if $a = (a_i)$ and $b = (b_i)$, then $(a+b)_i = a_i + b_i$ and $(ab)_i = \sum_{k=0}^i a_kb_{i-k}$.
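If you find code clarifying, here is a minimal Python sketch of exactly this construction (the names `poly_add` and `poly_mul` are mine, purely for illustration); a sequence with finitely many nonzero entries is stored as the list of its entries up to the last nonzero one, with entries past the end implicitly zero:

```python
def poly_add(a, b):
    """Componentwise sum: (a+b)_i = a_i + b_i."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))  # pad with zeros to a common length
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_mul(a, b):
    """Convolution product: (ab)_i = sum_{k=0}^{i} a_k * b_{i-k}."""
    result = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            result[i + j] += ai * bj
    return result

print(poly_add([1, 2], [3, 4, 5]))  # [4, 6, 5]
print(poly_mul([1, 1], [1, 1]))     # [1, 2, 1], i.e. (1 + x)^2 = 1 + 2x + x^2
```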
It is easy to check that $(S, +, \cdot)$ is a ring with zero $(0, 0, 0, \ldots)$ and identity $(1, 0, 0, \ldots)$.
Furthermore, as an $R$-algebra, $S$ is generated by the element $x = (0, 1, 0, 0, \ldots)$ (if you don't know what an $R$-algebra is, the point is that we can "get to" every element of $S$ using just the element $x$ and the elements of $R$). To see this, note that for each $n$, the element $x^n$ is the sequence $(0, \ldots, 0, 1, 0, \ldots)$ with a $1$ in the $n^\text{th}$ entry and zeros elsewhere. The element $(a_0, a_1, \ldots)$ of $S$ is then equal to $$ a_0 + a_1x + a_2x^2 + \ldots, $$ where each coefficient $a_i$ is identified with the constant sequence $(a_i, 0, 0, \ldots)$, and the sum is genuinely finite because only finitely many of the $a_i$ are nonzero. Thus we can "get to" $(a_0, a_1, \ldots)$ just by raising $x$ to various powers, multiplying by elements of $R$, and adding.
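For a concrete instance of the claim about $x^n$, apply the multiplication rule to $x = (0, 1, 0, \ldots)$: the sum $(x \cdot x)_i = \sum_{k=0}^i x_k x_{i-k}$ has a nonzero term only when $k = 1$ and $i - k = 1$, i.e. when $i = 2$, so $$ x^2 = (0, 1, 0, \ldots) \cdot (0, 1, 0, \ldots) = (0, 0, 1, 0, \ldots), $$ and induction gives the general statement about $x^n$.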
We denote this ring by $R[x]$. The intuition is that we are "adjoining" an "indeterminate" $x$ to the ring $R$, which means that we are adding some element that has no constraints on it. In some sense, the element $x$ is "free". In practice, this is how we always think about polynomials.
Polynomial rings are extremely important. Without them, much of modern mathematics would simply be impossible. The most ubiquitous example I can think of is in linear algebra, where the theory of polynomial rings lets us prove results like the Cayley-Hamilton theorem. Another example is the theory of probability generating functions. Polynomials are also vital to number theory and geometry. Every field of mathematics I can think of is built, at least implicitly, on the theory of polynomials.
Edit: I got a bit carried away with advanced topics in my examples. For a more basic one: we need the theory of polynomial rings to understand even elementary things like factorising quadratics. To prove that such a factorisation exists and is unique, you need to understand the ring $\mathbb{R}[x]$. Factorising such polynomials is very useful, as any high school student will tell you.
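To make that concrete: over $\mathbb R$ we have, for example, $$ x^2 - 5x + 6 = (x - 2)(x - 3), $$ and the fact that $\mathbb{R}[x]$ is a unique factorisation domain is exactly what guarantees that, up to the order of the factors, this is the only way to split the polynomial into monic linear factors.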
Solution 2:
The important thing to understand about polynomials is that they are templates for maps between rings. If you know a bit about programming, you'll recognise the concept: a template (or generic) is something that can take different types of objects as inputs, without your having to specify what those objects are beforehand. For a polynomial, this means that it is a template for maps whose inputs are objects for which the following operations are meaningful (see the code sketch after this list):
- multiplication of two or more of the objects
- multiplication of an object with an element of the underlying ring
- addition of two or more objects
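Here is a minimal Python sketch of the template idea (the function name `evaluate` and the `mul` parameter are my own devices, not standard API). Horner's method uses only the three operations above, so one and the same function evaluates a polynomial over the reals, the complexes, or matrices:

```python
import numpy as np

def evaluate(coeffs, x, one, mul):
    """Evaluate sum_i coeffs[i] * x^i by Horner's method.

    `one` is the multiplicative identity of the target ring and `mul` its
    multiplication; only ring operations are used, so the same "template"
    runs in any ring extension.
    """
    result = coeffs[-1] * one
    for c in reversed(coeffs[:-1]):
        result = mul(result, x) + c * one
    return result

g = [1, 0, 1]  # g = X^2 + 1, constant coefficient first

print(evaluate(g, 2.0, 1.0, lambda a, b: a * b))  # 5.0
print(evaluate(g, 1j, 1.0, lambda a, b: a * b))   # 0j  (i is a root)

# The very same template runs on 2x2 real matrices:
A = np.array([[0.0, -1.0], [1.0, 0.0]])               # A^2 = -E_2
print(evaluate(g, A, np.eye(2), lambda a, b: a @ b))  # the zero matrix
```

Note that the rotation matrix $A$ plays the role of $i$ here: it is a "root" of $X^2 + 1$ in the matrix ring, even though the polynomial has no real roots.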
For instance, take the polynomial $f=2X^2+4X-1\in\mathbb R[X]$. This polynomial defines a map $$\begin{align}f_{\mathbb R}:&\mathbb R\longrightarrow\mathbb R,\\&x\mapsto 2x^2+4x-1. \end{align}$$ But it also defines a map $$\begin{align} f_\mathbb C:&\mathbb C\longrightarrow\mathbb C,\\ &z\mapsto 2z^2+4z-1. \end{align}$$ Or, more exotically, it defines a map $$\begin{align} f_{\operatorname{Mat}_{n\times n}(\mathbb R)}:&\operatorname{Mat}_{n\times n}(\mathbb R)\longrightarrow \operatorname{Mat}_{n\times n}(\mathbb R),\\ &A\mapsto 2A^2+4A-E_n, \end{align}$$ where $E_n$ is the $n\times n$ identity matrix (the constant term $-1$ becomes $-E_n$). This last map is needed to formulate the Cayley-Hamilton theorem, for instance, which says that if $f$ is the characteristic polynomial of the matrix $A$, then $f_{\operatorname{Mat}_{n\times n}(\mathbb R)}(A)$ is the zero matrix.
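As a quick numerical sanity check of Cayley-Hamilton (a sketch with NumPy; the $2\times 2$ example matrix and the trace/determinant formula for its characteristic polynomial are my choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# For a 2x2 matrix, the characteristic polynomial is
# f(X) = X^2 - tr(A) X + det(A).
tr, det = np.trace(A), np.linalg.det(A)

# Evaluate f at A itself: A^2 - tr(A) A + det(A) E_2.
print(A @ A - tr * A + det * np.eye(2))  # [[0. 0.] [0. 0.]], as predicted
```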
More formally, this means that if $R$ is a commutative ring and $R\subseteq S$ is a commutative ring extension, then for every $s\in S$ there is a unique ring homomorphism $R[X]\longrightarrow S$ which maps $X\mapsto s$ and $r\mapsto r$ for all $r\in R$. This map evaluates each polynomial at $s$ (it "plugs in" $s$ for $X$). This property even characterizes the polynomial ring uniquely (up to isomorphism), and some authors take it as the definition of the polynomial ring; the construction via finite sequences in $R$ then serves as a proof that such a ring exists. (Commutativity of $S$ can be relaxed: it is enough that $s$ commutes with every element of $R$, which is what makes the matrix example work.)
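Concretely, writing $\operatorname{ev}_s$ for this evaluation map, $$ \operatorname{ev}_s\Bigl(\sum_i a_i X^i\Bigr) = \sum_i a_i s^i, $$ and checking that $\operatorname{ev}_s(f+g) = \operatorname{ev}_s(f) + \operatorname{ev}_s(g)$ and $\operatorname{ev}_s(fg) = \operatorname{ev}_s(f)\operatorname{ev}_s(g)$ is exactly the statement that "plugging in" respects the ring structure.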
In our examples from before, $\mathbb R\subseteq\mathbb R$ is a trivial ring extension, and $\mathbb R\subseteq\mathbb C$ is another familiar ring extension, which also gives us an evaluation map. And $\operatorname{Mat}_{n\times n}(\mathbb R)$ can be viewed as a ring extension of $\mathbb R$ by identifying $r\in\mathbb R$ with the scalar matrix $rE_n\in\operatorname{Mat}_{n\times n}(\mathbb R)$, which has $r$ in every diagonal entry and zeros elsewhere. Since scalar matrices commute with all matrices, this gives us a way to evaluate polynomials at matrices.
To summarize: polynomials are templates for maps on ring extensions of the underlying ring, but only those special maps that result from applying the basic operations to an element $x$: addition, multiplication by an element of $R$, and multiplication by itself.