Algebraic field extensions: Why $k(\alpha)=k[\alpha]$.
If $K\supset k$ is a field extension and $\alpha \in K$ is algebraic over $k$, we denote by $k[\alpha]$ the set of elements of $K$ that can be obtained as polynomial expressions in $\alpha$:
$$k[\alpha] = \left\{P(\alpha): \quad P\in k[X] \right\} $$
We also denote by $k(\alpha)$ the smallest subfield of $K$ containing both $k$ and $\{\alpha\}$. This is easily seen to equal the set of fractions of polynomial expressions in $\alpha$ (just think about how the smallest such subfield must be generated):
$$k(\alpha) = \left\{\dfrac{P(\alpha)}{Q(\alpha)}: \quad P,Q\in k[X] \text{ and } Q(\alpha)\neq 0 \right\} $$
If $M$ is the minimal polynomial of $\alpha$ over $k$, it is easily shown that $k[\alpha]\simeq k[X]/\langle M\rangle$ is a field. Hence $k[\alpha]$ is a subfield of $k(\alpha)$ containing both $k$ and $\{\alpha\}$, and since $k(\alpha)$ is the smallest such subfield, $k(\alpha)=k[\alpha]$.
It follows that all quotients $\dfrac{P(\alpha)}{Q(\alpha)}$ are equal to $H(\alpha)$ for some polynomial $H\in k[X]$.
What I want to see is a more direct proof of this fact. Given $P$ and $Q$, how do you produce this $H$?
I'll write $a$ instead of $\alpha$. Let $f$ be its minimal polynomial.
Suppose $Q$ is a polynomial such that $Q(a)\neq0$. Then $f$ and $Q$ are coprime ($f$ is irreducible and does not divide $Q$), so there exist polynomials $u$ and $v$ such that $uf+vQ=1$; evaluating at $a$ gives $1=v(a)Q(a)$, so $1/Q(a)=v(a)$.
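To make this concrete, here is a sketch of mine (an illustration, not part of the answer) of the extended Euclidean computation over $k=\Bbb Q$ with $f = x^2-2$ (so $a=\sqrt2$) and $Q = x+1$: it produces $v$ with $vQ\equiv 1 \pmod f$, so $1/(\sqrt2+1) = v(\sqrt2) = \sqrt2-1$.

```python
from fractions import Fraction

# Polynomials are coefficient lists [c0, c1, ...] over Q (lowest degree first).

def trim(p):
    """Drop trailing zero coefficients (keep at least the constant term)."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def poly_sub(a, b):
    n = max(len(a), len(b))
    return trim([(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)
                 for i in range(n)])

def poly_mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return trim(out)

def poly_divmod(a, b):
    """Quotient and remainder of a / b (b nonzero)."""
    a, b = trim(list(a)), trim(list(b))
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and a != [0]:
        shift = len(a) - len(b)
        c = Fraction(a[-1]) / b[-1]
        q[shift] += c
        a = poly_sub(a, [0] * shift + [c * bi for bi in b])
    return trim(q), a

def inverse_at(Q, f):
    """Extended Euclid on (f, Q): returns v with v*Q == 1 (mod f),
    assuming gcd(f, Q) = 1, so that 1/Q(a) = v(a)."""
    r0, r1 = trim(list(f)), trim(list(Q))
    t0, t1 = [Fraction(0)], [Fraction(1)]
    while r1 != [0]:
        quot, rem = poly_divmod(r0, r1)
        r0, r1 = r1, rem
        t0, t1 = t1, poly_sub(t0, poly_mul(quot, t1))
    # r0 is now a nonzero constant c with u*f + t0*Q = c; rescale t0 by 1/c
    return [ti / r0[0] for ti in t0]

f = [Fraction(-2), Fraction(0), Fraction(1)]   # x^2 - 2, minimal polynomial of sqrt(2)
Q = [Fraction(1), Fraction(1)]                 # x + 1
v = inverse_at(Q, f)                           # x - 1, i.e. 1/(sqrt(2)+1) = sqrt(2)-1
```

Here `v` comes out as $x-1$, and indeed $(x-1)(x+1) = x^2-1 \equiv 1 \pmod{x^2-2}$.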
It is clear that $k[\alpha]\subseteq k(\alpha)$, so all we need to show is that $k[\alpha]$ is a field: indeed, $k(\alpha)$ is contained in every subfield of $K$ that contains $k$ and $\alpha$.
Let $\beta\in k[\alpha]$, $\beta\notin k$. Then $\beta$ is algebraic over $k$: the powers of $\beta$ cannot be linearly independent over $k$, since $k[\alpha]$ is finite-dimensional as a vector space over $k$ (polynomial division with remainder is all you need to show this).
Let $a_0+a_1\beta+\dots+a_n\beta^n=0$ with $a_0\ne0$ (such a relation exists: take any dependence relation and, since $\beta\ne0$, divide by the lowest power of $\beta$ appearing with nonzero coefficient). Then $n\ge1$ and so $$ \beta\frac{a_1+a_2\beta+\dots+a_n\beta^{n-1}}{-a_0}=1 $$ so $\beta^{-1}\in k[\alpha]$.
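For a concrete instance (an illustration of mine, not part of the answer): in $\Bbb Q[\sqrt2]$ take $\beta = 1+\sqrt2$. The powers satisfy $\beta^2-2\beta-1=0$, so with $a_0=-1,\ a_1=-2,\ a_2=1$ the displayed formula gives $\beta^{-1} = \beta-2 = \sqrt2-1$. A quick numerical sanity check:

```python
from math import sqrt, isclose

beta = 1 + sqrt(2)                     # beta = 1 + sqrt(2) in Q[sqrt(2)]
# dependence relation: beta^2 - 2*beta - 1 = 0, i.e. a0 = -1, a1 = -2, a2 = 1
assert isclose(beta**2 - 2*beta - 1, 0, abs_tol=1e-9)
# the formula beta * (a1 + a2*beta) / (-a0) = 1 gives the inverse:
inv = (-2 + beta) / 1                  # = beta - 2 = sqrt(2) - 1
assert isclose(beta * inv, 1)
```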
An alternative to Bezout is linear algebra: $\ k[\alpha] \,\cong\, k[x]/f\,$ is a finite-dimensional $\,k$-vector space. If $\ \color{#c00}{g\not\equiv 0}\ $ then, by the computation below, the linear map $\ \ell\,:\, h\,\mapsto\, gh\,$ is $\,1$-$1\,$ hence onto, so $\ gh = 1\,$ for some $\,h.$
$$ h\in \ker\,\ell \iff g h \equiv 0\!\!\!\pmod{\! f} \iff f\mid gh\ \!\!\!\overset{\large\ \ \ {\rm prime}\ \color{#c00}{f\,\nmid \,g}}\iff\ f\mid h \iff h\equiv 0\!\!\!\pmod{\! f} $$
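To make the "invert the linear map" idea concrete, here is a sketch of mine (not from the answer) over $k=\Bbb Q$ with exact rational arithmetic: build the matrix of $\ell\,:\,h\mapsto gh$ on the basis $1, x, \dots, x^{n-1}$ of $k[x]/f$ and solve $\ell(h)=1$ by Gaussian elimination. The example inverts $g = x$ modulo $f = x^3-2$, i.e. computes $1/\sqrt[3]{2} = \sqrt[3]{4}/2$.

```python
from fractions import Fraction

def mul_mod(g, h, f):
    """g*h reduced mod f (f monic); polynomials are lists [c0, c1, ...]."""
    n = len(f) - 1
    prod = [Fraction(0)] * (len(g) + len(h) - 1)
    for i, gi in enumerate(g):
        for j, hj in enumerate(h):
            prod[i + j] += gi * hj
    # reduce high terms using x^n = -(f0 + f1 x + ... + f_(n-1) x^(n-1))
    for d in range(len(prod) - 1, n - 1, -1):
        c, prod[d] = prod[d], Fraction(0)
        for i in range(n):
            prod[d - n + i] -= c * f[i]
    out = prod[:n]
    return out + [Fraction(0)] * (n - len(out))

def solve(M, b):
    """Solve M v = b by Gaussian elimination over the rationals (M invertible)."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[col])]
    return [A[i][n] for i in range(n)]

def inverse_mod(g, f):
    """Coordinates of 1/g in k[x]/(f): invert multiplication by g on 1, x, ..., x^(n-1)."""
    n = len(f) - 1
    cols = []
    for j in range(n):
        xj = [Fraction(0)] * n
        xj[j] = Fraction(1)
        cols.append(mul_mod(g, xj, f))      # column j = coordinates of g * x^j mod f
    M = [[cols[j][i] for j in range(n)] for i in range(n)]
    one = [Fraction(1)] + [Fraction(0)] * (n - 1)
    return solve(M, one)

f = [Fraction(-2), Fraction(0), Fraction(0), Fraction(1)]  # x^3 - 2, min poly of 2**(1/3)
g = [Fraction(0), Fraction(1)]                             # g = x, i.e. alpha itself
v = inverse_mod(g, f)                                      # x^2 / 2: 1/cbrt(2) = cbrt(4)/2
```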
Remark $ $ From this linear viewpoint, we can compute the inverse by simply inverting the matrix that represents the linear map. For example, let's consider $\,\Bbb C \cong \Bbb R[i]\,\cong \Bbb R[x]/(x^2+1).\,$
The matrix rep of $\rm\:\alpha = a+b\,{\it i}\:$ is simply the matrix of the $\:\Bbb R$-linear map $\rm\:x\mapsto \alpha\, x,\:$ viewing $\,\Bbb C\cong \Bbb R^2$ as a vector space over $\,\Bbb R.\,$ Computing its action on the basis $\,[1,\,{\it i}\,]^T$ for $\,\alpha\ne 0$:
$$\rm (a+b\,{\it i}\,) \left[ \begin{array}{c} 1 \\ {\it i} \end{array} \right] \,=\, \left[\begin{array}{r}\rm a+b\,{\it i}\\\rm -b+a\,{\it i} \end{array} \right] \,=\, \left[\begin{array}{rr}\rm a &\rm b\\\rm -b &\rm a \end{array} \right] \left[\begin{array}{c} 1 \\ {\it i} \end{array} \right]\qquad$$
Now inverting $\,\alpha\,$ amounts to simply inverting its matrix representation:
$$ \rm \dfrac{1}{\alpha}\ =\ (a + b\,{\it i }\,)^{-1}\ \leftrightarrow\ \left[\begin{array}{rr}\rm a &\rm b\\\rm -b &\rm a \end{array} \right]^{-1}\! =\ \dfrac{1}{a^2+b^2}\left[\begin{array}{rr}\rm a &\rm -b\\\rm b &\rm a \end{array} \right]\ \leftrightarrow\ \dfrac{a-b\,{\it i}}{a^2+b^2} \ =\ \dfrac{\bar \alpha}{\alpha\bar\alpha}\qquad $$
which may be viewed as a generalization of rationalizing the denominator.
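The remark can be sketched in a few lines of code (my own illustration, using exact rationals): invert $\alpha = a+b\,i$ by inverting its $2\times 2$ matrix representation, which recovers $\bar\alpha/(\alpha\bar\alpha)$.

```python
from fractions import Fraction

def inv_via_matrix(a, b):
    """Invert alpha = a + b*i by inverting its 2x2 matrix representation
    [[a, b], [-b, a]] wrt the basis [1, i]; returns (c, d) with alpha*(c + d*i) = 1."""
    det = a * a + b * b                # determinant = |alpha|^2, nonzero for alpha != 0
    # inverse matrix = adjugate/det = [[a, -b], [b, a]] / det, whose first row
    # encodes (a - b*i)/(a^2 + b^2) = conj(alpha)/|alpha|^2
    return (a / det, -b / det)

# 1/(3 + 4i) = (3 - 4i)/25
print(inv_via_matrix(Fraction(3), Fraction(4)))   # (Fraction(3, 25), Fraction(-4, 25))
```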