When does a system of polynomial equations have infinitely many solutions?

This question turns out to be deeper than it seems, since it has a much richer theory than the linear case. In particular, it's one of the questions leading to elimination theory.

To illustrate why this problem is more complex than it seems, let me give an example of a system of polynomials that has either zero, finitely many, or infinitely many solutions depending on what kind of solutions you are looking for:

$$ \{ x^2-2, y^2 + z^2 \}$$

First, note that this system has no rational solutions, since there is no $x \in \mathbb{Q}$ such that $x^2 = 2$. Next, over the real numbers, the system has exactly two solutions, $x = \pm\sqrt{2}, y = z = 0$, since $y^2+z^2 > 0$ for any $y, z \in \mathbb{R}$ with $y \neq 0$ or $z \neq 0$. Finally, over the complex numbers, the system has infinitely many solutions, namely those where $x = \pm\sqrt{2}$ and $y = iz$ or $y = -iz$.
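
One can verify this quickly with a computer algebra system; here is a minimal check assuming SymPy, whose `solve` works over the complex numbers by default.

```python
# Solving the example system for x and y, with z left as a free parameter,
# exhibits the infinite families of complex solutions described above
# (x = ±sqrt(2), y = ±i·z, one solution tuple for every value of z).
from sympy import symbols, solve

x, y, z = symbols('x y z')
sols = solve([x**2 - 2, y**2 + z**2], [x, y])
print(sols)  # four solution families, each parametrized by z
```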

Over the complex numbers (or more generally, over an algebraically closed field), one can say the following: if we have $n$ variables and $k$ polynomials with $k < n$, then there are either no solutions or infinitely many solutions. This uses a more general notion of dimension, and a key ingredient in the argument is Krull's Principal Ideal Theorem. Over other fields, these questions turn out to be significantly more complex.

Algorithmically, one would use Buchberger's algorithm to transform the set of polynomials into a Gröbner basis (using a lexicographic term ordering), which generalizes row-echelon form. If we have $n$ variables $x_1, \ldots, x_n$ and, for each $i = 1, \ldots, n$, the Gröbner basis contains a polynomial whose leading term is a pure power of $x_i$, we know that the system can have only finitely many solutions. Conversely, if the basis is not $\{1\}$ but lacks such a polynomial for some $i$, and the field we're interested in is algebraically closed, we know that there must be infinitely many solutions.
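
A minimal sketch of this test, assuming SymPy (its `groebner` computes a reduced Gröbner basis; the helper `is_zero_dimensional` is my own illustration, not a library routine):

```python
# Finiteness test: the system has finitely many solutions over an
# algebraically closed field iff for every variable some basis element has
# a pure power of that variable as its leading monomial.
from sympy import symbols, groebner

x, y, z = symbols('x y z')
gens = (x, y, z)
G = groebner([x**2 - 2, y**2 + z**2], *gens, order='lex')

def is_zero_dimensional(G, gens):
    if list(G.exprs) == [1]:
        return True  # basis is {1}: the system is inconsistent, zero solutions
    lms = [p.monoms(order='lex')[0] for p in G.polys]  # leading exponent vectors
    # For each variable x_i, look for a leading monomial x_i^m with m > 0.
    return all(
        any(m[i] > 0 and all(e == 0 for j, e in enumerate(m) if j != i)
            for m in lms)
        for i in range(len(gens))
    )

print(list(G.exprs))                 # [x**2 - 2, y**2 + z**2]
print(is_zero_dimensional(G, gens))  # False: no pure power of z occurs, so
                                     # over C there are infinitely many solutions
```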

Edit: $x^2 = 2$ has two solutions, not one. Thanks @lisyarus!


Question: "However, how would you prove that a system of polynomial equations with fewer equations than variables has infinitely many solutions?"

Answer: There is a result in commutative algebra (see Matsumura's book "Commutative Ring Theory", Theorem 14.1, page 105) saying that if $(A,\mathfrak{m})$ is a noetherian local ring of dimension $r$ and $x_1,\ldots,x_r$ is a system of parameters, then

$$M1.\text{ }\dim(A/(x_1,\ldots,x_i))=r-i.$$

Intuitively, view each element $x_i$ as an "equation" in $r$ variables, and $\dim(A/(x_1,\ldots,x_i))$ as the dimension of the "space of solutions" of the system of equations

$$M2. \text{ }x_1=\cdots =x_i=0,$$

then M1 says that the system M2 has an "infinite set of solutions" iff $i<r$.
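
For a concrete instance of M1 (my own illustration): take $A:=k[x_1,x_2,x_3]_{(x_1,x_2,x_3)}$, the polynomial ring localized at the maximal ideal $(x_1,x_2,x_3)$, so $r=\dim(A)=3$ and $x_1,x_2,x_3$ is a system of parameters. Then

$$\dim(A/(x_1))=2, \quad \dim(A/(x_1,x_2))=1, \quad \dim(A/(x_1,x_2,x_3))=0,$$

so each equation $x_i=0$ cuts the dimension of the "space of solutions" by one, and the solution set is finite only when $i=r$.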

"Intuitively" if $I:=(f_1, \cdots , f_l)\subseteq B:=k[x_1,..,x_n]$ is an ideal of polynomials in the variables $x_i$ over a field $k$, we let $A:=k[x_1,..,x_n]/I$ be the coordinate ring of the algebraic variety/scheme defined by the set of polynomials $f_i$. The ring $A$ is a commutative unital ring reflecting properties of the "set of solutions" of the system of equations

$$M3.\text{ }f_1=\cdots =f_l=0.$$

We define $X:=Spec(A)$ as the "set of prime ideals" in the ring $A$ and give this set a topology (the Zariski topology). If the ideal $I$ is a prime ideal, it follows that $X$ is an irreducible topological space. The dimension of $X$ as an algebraic variety may be defined using the ring $A$: we define $\dim(X):=\dim(A)$, where $\dim(A)$ is the supremum of the lengths $r$ of strictly decreasing chains of prime ideals

$$ \mathfrak{p}_r \subsetneq \cdots \subsetneq \mathfrak{p}_1 \subsetneq \mathfrak{p}_0 \subseteq A.$$

Hence with this definition, your system of equations has a "finite set of solutions" iff $\dim(A)=0$, so you must calculate the dimension $\dim(A)$ in your case.
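
As a worked example (my own, using the system from the first answer above): let $B:=\mathbb{C}[x,y,z]$ and $I:=(x^2-2,\, y^2+z^2)$, so $A:=B/I$. Since $x^2-2=(x-\sqrt{2})(x+\sqrt{2})$ and $y^2+z^2=(y-iz)(y+iz)$, the prime ideals

$$ (x-\sqrt{2},\, y-iz,\, z) \supsetneq (x-\sqrt{2},\, y-iz) \supseteq I $$

give a strictly decreasing chain of length 1 in $A$, hence $\dim(A)\geq 1$ and the system has an infinite set of complex solutions. (In fact $\dim(A)=1$, since $x^2-2,\, y^2+z^2$ is a regular sequence in the $3$-dimensional ring $B$.)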

Given any maximal ideal $\mathfrak{m}\subseteq A$, there is a maximal ideal $\mathfrak{n}\subseteq B$ mapping to $\mathfrak{m}$, and you may consider the localization $R:=B_{\mathfrak{n}}$, which is a local ring of dimension $n$. The images $g_i$ of the generators $f_i$ generate an ideal $J \subseteq R$, and if the ideal $J$ can be generated by a regular sequence $J=(a_1,\ldots,a_i)$ with $i<n$, it follows that $\dim(R/J)=n-i>0$, hence $\dim(A)>0$ and the set of solutions to the system is not finite. I believe there are algorithms implemented on computers that solve this problem, but I have no precise reference. In general it is difficult to check whether an explicit ideal is a prime ideal, and it is also difficult to construct a regular sequence for an explicit ideal $J$.

Another more elementary approach is the Noether normalization lemma: it says that there is a set of elements $t_1,\ldots,t_d \in A$ with the property that the subring

$$S:=k[t_1,\ldots,t_d] \subseteq A$$

is a polynomial ring, $\dim(A)=d$, and $S \subseteq A$ is an integral extension.

Hence $\dim(A)=0$ iff $k \subseteq A$ is an integral ring extension. In other words, $\dim(A)=0$ iff there is a finite set of elements $a_1,\ldots,a_k\in A$ generating $A$ as a $k$-algebra, where each $a_i$ satisfies a monic polynomial with coefficients in $k$. So your problem is reduced to constructing a set of integral elements $a_1,\ldots,a_k$ generating $A$.
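
To illustrate Noether normalization in a simple case (my own example): let $A:=k[x,y]/(xy-1)$. Take $d=1$ and $t_1:=x+y$, so $S:=k[x+y]\subseteq A$. The extension is integral since $x$ satisfies the monic equation

$$ T^2-(x+y)T+1=0 $$

over $S$ (using $xy=1$ in $A$), and similarly for $y$. Hence $\dim(A)=1$, and the single equation $xy-1=0$ has an infinite set of solutions over any infinite field, namely $(t,1/t)$ for $t\neq 0$.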

Lemma. The system M3 has a finite set of solutions iff $k \subseteq A$ is an integral ring extension iff $\dim_k(A)< \infty$.

Proof. The first equivalence is proved above. The last equivalence follows from the fact that $A$ is finitely generated as a $k$-algebra: hence $k \subseteq A$ is an integral ring extension iff $\dim_k(A)< \infty$. QED
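
A computational sketch of the lemma, assuming SymPy (the helpers `divides` and `vector_space_dimension` are my own illustration, not library routines): the monomials not divisible by any leading monomial of a Gröbner basis form a $k$-vector space basis of $A$, so counting them computes $\dim_k(A)$ when it is finite.

```python
# dim_k(A) for A = k[x_1,...,x_n]/I is the number of "standard monomials",
# i.e. monomials divisible by no leading monomial of a Groebner basis of I;
# it is finite iff the solution set is finite.
from itertools import product
from sympy import symbols, groebner

def divides(m, e):
    """Does the monomial with exponent vector m divide the one with e?"""
    return all(mi <= ei for mi, ei in zip(m, e))

def vector_space_dimension(F, gens):
    """dim_k(k[gens]/(F)) if finite, else None."""
    G = groebner(F, *gens, order='lex')
    lms = [p.monoms(order='lex')[0] for p in G.polys]  # leading exponent vectors
    bounds = []
    for i in range(len(gens)):
        # Finiteness criterion: some pure power x_i^d must be a leading monomial.
        pure = [m[i] for m in lms if all(e == 0 for j, e in enumerate(m) if j != i)]
        if not pure:
            return None  # not zero-dimensional: dim_k(A) is infinite
        bounds.append(min(pure))
    # All standard monomials lie below the pure-power bounds; count them.
    return sum(1 for e in product(*[range(b) for b in bounds])
               if not any(divides(m, e) for m in lms))

x, y, z = symbols('x y z')
print(vector_space_dimension([x**2 - 2, y**2 + z**2, z**3 - 1], (x, y, z)))  # 12
print(vector_space_dimension([x**2 - 2, y**2 + z**2], (x, y, z)))            # None
```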

Note 1. If you try to calculate $\dim_k(A)$ "by hand" in some examples, you will find that it is a difficult problem to construct a basis for $A$ as a $k$-vector space. I believe there are computer programs solving this problem. To prove that $k \subseteq A$ is not an integral extension, you must find an element $t\in A$ that generates a polynomial subring $k[t]\subseteq A$.

Note 2. If the base field $k$ is not algebraically closed, this method detects solutions in arbitrary finite extensions $k \subseteq K$. This is the "Hilbert Nullstellensatz". For a maximal ideal $\mathfrak{m}\subseteq A$, the extension $k \subseteq A/\mathfrak{m}$ is a finite field extension, and a maximal ideal $\mathfrak{m}\subseteq A$ corresponds to a solution of the system M3 in the field $\kappa(\mathfrak{m}):=A/\mathfrak{m}$.

Example 1. Let $k$ be the field of real numbers, let $f:=x^2+y^2+1\in k[x,y]$, and consider the "system"

$$S1.\text{ } f(x,y)=x^2+y^2+1=0.$$

The system S1 has no real solutions, but it has many (a set of dimension 1) complex solutions. Given an arbitrary real number $\theta$, let $x:=i\sin(\theta), y:=i\cos(\theta)$; it follows that

$$x^2+y^2=(i\sin(\theta))^2+(i\cos(\theta))^2=-\sin^2(\theta)-\cos^2(\theta)=-1,$$

hence $(i\sin(\theta), i\cos(\theta))\in \mathbb{C}^2$ is a solution to S1 for any $\theta$. The field extension $\mathbb{R} \subseteq \mathbb{C}$ is finite and the dimension of the ring $A:=k[x,y]/(f)$ is 1, hence the system S1 has an infinite set of solutions (in the complex number field). For every $(i\sin(\theta), i\cos(\theta))$ you get a well defined surjective map

$$ \phi_{\theta}: A \rightarrow \mathbb{C}$$

defined by $\phi_{\theta}(x):=i\sin(\theta), \phi_{\theta}(y):=i\cos(\theta)$, and the kernel $\ker(\phi_{\theta})\subseteq A$ is a maximal ideal. Hence solutions to S1 correspond to maximal ideals in $A$.

Note 3. If you work with the system M3 on a computer, you should know that you are studying solutions to M3 in a finite field of large characteristic. This is because a computer has a finite memory. If your system M3 is defined by polynomials with rational coefficients $\frac{a_i}{b_i}$, there is a large prime $p$ not dividing any of the $b_i$'s. The computer converts the system M3 to a system over $\mathbb{F}_p:=\mathbb{Z}/p\mathbb{Z}$ (or $\mathbb{F}_{p^r}$). Hence if you ask the question

$$\text{"Does M3 have a finite set of solutions?"}$$

to a computer, the computer will answer:

$$\text{It always has a finite set of solutions since we are working over a finite field $\mathbb{F}_p$}.$$

If $k$ is a finite field and you seek solutions to M3 in $k^n$, the set of solutions $S$ to M3 is a subset of the finite set $k^n$, hence $S$ is trivially a finite set. In a sense:

$$\text{A computer cannot "understand infinity".}$$
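
To illustrate the point (my own sketch, with the arbitrary choice $p=7$ and the example system from the first answer): over a finite field one can simply enumerate all candidate points, so finiteness of the solution set is automatic.

```python
# Brute-force count of the solutions of {x^2 - 2 = 0, y^2 + z^2 = 0} over F_7:
# the search space F_7^3 has only 343 points, so the solution set is
# trivially finite, exactly as claimed above.
p = 7
solutions = [(a, b, c)
             for a in range(p) for b in range(p) for c in range(p)
             if (a * a - 2) % p == 0 and (b * b + c * c) % p == 0]
print(solutions)  # [(3, 0, 0), (4, 0, 0)]: 2 is a square mod 7, -1 is not
```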

Example. If the memory $M$ of your computer $HAL$ has $2^n$ elements for some integer $n \geq 1$, it cannot "work" with the ring of integers $\mathbb{Z}$, which has a countably infinite set of elements: there is no way to "embed" $\mathbb{Z}$ as a subset of $M$, since $M$ is a finite set. There is a number $m\geq 0$ with the property that an integer $a\in \mathbb{Z}$ with more than $m$ digits cannot be stored on $HAL$. Hence if $p,q$ are two prime numbers with more than $m$ digits, it follows that $p,q$ and the product $pq$ cannot be calculated by $HAL$. Hence $HAL$ cannot work with very large primes. There is an infinite set of prime numbers, and only a finite set of prime numbers has fewer than $m$ digits. Hence there is an infinite set of prime numbers that cannot be calculated by $HAL$.

Hence by "understand" I mean the following: You may always construct two prime numbers $p,q$ and ask your computer $HAL$ to calculate $pq$, and your computer will not be able to do this because it does not have enough memory.

Question: "How would you characterize when a subset of equations "coincide" (you can combine several to get another, like linear dependence in linear systems)?"

Answer: If $I:=(f_1,\ldots,f_l)$ is an ideal generated by polynomials $f_i$ and $J:=(f_{i_1},\ldots,f_{i_n})$ is generated by a subset of these, then (this is a definition) $I$ and $J$ "coincide" iff they generate the same ideal in $k[x_1,\ldots,x_n]$.
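
A sketch of how one could check this with SymPy (the sets `F1`, `F2` are my own toy example; `GroebnerBasis.contains` tests ideal membership): two finite sets of polynomials generate the same ideal iff each generator of one reduces to zero modulo a Gröbner basis of the other, or equivalently iff their reduced Gröbner bases agree for a fixed term order.

```python
# Checking that two generating sets "coincide", i.e. generate the same ideal.
from sympy import symbols, groebner

x, y = symbols('x y')
F1 = [x**2 - y, y - 1]
F2 = [x**2 - 1, y - 1]  # x**2 - 1 = (x**2 - y) + (y - 1), so the ideals agree

G1 = groebner(F1, x, y, order='lex')
G2 = groebner(F2, x, y, order='lex')

# Mutual ideal membership of the generators proves equality of the ideals.
same_ideal = all(G1.contains(f) for f in F2) and all(G2.contains(f) for f in F1)
print(same_ideal)  # True
```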