Quotient field operations are well-defined: fleshing out Vinberg's sketch

Let $A$ be a non-trivial integral domain. Define the relation $\sim$ on the set of pairs $A \times (A\setminus\{0_A\})$ as follows:

$$(a_1,b_1) \sim (a_2,b_2) \overset{\text{def}}{\Longleftrightarrow} a_1b_2=a_2b_1.$$

It turns out that $\sim$ is an equivalence relation on $A \times (A\setminus\{0_A\})$. Addition and multiplication of pairs are defined as follows.

$$\begin{align}(a_1,b_1)+(a_2,b_2) &\overset{\text{def}}{=} (a_1b_2+a_2b_1,\,b_1b_2),\\ (a_1,b_1)\cdot(a_2,b_2)&\overset{\text{def}}{=}(a_1a_2,\,b_1b_2).\end{align}$$

If one wishes to define these operations analogously on the set of equivalence classes modulo $\sim$, that is, on the set $(A \times (A\setminus\{0_A\}))/\!\sim$, one must prove that the operations agree with the relation $\sim$. In other words, it must be shown that these procedures give well-defined functions, not depending on the choice of representative from an equivalence class.
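
Before the proof, a quick sanity check over $A=\Bbb Z$ may help to see what could go wrong. The following is a minimal Python sketch of my own (the helper names `equiv`, `add`, `mul` are illustrative, not from the book): the equivalent representatives $(1,2)\sim(2,4)$ produce different sums with $(1,3)$, yet the sums land in the same class.

```python
def equiv(p, q):
    """Cross-multiplication relation: (a1,b1) ~ (a2,b2) iff a1*b2 == a2*b1."""
    return p[0]*q[1] == q[0]*p[1]

def add(p, q):
    """(a1,b1) + (a2,b2) = (a1*b2 + a2*b1, b1*b2)."""
    return (p[0]*q[1] + q[0]*p[1], p[1]*q[1])

def mul(p, q):
    """(a1,b1) * (a2,b2) = (a1*a2, b1*b2)."""
    return (p[0]*q[0], p[1]*q[1])

# (1,2) and (2,4) represent the same class ("one half"):
assert equiv((1, 2), (2, 4))

# Adding (1,3) to each representative gives different pairs ...
s1, s2 = add((1, 2), (1, 3)), add((2, 4), (1, 3))
print(s1, s2)          # (5, 6) (10, 12)

# ... but the results are again ~-equivalent, as well-definedness requires:
assert equiv(s1, s2)
assert equiv(mul((1, 2), (1, 3)), mul((2, 4), (1, 3)))
```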

Here is how I would prove the result in the case of addition.

Let $(a,b)\sim(a_1,b_1)$ and $(c,d) \sim (c_1,d_1)$ be any pairs in $A \times (A\setminus\{0_A\})$. We need to show that $(a,b)+(c,d)$ is $\sim$-equivalent to $(a_1,b_1)+(c_1,d_1)$, that is, that $(ad+bc)b_1d_1 = (a_1d_1+b_1c_1)bd.$

To this end, consider the expression $E:=(ad+bc) b_1d_1$. Using distributivity in $A$, we have $E=(ad)b_1d_1+(bc)b_1d_1$. Using commutativity (and associativity) of multiplication, $E=(ab_1)dd_1+(cd_1)bb_1$. Since $(a,b)\sim(a_1,b_1)$ and $(c,d) \sim (c_1,d_1)$, we may substitute $a_1b$ for $ab_1$ and $c_1d$ for $cd_1$, so $E=(a_1b)dd_1+(c_1d)bb_1$. Applying distributivity (and commutativity, associativity) once more, we conclude $E=(a_1d_1+b_1c_1)bd$. QED
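
Mechanically, the whole computation amounts to a single polynomial identity, $(ad+bc)b_1d_1-(a_1d_1+b_1c_1)bd = dd_1(ab_1-a_1b)+bb_1(cd_1-c_1d)$, whose right side vanishes under the two hypotheses. Here is a small sympy sketch of my own confirming the identity (a mechanical double-check, not part of the proof):

```python
from sympy import symbols, expand

a, b, c, d, a1, b1, c1, d1 = symbols('a b c d a1 b1 c1 d1')

# The quantity the proof shows to vanish:
E = (a*d + b*c)*b1*d1 - (a1*d1 + b1*c1)*b*d

# The substitutions a*b1 = a1*b and c*d1 = c1*d amount to this rewriting:
combo = d*d1*(a*b1 - a1*b) + b*b1*(c*d1 - c1*d)

# The identity holds in any commutative ring:
assert expand(E - combo) == 0
```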


Here is how E. B. Vinberg does it in A Course of Algebra, page 130.

Define now addition and multiplication of pairs by the following rules: $$\begin{align}(a_1,b_1)+(a_2,b_2) &= (a_1b_2+a_2b_1,\,b_1b_2),\\ (a_1,b_1)(a_2,b_2)&=(a_1a_2,\,b_1b_2).\end{align}$$ We will prove that the equivalence relation defined above agrees with these operations. By the preceding discussion, it suffices to show that when we multiply both entries in one of the pairs $(a_1,b_1)$ or $(a_2,b_2)$ by the same element $c$, their sum and product get replaced by equivalent pairs. But it is clear that when we do this, both entries in the sum and the product are multiplied by $c$.

(Emphasis added by me).

Q: Why does it suffice to show only what Vinberg says?

To clarify: "the preceding discussion" is quoted either in my previous question (in yellow quote boxes) or here in this post, and the order of the book is preserved. I thought it would be a poor idea to quote the full passage again here, due to its length. Of course, I am willing to do so if necessary; in that case, please leave a comment.


Solution 1:

Recall that the scaling relation $\,\sim:\,$ is defined as $\, (a,b) \sim: (c,d)\iff (c,d) = (ea,eb)\,$ for some $\,e\neq 0,\,$ i.e. $\,\large \frac{a}b \sim: \frac{e\,a}{e\,b}.\,$ They have equal cross-multiples $\,eab\,$ so $\,f\sim:g\,\Rightarrow\, f\sim g.$

The Lemma in the prior question shows that every cross-multiplication equivalence $\,f_1\sim f_2\,$ can be decomposed into a pair of scaling relations, i.e. $\,f_1\sim f_2\iff f_1\sim:f:\sim f_2\ $ for some $\,f,\,$ i.e. $\,f_1,\,f_2\,$ are cross-multiplication equivalent $\iff$ they have a common scaling $\,f.\,$

Thus it suffices to prove that addition and multiplication are compatible with the scaling relation, which follows from scaling symmetry of the addition & multiplication formulas due to their linear form, i.e. $\, s(f_1)\sim: \color{#c00}e\,s(f_1) = s(\color{#c00}ef_1) = s(f)\,$ below, where we prove compatibility for the first argument of addition using the sum function$\ s(x) := x + g_1,\, $ for $\,g_1 = (c,d).$

$\ \ \ \ \ \ \ \begin{align}f_1 + g_1\ \ \ \ \ &\sim: \ \ \ \ \ f + g_1 \\[.2em] f_1 \ \ \ \sim:\ \ \ \ f \ \ \ \ \, \smash[t]{\color{#0a0}{\overset{\rm C}\Longrightarrow}}\, \ \ \ \ \ \ \ \ s(f_1)\ \ \ \ \ \ \ & \sim:\ \ \ \ \ \ \ s(f)\\[.2em] \ {\rm i.e.}\ \ \ \ (a,b)\sim:(ea,eb)\,\Rightarrow\, (a,b)+(c,d)&\sim: (\color{#c00}ea,\color{#c00}eb)+(c,d)\ \ = \ s(\color{#c00}ef_1) \\[.2em] {\rm by}\ \ \ \ (ad\!+\!cb,\,bd) &\sim: (\color{#c00}ead\!+\!\color{#c00}ecb,\,\color{#c00}ebd)\ \ = \ \color{#c00}e\,s(f_1) \end{align}\ \ \ \ \ \qquad$
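
In other words, the displayed step is just the componentwise identity $(ea,eb)+(c,d)=e\cdot\big((a,b)+(c,d)\big)$. A short sympy sketch (my own illustration; `add` is the pair-addition formula) verifies this symbolically:

```python
from sympy import symbols, expand

a, b, c, d, e = symbols('a b c d e')

def add(p, q):
    """Pair addition: (x1,y1) + (x2,y2) = (x1*y2 + x2*y1, y1*y2)."""
    (x1, y1), (x2, y2) = p, q
    return (x1*y2 + x2*y1, y1*y2)

f1, g1 = (a, b), (c, d)
f = (e*a, e*b)                          # a scaling of f1, i.e. f1 ~: f

lhs = add(f, g1)                        # s(e*f1)
rhs = tuple(e*t for t in add(f1, g1))   # e * s(f1)

# Both entries of the sum are multiplied by e, exactly as Vinberg observes:
assert all(expand(u - v) == 0 for u, v in zip(lhs, rhs))
```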

${\rm Then}\ \ f_1\sim f_2\,\Rightarrow\, s(f_1)\sim s(f_2)\,$ follows by applying $\,\smash[t]{\color{#0a0}{\overset{\rm C}\Rightarrow}}\,$ to a $\,\sim:\,$ decomposition of $\, f_1 \sim f_2\,$

$\ \ \ \ \ \ \ \ \ \, f_1\sim f_2\,\Rightarrow\begin{align}f_1\sim: f\\[.2em] f_2\sim: f\end{align}$ $\:\color{#0a0}{\overset{\rm C}\Rightarrow}\,\begin{align}s(f_1)\sim: s(f)\\[.2em] s(f_2)\sim: s(f)\end{align}$ $\,\Rightarrow\begin{align}s(f_1)\sim s(f)\\[.2em] s(f_2)\sim s(f)\end{align}$ $\,\color{#08f}\Rightarrow\! \begin{align} s(f_1)\,&\sim\, s(f_2),\,\ {\rm i.e.}\\[.2em] f_1+g_1&\sim \color{#08f}{f_2+g_1}\end{align}$

Similarly (or using symmetry and commutativity) we get $\ g_1\sim g_2\,\Rightarrow\, \color{#08f}{f_2+g_1}\sim f_2+ g_2\,$ thus

$\rm\color{#08f}{transitivity}$ of $\,\sim\,$ yields $\,\ \ f_1\sim f_2,\ g_1\sim g_2\,\Rightarrow\, f_1+g_1\sim f_2+g_2\qquad $

which means $\,\sim\,$ is compatible with addition. Multiplication compatibility follows similarly.

Remark $ $ These tedious proofs are usually "left to the reader" in expositions. One can avoid them by instead using a more algebraic construction of fraction rings via quotients of polynomial rings, where we adjoin an inverse $\,x_a\,$ for each $\,a\neq 0\,$ via a chain of extension rings $\,A_{j+1} = A_j[x_a]/(ax_a-1)\,$ (starting from $A_0 = A$).

In this approach the proofs follow immediately from the universal properties of polynomial and quotient rings. The two approaches are related by the fact that the fraction pairs correspond to normal forms in these quotient rings: every element is equivalent to a monomial $\,a\, x_{a_1}\cdots x_{a_k}\,$ (essentially by choosing a common "denominator"), denoted by the "fraction" $\,a/(a_1\cdots a_k)\,$ or, set-theoretically, by the pair $\,(a,\,a_1\cdots a_k)$. This is analogous to Hamilton's pair-representation of complex numbers $\,(a,b),\,$ corresponding to normal forms (least-degree representatives) $\,a+bx\,$ in $\,\Bbb R[x]/(x^2\!+1)\cong \Bbb C$. For more on this viewpoint see here, where we consider a more general construction (localization) that inverts the elements of a specified subset $\,S\subseteq A$.
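
To make the Hamilton analogy concrete, here is a small sympy sketch of my own (using sympy's `rem` to reduce modulo $x^2+1$): multiplying the normal forms $a+bx$ and $c+dx$ in $\Bbb R[x]/(x^2+1)$ reproduces the pair-multiplication rule $(a,b)(c,d)=(ac-bd,\,ad+bc)$.

```python
from sympy import symbols, rem, expand

a, b, c, d, x = symbols('a b c d x')

# Least-degree representatives ("normal forms") in R[x]/(x^2 + 1):
p, q = a + b*x, c + d*x

# Multiply, then reduce modulo x^2 + 1 to recover the normal form:
r = rem(expand(p*q), x**2 + 1, x)

# The coefficients are exactly Hamilton's rule (a,b)(c,d) = (ac - bd, ad + bc):
assert expand(r - ((a*c - b*d) + (a*d + b*c)*x)) == 0
```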

Solution 2:

Vinberg implicitly defines a relation, which we'll call $\sim_1$:

$(a_1,b_1)\sim_1 (a_2,b_2)$ if there exists $c\in A\setminus \{0\}$ such that $a_1c=a_2$ and $b_1c=b_2.$

This is not an equivalence relation. ($\sim_1$ is actually a pre-order.)

Vinberg shows in the prior discussion that $\sim_1$ has the property:

Lemma 1: If $(a_1,b_1)\sim_1(a_2,b_2)$ then $(a_1,b_1)\sim (a_2,b_2)$

and also the property:

Lemma 2: $(a_1,b_1)\sim (a_2,b_2)$ if and only if there exists $(a_3,b_3)$ such that $(a_1,b_1)\sim_1 (a_3,b_3)$ and $(a_2,b_2)\sim_1 (a_3,b_3).$

Those two properties are the key. (For instance, in $\Bbb Z$ one has $(2,4)\sim(3,6)$, and indeed both $(2,4)\sim_1(6,12)$ and $(3,6)\sim_1(6,12)$.)

Now Vinberg is saying we only need to show:

Lemma 3: If $p\sim_1 p_1$, then for any $q$: $$\begin{align}p+q&\sim p_1+q\text{ and }\\ q+p&\sim q+p_1,\end{align}\tag{1}$$

and similarly for multiplication.

From Lemma 3 we prove the general case:

Theorem: If $p\sim p_1$ and $q\sim q_1$ then $p+q\sim p_1+q_1.$

Proof: By Lemma 2, there exist $p_2,q_2$ such that $p\sim_1 p_2,\ p_1\sim_1 p_2,\ q\sim_1 q_2,\ q_1\sim_1 q_2.$

Then, applying (1) twice, we have $$p+q\sim p_2+q\sim p_2+q_2,$$ and so, by transitivity of $\sim$, $p+q\sim p_2+q_2.$

Likewise, we have $p_1+q_1\sim p_2+q_2.$

Hence, by symmetry and transitivity of $\sim$, we have shown $p+q\sim p_1+q_1.$

The same works for multiplication.
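
As a sanity check of the Theorem (a Python sketch of my own, not part of the argument), one can test both operations over $\Bbb Z$, generating equivalent pairs as scalings of a common pair, exactly as Lemma 2 suggests:

```python
import random

def equiv(p, q):
    """(a1,b1) ~ (a2,b2)  iff  a1*b2 == a2*b1."""
    return p[0]*q[1] == q[0]*p[1]

def add(p, q):
    return (p[0]*q[1] + q[0]*p[1], p[1]*q[1])

def mul(p, q):
    return (p[0]*q[0], p[1]*q[1])

rng = random.Random(0)
for _ in range(1000):
    a, u = rng.randint(-9, 9), rng.randint(-9, 9)
    b, v = rng.randint(1, 9), rng.randint(1, 9)         # nonzero second entries
    c, d, e, f = (rng.randint(1, 9) for _ in range(4))  # nonzero scalings
    p, p1 = (c*a, c*b), (d*a, d*b)   # p ~ p1: both scale the common pair (a,b)
    q, q1 = (e*u, e*v), (f*u, f*v)   # q ~ q1
    assert equiv(add(p, q), add(p1, q1))
    assert equiv(mul(p, q), mul(p1, q1))
```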


It is easier to show the stronger statement:

For $p\sim_1 p_1$ and any $q$, $$\begin{align}p+q&\sim_1 p_1+q\text{ and }\\ q+p&\sim_1 q+p_1,\end{align}\tag{1'}$$

and then deduce Lemma 3 from (1') using Lemma 1.
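
A mechanical check of (1') (again a sympy sketch of my own): if $p_1 = c\cdot p$ componentwise, then $p_1+q = c\cdot(p+q)$ componentwise, so the same witness $c$ works for the sum.

```python
from sympy import symbols, expand

a, b, u, v, c = symbols('a b u v c')

def add(p, q):
    """Pair addition: (x1,y1) + (x2,y2) = (x1*y2 + x2*y1, y1*y2)."""
    (x1, y1), (x2, y2) = p, q
    return (x1*y2 + x2*y1, y1*y2)

p, q = (a, b), (u, v)
p1 = (c*a, c*b)                        # p ~_1 p1, witnessed by c

lhs = add(p1, q)                       # p1 + q
rhs = tuple(c*t for t in add(p, q))    # c * (p + q), componentwise

# The same witness c shows (p + q) ~_1 (p1 + q):
assert all(expand(s - t) == 0 for s, t in zip(lhs, rhs))
```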