Intuition for $\,c\mid a,b \iff c\mid \gcd(a,b)$ and the Bezout GCD identity

The proof of the linked GCD associativity property follows immediately from this basic property:

$\rm\qquad\quad\ \ d\mid (a_1,a_2) \iff d\mid a_1,a_2\quad\ [\color{#c00}{UP} =$ Universal Property of GCD] $ $ (proof below)

Hence $\rm\ \ d\mid (a_1,a_2),a_3,a_4,\ldots\!\!\overset{\color{#c00}{UP}}\iff d\mid a_1,a_2,\,\ d\mid a_3,a_4,\ldots\!\!\iff d\mid a_1,a_2,a_3,a_4,\ldots$

Therefore $\rm\ (a_1,a_2),a_3,a_4,\ldots\,$ and $\rm\,\ a_1,a_2,a_3,a_4\ldots\,$ have the same set $\rm\,S\,$ of common divisors $\,\rm d,\,$ hence they have the same greatest common divisor $\rm (= max\ S).\quad $ QED

Hence the associativity of GCD boils down to the associativity of AND (which is implicit in the above, i.e. $\rm\ d\mid a,\,b,\ldots\,$ means $\rm\,d\mid a\,$ AND $\rm\,d\mid b,\ldots).$
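
For a concrete check (numbers chosen only for this illustration): with $\rm\,a_1,a_2,a_3 = 12,18,30,\,$ the common divisors of $\rm\,(12,18),30,\,$ i.e. of $\,6,30,\,$ and the common divisors of $\,12,18,30\,$ are the same set $\,\{1,2,3,6\},\,$ hence

$$\gcd(\gcd(12,18),30)\,=\,\gcd(6,30)\,=\,6\,=\,\gcd(12,18,30)$$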


Lemma $\ \ d\mid a,b,c\iff d\mid (a,b,c)\ \ \ $ [GCD Universal Property]

${\bf Proof}\quad\ d\mid a,b,c\,\Rightarrow\, d\mid (a,b,c) = ia\!+\!jb\!+\!kc,\ $ for some $\, i,j,k\in\Bbb Z,\,$ by Bezout.

$\qquad\qquad\, d\mid (a,b,c)\Rightarrow d\mid a,b,c\ $ by $\ d\mid (a,b,c)\mid a,b,c\,$ and transitivity of $ $ "divides".
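
As a quick concrete instance (not needed for the proof): $\,(6,10,15) = 1 = 1\cdot 6+1\cdot 10-1\cdot 15,\,$ so any $\,d\mid 6,10,15\,$ divides this linear combination, i.e. $\,d\mid 1,\,$ exactly as in the first direction above.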

Remark $ $ See my other answer here for a conceptual proof of the Bezout identity. In more general domains the Bezout identity may not hold (e.g. in $\,\Bbb Z[x]\,$ and $\,\Bbb Q[x,y]),\,$ so the above proof does not work. But both are UFDs, where we can instead use prime factorizations to prove the GCD universal property. The proof essentially boils down to the universal property of $\,\min,\,$ applied to the exponents of the prime powers in the unique prime factorizations of the integers, i.e.

$$p^{\large i}\mid p^{\large j},p^{\large k}\iff i\le j,k\iff i\le \min(j,k)\iff p^{\large i}\mid p^{\large\min(j,k)}\!=\gcd(p^{\large j},p^{\large k})$$
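
For instance (a concrete case, chosen only for illustration): applying this prime-by-prime to $\,12 = 2^2\cdot 3\,$ and $\,18 = 2\cdot 3^2\,$ gives $\,d\mid 12,18\iff d\mid 2^{\min(2,1)}3^{\min(1,2)} = 6 = \gcd(12,18).$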

More generally, in gcd-domains the universal property is actually taken as the definition of a GCD (and dually for LCM) - follow the above link for further discussion.


I address your other questions in a separate answer.


Key Idea $\ $ Sets of positive integers closed under subtraction $(>0)$ have a special form, namely

Lemma $\ $ Suppose a set $\,\rm S\,$ of positive integers satisfies $\rm\ n, m\: \in\, S \, \Rightarrow\, n-m\, \in\, S\,$ for all $\rm \,n>m.\,$ Then every element of $\rm\:S\:$ is a multiple of the least element $\rm\:\ell\in S.$

Proof $\bf\, 1$ $\, $ If not, there is a least nonmultiple $\rm\:n\in S\:$ of $\rm\:\ell,\:$ contra $\rm\:n-\ell \in S\:$ is a smaller nonmultiple of $\rm\:\ell.$

Proof $\bf\, 2$ $\rm\ \ S\,$ closed under subtraction $\rm\:\Rightarrow\:S\,$ closed under remainder (mod), when the remainder is $\ne 0,$ since mod may be computed by repeated subtraction, i.e. $\rm\: a\ mod\ b\, =\, a - k\, b\, =\, a\!-\!b\!-\!b\!-\cdots -\!b.\,$ Hence $\rm\:n\in S\:$ $\Rightarrow$ $\rm\: n\ mod\ \ell = 0,\:$ else the remainder is $\rm\,\in S\,$ and smaller than $\rm\:\ell,\:$ contra minimality of $\rm\:\ell.$
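
Below is a minimal Python sketch (my own illustration, not part of the answer) of the repeated-subtraction computation of mod used in Proof $2;$ the name `subtractive_mod` and the restriction to positive integers are just assumptions for the example.

```python
def subtractive_mod(a, b):
    """Compute a mod b for positive integers a, b by repeated
    subtraction, i.e. a mod b = a - b - b - ... - b (as in Proof 2)."""
    while a >= b:   # each nonzero intermediate difference stays in any
        a -= b      # set S closed under (positive) subtraction
    return a        # the remainder: if nonzero and a, b lie in S, it lies in S

# e.g. subtractive_mod(23, 7) == 23 % 7 == 2
```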


The above property yields a simple conceptual proof of Bezout's identity for the gcd.

The set $\rm\,S\,$ of all integers of the form $\rm\,a\,x + b\,y,\,\ x,y\in \mathbb Z,\,$ is closed under subtraction so, by the above Lemma, all positive $\rm\,n\in S\,$ are divisible by $\rm\,d = $ least positive element of $\rm\,S,\,$ so $\rm\,a,b\in S\,$ $\Rightarrow$ $\rm\,d\mid a,b.\,$ So $\rm\,d\,$ is a common divisor of $\rm\,a,b,\,$ necessarily greatest, $ $ by $\rm\,c\,|\,a,b\,$ $\Rightarrow$ $\rm\,c\mid d\! =\! a\,x\!+\! b\,y\,$ $\Rightarrow$ $\rm\,c\le d.$ Thus any common divisor of $\rm\,a,b\,$ of linear form $\rm\,ax+by\,$ is necessarily a greatest one. The goal of the (extended) Euclidean algorithm is to search $\,\rm S\,$ for such a linear common divisor.
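
Concretely (an illustration only): for $\rm\,a,b = 12,18\,$ the set $\rm\,S\,$ contains $\,18\cdot 1 + 12\cdot(-1) = 6,\,$ its least positive element, and indeed every $\,12x+18y\,$ is a multiple of $\,6 = \gcd(12,18).$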

Remark $\ $ In a nutshell, two applications of induction yield the following inferences

$\rm S\ closed\ under\ {\bf subtraction} $
$\Rightarrow\:\rm S\ closed\ under\ {\bf mod} = remainder = repeated\ subtraction $
$\Rightarrow\:\rm S\ closed\ under\ {\bf gcd} = repeated\ mod\ (Euclid's\ algorithm)$

Interpreted constructively, this yields the extended Euclidean algorithm for the gcd. Namely, $ $ starting from the two elements of $\rm\,S\,$ that we know: $\rm\ a \,=\, 1\cdot a + 0\cdot b,\ \ b \,=\, 0\cdot a + 1\cdot b,\ $ we search for the least element of $\rm\,S\,$ by repeatedly subtracting elements to produce smaller elements of $\rm\,S\,$ (while keeping track of every element's linear representation in terms of $\rm\,a\,$ and $\rm\,b).\:$ This is essentially the subtractive form of the Euclidean algorithm (vs. the mod/remainder form).
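
As a constructive companion, here is a sketch of the subtractive form just described (my own code, not part of the answer; the function name and the assumption $\rm\,a,b>0\,$ are mine). Each element of $\rm\,S\,$ is carried along with its linear representation in terms of $\rm\,a\,$ and $\rm\,b$:

```python
def extended_gcd_subtractive(a, b):
    """Subtractive extended Euclidean algorithm for positive a, b.

    Keep two elements of S = { a*x + b*y } together with their
    coefficients, starting from a = 1*a + 0*b and b = 0*a + 1*b,
    and repeatedly subtract the smaller from the larger.  The last
    nonzero value is the least positive element of S, i.e. gcd(a, b),
    returned with its Bezout coefficients."""
    r0, x0, y0 = a, 1, 0          # invariant: r0 == a*x0 + b*y0
    r1, x1, y1 = b, 0, 1          # invariant: r1 == a*x1 + b*y1
    while r0 and r1:
        if r0 >= r1:
            r0, x0, y0 = r0 - r1, x0 - x1, y0 - y1
        else:
            r1, x1, y1 = r1 - r0, x1 - x0, y1 - y0
    return (r0, x0, y0) if r0 else (r1, x1, y1)

# e.g. extended_gcd_subtractive(12, 18) == (6, -1, 1), and 12*(-1) + 18*1 == 6
```

Replacing the repeated subtractions by a single mod step at each stage gives the usual remainder form, per the Note below.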

Note: in more general number systems enjoying Division with Remainder (i.e. Euclidean domains) it is no longer true that $\!\bmod\!$ can be computed by repeated subtraction, so in such rings the above descent is achieved by $\!\bmod\!$ (vs. subtraction), as in Proof $2;\,$ e.g. this is the case for polynomial rings over a field.