I'm taking an "Algebra for Computer Science" course, and the professor briefly touched upon an implementation of the Extended Euclidean algorithm that I can't seem to understand.
In previous algebra courses I learnt the following:
- Use the Euclidean algorithm
- Convert each equation from the form $a = qb + r$ to $r = a - qb$
- Starting from the final equation, of the form $\gcd(a, b) = a - qb$, substitute $b$ using the previous equation, then distribute and collect terms; repeat. The procedure ends once the form described in Bézout's identity is reached. $$\gcd(a, b) = \dots = ua + vb$$
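For reference, here is how I would code up this back-substitution method (my own Python sketch, not from any course; `egcd_backsub` is just a name I made up):

```python
def egcd_backsub(a, b):
    """Extended Euclid by back-substitution: run the plain Euclidean
    algorithm forward, recording every quotient, then substitute
    backwards to express gcd(a, b) as u*a + v*b.  Assumes a, b >= 0."""
    # Forward pass: record (a_i, b_i, q_i) for each step a_i = q_i*b_i + r_i.
    steps = []
    while b != 0:
        q, r = divmod(a, b)
        steps.append((a, b, q))
        a, b = b, r
    g = a  # the gcd is the last nonzero remainder
    # Backward pass: maintain the invariant g = u*b_i + v*r_i and use
    # r_i = a_i - q_i*b_i to rewrite it as g = v*a_i + (u - q_i*v)*b_i,
    # which is the same invariant one step earlier.
    u, v = 1, 0
    for _, _, q in reversed(steps):
        u, v = v, u - q * v
    return g, u, v

# e.g. egcd_backsub(240, 46) gives (2, -9, 47), and -9*240 + 47*46 = 2
```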
The new lecture notes instead describe a constructive process, based on the following property:
$$ au + bv = m \implies m = (qb + r)u + bv = ru + b(v+qu) $$
We then write a step of Euclid's algorithm as follows:
$$ \gcd(a = qb + r, b \mid u, v) \to \gcd(r, b \mid u, v + qu) \to \gcd (b, r \mid v + qu, u) $$
Until we get to $\gcd (m, 0 \mid l_1(u, v), l_2 (u, v))$ and solve the linear system:
$$ \begin{cases} l_1(u, v) = 1 \\ l_2(u, v) = 0 \end{cases} $$
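To make sure I'm reading the notes correctly, here is a literal Python sketch of this second process (names and details are my own guesses): each coefficient is kept as a linear form $cu + dv$ in symbolic $(u, v)$, stored as the pair $(c, d)$, and the final $2\times 2$ system is solved with Cramer's rule.

```python
def egcd_forms(a, b):
    """The lecture-notes variant (as I understand it): carry the
    coefficients as linear forms c*u + d*v in symbolic (u, v), stored
    as pairs (c, d), then solve l1(u,v) = 1, l2(u,v) = 0 at the end.
    Assumes a, b >= 0."""
    s, t = (1, 0), (0, 1)  # gcd(a, b | u, v): the forms u and v
    while b != 0:
        q, r = divmod(a, b)
        # gcd(a = q*b + r, b | s, t)  ->  gcd(b, r | t + q*s, s)
        a, b = b, r
        s, t = (t[0] + q * s[0], t[1] + q * s[1]), s
    g = a
    # Now at gcd(g, 0 | l1, l2).  Solve l1 = 1, l2 = 0 by Cramer's rule;
    # each step is unimodular, so det is +-1 and the division is exact.
    (c1, d1), (c2, d2) = s, t
    det = c1 * d2 - c2 * d1
    u = d2 // det    # u = (1*d2 - d1*0) / det
    v = -c2 // det   # v = (c1*0 - 1*c2) / det
    return g, u, v
```

On the examples I tried (e.g. $a = 240$, $b = 46$) this returns the same $(g, u, v)$ as the back-substitution method.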
What I don't understand is the connection between these two methods (or, really, the intuition for why the second one works), and why the second method should be "easier", at least for humans. I do see that recursively constructing a $2\times 2$ matrix and solving a tiny linear system is very simple for a computer, but the professor claimed this method is simply "easier".