What is Cramer's rule used for?
Cramer's rule appears in introductory linear algebra courses without comment on its utility. It is a flaw in our system of pedagogy that one learns the answers to questions of this kind only if one happens to take a course on something in which the topic is used.
On the discussion page of Wikipedia's article on Cramer's rule, we find this detailed indictment on charges of uselessness, posted in December 2009.
But in the present day, we find in the article itself the assertion that it is useful for:
- solving problems in differential geometry;
- proving a theorem in integer programming;
- deriving the general solution to an inhomogeneous linear differential equation by the method of variation of parameters;
- (a surprise) solving small systems of linear equations. This is the use it superficially purports to have in linear algebra texts, but elementary row operations turn out to be what is actually used in practice (a small sketch comparing the two follows this list).
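To make that last item concrete, here is a minimal sketch of Cramer's rule on a small system, assuming NumPy; the matrix `A`, the vector `b`, and the helper name `cramer_solve` are made-up for illustration, and `np.linalg.solve` (which uses elimination rather than determinants) is shown for comparison:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("matrix is singular; Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b              # replace the i-th column by b
        x[i] = np.linalg.det(A_i) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))        # [0.8 1.4]
print(np.linalg.solve(A, b))     # same answer, computed by elimination
```

For an $n \times n$ system this costs $n+1$ determinant evaluations, so it is only sensible for very small $n$, which matches the point above.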
At some point in its history, the Wikipedia article asserted that it's used in proving the Cayley–Hamilton theorem, but that's not there now. To me the Cayley–Hamilton theorem has always been a very memorable statement, but at this moment I can't recall anything about the proof.
What enlightening expansions on these partial answers to this question can the present company offer?
One place where Cramer's rule is often useful from a theoretical point of view: if a square matrix $M$ with entries in a commutative ring $R$ has $\det(M)$ invertible in $R$, then $M$ is invertible and the inverse $M^{-1}$ still has entries in $R$. For example, a square matrix with integer entries has an inverse with integer entries if and only if its determinant is $\pm 1$. I do not think this is so obvious if we pass to elementary row operations in the field of fractions of $R$ (in the case that $R$ is an integral domain), although in any (correct) computation an integral inverse will emerge at the end if the determinant is a unit of the ring.
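Here is a minimal sketch of that integer-entries point, assuming SymPy; the matrix `M` is a made-up example with determinant $1$:

```python
from sympy import Matrix

M = Matrix([[2, 1], [3, 2]])   # det = 2*2 - 1*3 = 1, a unit in Z
adjM = M.adjugate()            # transpose of the cofactor matrix
M_inv = adjM / M.det()         # Cramer: det(M) * M^{-1} = adj(M)
print(M.det())                 # 1
print(M_inv)                   # Matrix([[2, -1], [-3, 2]]) -- integer entries
print(M * M_inv)               # the identity matrix
```

Since the adjugate is built from minors of $M$, its entries stay in $R$; dividing by a unit determinant keeps them there.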
Cramer's rule is helpful when proving "Jacobi's formula", a useful identity for the derivative of a determinant: $$\frac{\mathrm{d}}{\mathrm{d}t} \det A(t) = \operatorname{tr} \left (\operatorname{adj}(A(t)) \, \frac{\mathrm{d}A(t)}{\mathrm{d}t}\right )$$
For further information, have a look at Wikipedia. Jacobi's formula is used in vector analysis, for example in the proof of the divergence theorem; on page 2 of this document about the divergence theorem, you can see how it is done in detail.
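As a quick sanity check of Jacobi's formula, here is a rough numerical sketch, assuming NumPy; the parametrized matrix $A(t)$ below is a made-up example:

```python
import numpy as np

def A(t):
    return np.array([[np.cos(t), t],
                     [t**2,      np.exp(t)]])

def dA(t):
    """Entrywise derivative of A(t)."""
    return np.array([[-np.sin(t), 1.0],
                     [2.0 * t,    np.exp(t)]])

t, h = 0.7, 1e-6
# Left-hand side: d/dt det A(t), via a central finite difference.
lhs = (np.linalg.det(A(t + h)) - np.linalg.det(A(t - h))) / (2 * h)
# Right-hand side: tr(adj(A) dA/dt), using adj(A) = det(A) * A^{-1}.
adj = np.linalg.det(A(t)) * np.linalg.inv(A(t))
rhs = np.trace(adj @ dA(t))
print(lhs, rhs)   # the two values agree to finite-difference accuracy
```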
ADDED LATER (May 2019):
Another application of Cramer's rule, or at least of the cofactor formula for the inverse, appears in the calculus of variations, where it is used to prove that the determinant has divergence structure, and hence that one can pass to weak limits. This is rather magical, as it is unexpected, and it is most certainly not an introductory example.
Namely, the formula
$$\mathrm{det}(A) A^{-1} = \mathrm{cof}(A)^{T},$$
with $\mathrm{cof}(A)$ being the cofactor matrix and $T$ denoting the transpose, is the starting point of the proof of the following very helpful lemma:
Assume $n < q < \infty$ and $u_k \rightharpoonup u$ weakly in $W^{1,q}(U;\mathbb{R}^n)$.
Then $$\mathrm{det}(Du_k) \rightharpoonup \mathrm{det}(Du) \text{ weakly in } L^{\tfrac{q}{n}}(U).$$
This lemma and its proof can be found in Section 8.2.4 of Evans' book "Partial Differential Equations".
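For the curious, the divergence structure itself comes from the standard fact that the rows of the cofactor matrix are divergence-free (the Piola identity), $$\sum_{j=1}^{n} \partial_{x_j} (\mathrm{cof}\, Du)_{ij} = 0,$$ so that expanding $\mathrm{det}(Du)$ along the $i$-th row via the cofactor formula gives, for each fixed $i$, $$\mathrm{det}(Du) = \sum_{j=1}^{n} (\partial_{x_j} u^i)\,(\mathrm{cof}\, Du)_{ij} = \sum_{j=1}^{n} \partial_{x_j}\!\left(u^i\,(\mathrm{cof}\, Du)_{ij}\right),$$ which is exactly the divergence form that allows passing to weak limits.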