Shortcut for finding the inverse of a matrix

Solution 1:

For a 2x2 matrix, the inverse is: $$ \left(\begin{array}{cc} a&b\\ c&d \end{array}\right)^{-1} = {1 \over a d - b c} \left(\begin{array}{rr} d&-b\\ -c&a \end{array}\right)~,~~\text{ where } ad-bc \ne 0. $$

Just swap $a$ and $d$, negate $b$ and $c$, then divide everything by the determinant $a d - b c$.

That's really the most straightforward 'trick': just memorize that pattern.
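
For instance, applying the pattern to a concrete matrix (picked so the determinant works out to $1$):

$$ \left(\begin{array}{cc} 2&1\\ 5&3 \end{array}\right)^{-1} = {1 \over 2\cdot 3 - 1\cdot 5} \left(\begin{array}{rr} 3&-1\\ -5&2 \end{array}\right) = \left(\begin{array}{rr} 3&-1\\ -5&2 \end{array}\right). $$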

For 3x3, it's a lot more complicated, but there is a pattern. As usual, compute the determinant first (kind of a pain, but surely you already know the pattern to compute that quickly).
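
For reference, with the entries labeled $a$ through $i$ as in the matrix below, that determinant by cofactor expansion along the first row is

$$ \det = a(e i - f h) - b(d i - f g) + c(d h - e g). $$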

$$ \left(\begin{array}{ccc} a&b&c\\ d&e&f\\ g&h&i \end{array}\right)^{-1} = {1 \over {\rm{det}}} \left(\begin{array}{rrr} e i - f h&-(b i - c h)&b f - c e\\ -(d i - f g)&a i - c g&-(a f -c d)\\ d h - e g&-(a h - b g)&a e - b d \end{array}\right). $$ The pattern is that each entry is

  • the determinant of the submatrix obtained by removing that row and column. E.g., for row 2, column 3 ($f$'s position), that determinant is $a h - b g$: $$ \det\left(\begin{array}{cc} a&b\\ g&h \end{array}\right) = a h - b g $$

  • then multiply in the checkerboard pattern (i.e. position $(1,1)$ is positive, $(1,2)$ is negative, and so on; mathematically, multiply by $(-1)^{r+c}$),

  • then transpose.

See? There's a pattern, but I feel it's about the same symbolic complexity as just doing it brute-force, Gaussian-elimination style. (A code sketch of the recipe follows below.)
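
To make the three steps concrete, here is a minimal Python sketch of the minors / checkerboard / transpose recipe (the function names `det2` and `inverse3x3` are mine, purely illustrative):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def inverse3x3(m):
    """Invert a 3x3 matrix (list of three 3-element lists) via
    minors, checkerboard signs (cofactors), and a final transpose."""
    # Steps 1 and 2: cofactors, C[r][c] = (-1)**(r + c) * minor(r, c).
    cof = [[0.0] * 3 for _ in range(3)]
    for r in range(3):
        for c in range(3):
            rows = [i for i in range(3) if i != r]
            cols = [j for j in range(3) if j != c]
            minor = det2(m[rows[0]][cols[0]], m[rows[0]][cols[1]],
                         m[rows[1]][cols[0]], m[rows[1]][cols[1]])
            cof[r][c] = (-1) ** (r + c) * minor
    # The determinant falls out of a first-row cofactor expansion.
    det = sum(m[0][c] * cof[0][c] for c in range(3))
    if det == 0:
        raise ValueError("matrix is singular")
    # Step 3: transpose the cofactor matrix (adjugate), divide by det.
    return [[cof[c][r] / det for c in range(3)] for r in range(3)]
```

For example, `inverse3x3([[1, 2, 3], [0, 1, 4], [5, 6, 0]])` (a matrix with determinant $1$) returns `[[-24.0, 18.0, 5.0], [20.0, -15.0, -4.0], [-5.0, 4.0, 1.0]]`.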

Solution 2:

Your best bet is the Gauss-Jordan method: augment $A$ with the identity matrix and row-reduce $[\,A \mid I\,]$ until the left half becomes the identity; the right half is then $A^{-1}$.
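
In case a concrete sketch helps, here is a minimal Gauss-Jordan inversion in Python (the function name `gauss_jordan_inverse` is mine; a production implementation would use partial pivoting for numerical stability):

```python
def gauss_jordan_inverse(a):
    """Invert a square matrix by row-reducing [A | I] to [I | A^-1].
    Minimal sketch: swaps in a nonzero pivot when needed, but does no
    partial pivoting, so it is for illustration rather than production."""
    n = len(a)
    # Build the augmented matrix [A | I].
    aug = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    # The right half of [I | A^-1] is the inverse.
    return [row[n:] for row in aug]
```

As a sanity check, `gauss_jordan_inverse([[2, 1], [5, 3]])` returns `[[3.0, -1.0], [-5.0, 2.0]]`, matching the 2x2 shortcut from Solution 1.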

Solution 3:

So we want a way to solve $2 \times 2$ or $3 \times 3$ matrix systems as efficiently as possible. I think the route we want to take is Cramer's Rule for the $2 \times 2$ or $3 \times 3$ case. To state the $2 \times 2$ case, we will use the following:

For some coefficient matrix $A = \left[ \begin{array}{rr} a & b \\ c & d \end{array} \right]$,

$A^{-1}=\dfrac{1}{ad-bc} \cdot \left[ \begin{array}{rr} d & -b \\ -c & a \end{array} \right]$, provided $ad-bc \ne 0$ $~~~\Big($i.e., $\det(A) \ne 0\Big)$.

For the $3 \times 3$ case, we will denote the solution as the following:

Writing ${\bf a}_{1}, {\bf a}_{2}, {\bf a}_{3}$ for the columns of $\bf A$,

$x_{1} = \dfrac{|{\bf b}~~{\bf a}_{2}~~{\bf a}_{3}|}{|\bf{A}|}~~,$

$x_{2} = \dfrac{|{\bf a}_{1}~~{\bf b}~~{\bf a}_{3}|}{|\bf{A}|}~~,$

$x_{3} = \dfrac{|{\bf a}_{1}~~{\bf a}_{2}~~{\bf b}|}{|\bf{A}|}.$

This comes from the matrix equation: ${\bf{A\vec{x}}}={\bf{\vec{b}}},~~~$ where $\vec{x}=[x_{1}~~x_{2}~~x_{3}]^{T}$.

For the elements of the matrix $A = \left[\begin{array}{rrr} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{array} \right],~~$ this extends to the solutions $x_{1},~x_{2},~x_{3}$

as follows, knowing that ${\bf|{A}|} = |a_{ij}| \ne 0$:

$x_{1} = \dfrac{1}{|{\bf{A}}|} \left|\begin{array}{rrr} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{array} \right|$,

$x_{2} = \dfrac{1}{|{\bf{A}}|} \left|\begin{array}{rrr} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{array} \right|$,

$x_{3} = \dfrac{1}{|{\bf{A}}|} \left|\begin{array}{rrr} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{array} \right|$.
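
If code helps, here is a small Python sketch of those three quotients (the helper names `det3` and `cramer3` are my own, purely illustrative):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion on row 1."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(a, b):
    """Solve A x = b for a 3x3 A via Cramer's rule: x_i is the
    determinant of A with column i replaced by b, over det(A)."""
    d = det3(a)
    if d == 0:
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    xs = []
    for i in range(3):
        ai = [row[:] for row in a]   # copy A
        for r in range(3):
            ai[r][i] = b[r]          # replace column i with b
        xs.append(det3(ai) / d)
    return xs
```

Note that this evaluates four $3 \times 3$ determinants, which is fine at this size but scales poorly to larger systems.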

An alternative way of doing this would be to use row-reduction methods, known as either Gaussian Elimination (ref, row echelon form) or Gauss-Jordan Elimination (rref, reduced row echelon form).

I hope this helped out. Let me know if there is anything you do not understand.

Thanks.

Good Luck.