Understanding determinant $=0$
I am playing around with determinants to see if I can get a better grasp of them, and I would appreciate some thoughts.
$$A=\begin{pmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{pmatrix}$$ with column vectors $\vec{a},\vec{b},\vec{c}$.
If $\det(A)=0$, then $\vec{a},\vec{b},\vec{c}$ are linearly dependent, so there is a nontrivial relation $\alpha\vec{a}=\beta\vec{b}+\gamma\vec{c}$ (not all of $\alpha,\beta,\gamma$ zero). It also means that $A\vec{x}=0$ has a nontrivial solution $$\vec{x_0}=\begin{pmatrix}\alpha\\-\beta\\-\gamma\end{pmatrix}$$ and so does every vector $t\vec{x_0},\ t\in\mathbb{R}$.
Suppose we have an equation $A\vec{x}=\vec{y}$ with $\vec{x_1}$ as a solution; then every $\vec{x}=\vec{x_1}+t\vec{x_0}$ is also a solution. This explanation makes it clear in my mind why, when $\det(A)=0$, one solution implies infinitely many solutions in any dimension.
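To make this concrete, here is a small numerical sketch of the above (the vectors $\vec{b},\vec{c}$ and the coefficients $\alpha,\beta,\gamma$ are just made-up example values):

```python
import numpy as np

# Made-up example: pick b and c freely and force a = (beta*b + gamma*c)/alpha,
# so the columns are linearly dependent by construction.
b = np.array([1.0, 2.0, 0.0])
c = np.array([0.0, 1.0, 3.0])
alpha, beta, gamma = 1.0, 2.0, -1.0
a = (beta * b + gamma * c) / alpha            # alpha*a = beta*b + gamma*c

A = np.column_stack([a, b, c])
print(np.linalg.det(A))                       # ~ 0 (up to rounding)

# The nontrivial null vector from above: x0 = (alpha, -beta, -gamma)
x0 = np.array([alpha, -beta, -gamma])
print(A @ x0)                                 # ~ (0, 0, 0)

# If x1 solves A x = y, then so does x1 + t*x0 for every t.
x1 = np.array([0.0, 1.0, 1.0])                # pick any x1 ...
y = A @ x1                                    # ... and define y so that x1 is a solution
for t in (0.0, 1.0, -3.5):
    print(np.allclose(A @ (x1 + t * x0), y))  # True for every t
```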
One of $\alpha,\beta,\gamma$ must be nonzero; suppose $\alpha\ne0$. We could then write $$A=\begin{pmatrix}\beta_1b_1+\gamma_1c_1&b_1&c_1\\\beta_1b_2+\gamma_1c_2&b_2&c_2\\\beta_1b_3+\gamma_1c_3&b_3&c_3\end{pmatrix},\quad \beta_1=\frac{\beta}{\alpha},\ \gamma_1=\frac{\gamma}{\alpha}$$
Then $$A\vec{x}=\vec{y}\Leftrightarrow(\beta_1x_1+x_2)\vec{b}+(\gamma_1x_1+x_3)\vec{c}=\vec{y}$$
This is a system in which two vectors in three dimensions describe a third, which can therefore only lie in the plane spanned by the two vectors $\vec{b}$ and $\vec{c}$.
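A quick numerical check of this identity, reusing made-up values for $\vec{b},\vec{c},\beta_1,\gamma_1$ (any choice works):

```python
import numpy as np

b = np.array([1.0, 2.0, 0.0])
c = np.array([0.0, 1.0, 3.0])
beta1, gamma1 = 2.0, -1.0                     # beta/alpha, gamma/alpha from before
A = np.column_stack([beta1 * b + gamma1 * c, b, c])

x = np.array([0.7, -1.2, 2.5])                # arbitrary x
lhs = A @ x
rhs = (beta1 * x[0] + x[1]) * b + (gamma1 * x[0] + x[2]) * c
print(np.allclose(lhs, rhs))                  # True: A x always lies in span{b, c}
```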
This makes sense in two and three dimensions, but what I have trouble visualising is why, in more than three dimensions, you need as many linearly independent vectors as the dimension of the space to describe all vectors. It makes sense, but in an ephemeral way that is not completely satisfying to me.
In more than three dimensions I start to view the equation $A\vec{x}=\vec{y}$ as a system of equations instead of a geometric picture, so that is where my thoughts go: you need as many equations as variables for a single solution to be possible. But how do you state this cleanly and intuitively?
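One way I can phrase the condition numerically (a sketch; the matrices below are toy examples): a single solution is only possible when the rank of $A$, i.e. the number of linearly independent columns, equals the number of variables.

```python
import numpy as np

A_full = np.array([[2.0, 1.0, 0.0],
                   [0.0, 1.0, 1.0],
                   [1.0, 0.0, 3.0]])          # three independent columns
A_def  = np.array([[ 2.0, 1.0, 0.0],
                   [ 1.0, 1.0, 1.0],
                   [-3.0, 0.0, 3.0]])         # first column = 2*b - c (dependent)

for A in (A_full, A_def):
    rank = np.linalg.matrix_rank(A)
    print(rank, rank == A.shape[1])           # True only when rank equals #variables
```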
Edit:
I think the answer I am seeking is something like: if you have fewer vectors than the number of dimensions, or if the vectors are linearly dependent, then Gaussian elimination can never produce an upper triangular matrix without zeros on the diagonal.
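A quick numerical illustration of that idea, using SciPy's LU factorization as a stand-in for Gaussian elimination (the matrices are the same toy examples as above): with linearly dependent columns, a zero shows up on the diagonal of the upper triangular factor.

```python
import numpy as np
from scipy.linalg import lu

A_indep = np.array([[2.0, 1.0, 0.0],
                    [0.0, 1.0, 1.0],
                    [1.0, 0.0, 3.0]])
A_dep   = np.array([[ 2.0, 1.0, 0.0],
                    [ 1.0, 1.0, 1.0],
                    [-3.0, 0.0, 3.0]])        # linearly dependent columns

for A in (A_indep, A_dep):
    _, _, U = lu(A)                           # U is the upper triangular factor
    print(np.round(np.diag(U), 10))           # dependent case: a (near-)zero pivot
```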
Solution 1:
For linear maps: you can think of reach, instead of trying to imagine spaces. Since you want some intuition, I will walk you through the way I see the question.
Check if the following makes sense to you for linear maps:
1. The dimension of your image cannot be bigger than the dimension of your starting space.
2. In matrix notation, the number of rows corresponds to the dimension of your end space (where the images live), and the number of columns corresponds to the dimension of your starting space.
3. The number of equations is "the same" as the number of rows you have. Variables can be interpreted as the coefficients that say how much each basis vector (in the starting space) contributes, and you have as many basis vectors as the dimension of your space.
4. If your map is surjective, you can reach every vector in the end space (hence also every vector in the image).
5. If your map is injective, you can only reach every vector once. Now think of a map that isn't injective: by dimensionality, if your starting space is "bigger" (has a larger dimension), then if you start mapping every basis vector in your starting space to some vector in the image space, you will have exhausted the basis vectors in your image space before those in your starting space, so some have to go to waste (injective $\implies$ Ker$(A) = 0$).
6. $Ax=y$ means $y$ lies in the image of $A$. If $y$ isn't in the image of $A$, then there is no solution (a small numerical check of this last point follows the list).
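Here is that check of point 6, using the standard criterion that $Ax=y$ is solvable exactly when appending $y$ as an extra column does not increase the rank (the matrix and vectors are arbitrary examples):

```python
import numpy as np

A = np.array([[ 2.0, 1.0, 0.0],
              [ 1.0, 1.0, 1.0],
              [-3.0, 0.0, 3.0]])              # rank 2: the image is a plane in R^3

y_in  = A @ np.array([1.0, 2.0, 3.0])         # in the image by construction
y_out = np.array([1.0, 0.0, 0.0])             # not in that plane, so no solution exists

for y in (y_in, y_out):
    solvable = np.linalg.matrix_rank(np.column_stack([A, y])) == np.linalg.matrix_rank(A)
    print(solvable)                           # True, then False
```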
Now, putting all of this together:
- If $y$ lives in the image of $A$, then it has to live in a subspace whose dimension is at most the dimension of the starting space (because of 1.). Knowing that, because of 2. the number of linearly independent rows (which equals the dimension of the image) must be smaller than or equal to the number of columns, which because of 3. is the number of variables. Therefore, in order to have a solution, your image needs to have at most the dimension of your starting space, which by what I have just written means the number of linearly independent rows can be at most the number of variables.
- On the other hand, to have a unique solution, you need to make sure that you reach your image vector only once, which because of 5. means you cannot let any basis vector go to waste, which itself means that you need at most as many basis vectors in your starting space as you have basis vectors in your end space. By 2., that means the number of columns can be at most the number of rows. Using 3., that translates into having at most as many variables as you have equations.
Finally, if you want a unique solution, you can see that you want both of these points to be satisfied simultaneously.
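Putting both points into one sketch (the helper `describe` is hypothetical, purely for illustration): existence needs $y$ in the image, and uniqueness additionally needs as many independent equations as variables.

```python
import numpy as np

def describe(A, y):
    """Classify A x = y using ranks (a sketch of the two conditions above)."""
    n_vars = A.shape[1]
    r_A    = np.linalg.matrix_rank(A)
    r_Ay   = np.linalg.matrix_rank(np.column_stack([A, y]))
    if r_Ay > r_A:
        return "no solution (y is not in the image)"
    if r_A < n_vars:
        return "infinitely many solutions (fewer independent equations than variables)"
    return "unique solution"

A_tall = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 equations, 2 variables
A_wide = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])     # 2 equations, 3 variables

print(describe(A_tall, np.array([1.0, 2.0, 3.0])))        # unique solution
print(describe(A_tall, np.array([1.0, 2.0, 4.0])))        # no solution
print(describe(A_wide, np.array([1.0, 2.0])))             # infinitely many solutions
```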
That's a long one; welcome inside my head.
Now, regarding the determinant. A good way of gaining intuition is to think of the determinant as telling you how your transformation deforms your space. In 2 dimensions, you can see it as affecting area. (Technically, a surface is something that lives one dimension lower than your space; you systematically think of $\mathbb{R}^2$ as a surface embedded in $\mathbb{R}^3$, and it feels strange to talk about volume in $\mathbb{R}^2$ or $\mathbb{R}$.) But a more rigorous way is to say that the determinant tells you how your transformation changes volume (and whether it turns it inside out).
Now, a surface has no volume! And intuitively, we can say something like: objects that have the same dimension as your space have volume, right? (Like a sphere is 3-dimensional and has volume in $\mathbb{R}^3$, but not in $\mathbb{R}^4$...)
You need the same start- and end-space dimensions for your map in order to compute a determinant (i.e. a square matrix). Now, if your map isn't injective, then by 2. and 5. some vectors have to go to waste. That means your image has a smaller dimension, so the image can't have a volume anymore, since it's an object of lower dimension than your space. Hence the map makes volume disappear, and therefore the determinant is 0.
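A minimal 2-dimensional sketch of that picture (the helper `quad_area` and the matrices are made up for illustration): the image of the unit square has area $|\det M|$, and a singular $M$ flattens it to area 0.

```python
import numpy as np

def quad_area(pts):
    """Shoelace formula for the area of a quadrilateral given in order."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)  # unit square, area 1

M_regular  = np.array([[2.0, 1.0],
                       [0.0, 3.0]])           # det = 6
M_singular = np.array([[1.0, 2.0],
                       [2.0, 4.0]])           # det = 0: columns are parallel

for M in (M_regular, M_singular):
    image = square @ M.T                      # map the square's corners
    print(quad_area(image), abs(np.linalg.det(M)))   # area of the image == |det M|
```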
P.S.: The dimension of the image is not the same as the number of rows. Adding rows adds dimension to the end space, but not (necessarily) to the image. That means that the only rows that matter for solving your equation are the linearly independent ones. :)
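A tiny sketch of that P.S. (arbitrary example matrix): stacking extra rows onto a matrix with 2 columns never pushes the dimension of the image above 2.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                    # 3 rows, but only 2 columns
A_more_rows = np.vstack([A, [[2.0, 3.0]], [[5.0, -1.0]]])  # now 5 rows

print(np.linalg.matrix_rank(A))               # 2
print(np.linalg.matrix_rank(A_more_rows))     # still 2: the image is still a plane
```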