Is $1 = [1] = [[1]] = [[[1]]] = \cdots$?

Is the following true, where $[a]$ denotes a $1 \times 1$ matrix containing the object $a$?

$$ \begin{bmatrix} 2 \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 2 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} \begin{bmatrix} 2 \end{bmatrix} \end{bmatrix} \end{bmatrix} = \cdots $$

I am curious because I am writing a function for adding matrices, and there is no rule forbidding the elements of a matrix from themselves being matrices; so I want to know whether

$$ \begin{bmatrix} \begin{bmatrix} \begin{bmatrix} 2 \end{bmatrix} \end{bmatrix} \end{bmatrix} + \begin{bmatrix} \begin{bmatrix} 2 \end{bmatrix} \end{bmatrix} = 4 $$

after canonicalization or not. I also want to know whether this holds for direct sums:

$$ \begin{bmatrix} \begin{bmatrix} \begin{bmatrix} 2\\ 4 \end{bmatrix} \end{bmatrix} \end{bmatrix} + \begin{bmatrix} \begin{bmatrix} 2\\ 4 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 4\\ 8 \end{bmatrix} $$
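For concreteness, here is a minimal sketch of what such canonicalization-plus-addition could look like, modeling matrices as nested Python lists. The names `canonicalize` and `madd` are hypothetical, purely illustrative:

```python
def canonicalize(x):
    """Strip redundant 1x1 wrappers: [[2]] -> 2, [[[2, 4]]] -> [2, 4].
    Assumes a matrix is a (possibly nested) Python list; a scalar is
    anything that is not a list. Hypothetical helper, not a library call."""
    if isinstance(x, list):
        x = [canonicalize(e) for e in x]
        if len(x) == 1:            # a 1x1 "box" collapses to its content
            return x[0]
    return x

def madd(a, b):
    """Add two values after canonicalization, recursing into matrices."""
    a, b = canonicalize(a), canonicalize(b)
    if isinstance(a, list) and isinstance(b, list):
        if len(a) != len(b):
            raise ValueError("shape mismatch")
        return [madd(x, y) for x, y in zip(a, b)]
    if not isinstance(a, list) and not isinstance(b, list):
        return a + b               # two plain scalars
    raise TypeError("cannot add a scalar to a non-trivial matrix")

assert madd([[[2]]], [[2]]) == 4              # first example above
assert madd([[[2, 4]]], [[2, 4]]) == [4, 8]   # second example above
```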

I am aware that this breaks the convenient "indexing" property, since A[0] will no longer make sense; but matrices are not required to support indexing anyway. We only get it for free from implementing them as arrays, so it is a non-issue.

I am curious whether there is any particular harm to the algebra if $1 = [1] = [[1]] = [[[1]]]$ that makes it unusable, or whether it just causes a minor nuisance.

Breakages so far, with severity:

  1. Indexing of a matrix (minor)

$[3]$ at index $0$ is $3$, but $3$ at index $0$ is undefined (indexing is not defined for scalars). It is also unclear what $[[3; 4]]$ at index $0$ should be: $[3; 4]$, which is what it would be in the current algebra, or $3$, which is what it would be in the proposed algebra.

This is not very severe, since matrices are not really arrays but bilinear maps, so indexing them is fairly naive to begin with. The property won't work unless the given matrix is canonicalized before being indexed (see the sketch after this list).

  2. Multiplication of a matrix by a scalar (minor / real damage)

There were some concerns over whether multiplication by a scalar becomes invalid, but my previous argument was itself invalid, so I removed it.
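Returning to the indexing breakage, the ambiguity can be shown directly in the same nested-list model as above (again purely illustrative):

```python
m = [[3, 4]]      # a 1x1 matrix whose single entry is the row [3, 4]

# Current algebra: indexing peels off exactly one layer of brackets.
assert m[0] == [3, 4]

# Proposed algebra: [[3, 4]] = [3, 4], so "m at 0" should arguably be 3.
# That only works if m is canonicalized *before* being indexed:
flat = [3, 4]     # the canonical form of m under the proposed identification
assert flat[0] == 3

# And 3 at 0 is simply undefined, since scalars are not indexable:
# (3)[0]  ->  TypeError: 'int' object is not subscriptable
```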


The question has evolved since my initial answer, so now I offer three:


Answer #1, to the question "Is there a difference between $3$ and $[3]$?":

The answer is yes.

$3$ is the number three.

$[3]$ is a nice wooden box, lined with velvet, holding the number three.

$[[3]]$ is a nice wooden box, lined with velvet, containing a smaller wooden box, which in turn holds the number three.

We have $2+3=5$, because we know how to add two numbers. We cannot calculate $[2]+3$, as we do not know how to add a number to a nice wooden box. It turns out that $[2]+[3]=[5]$, because we have special rules for adding two wooden boxes.
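Read as code, the boxes are just a wrapper type, and the "special rules" are operator overloads. A minimal Python sketch (the `Box` class is hypothetical, purely to illustrate the analogy):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """A 'nice wooden box' holding one value (illustrative only)."""
    content: object

    def __add__(self, other):
        # Special rule for adding two boxes: add contents, re-box the sum.
        if isinstance(other, Box):
            return Box(self.content + other.content)
        # No rule for adding a box to a bare number.
        return NotImplemented

assert 2 + 3 == 5                    # two numbers
assert Box(2) + Box(3) == Box(5)     # two boxes, by the special rule
# Box(2) + 3  ->  TypeError: unsupported operand type(s) for +
```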


Answer #2, to the question "Can we treat $3$ and $[3]$ as the same?":

We can, except that this will not be standard mathematics anymore. Other answers (and comments) have pointed out things that break. Not everything breaks, true.

We could, similarly, treat $3$ and $7$ as the same. This doesn't break all of mathematics, e.g. $2+2=4$ remains true; however, now $7-3=0$, i.e. $4=0$, and all sorts of craziness flows out: following this to its natural conclusion, the most we can salvage is arithmetic modulo $4$. To avoid the craziness, we could just throw out the parts that break, leaving a new type of mathematics (considerably smaller now) in which $3=7$. This brings us to the third question and answer.


Answer #3, to the question "Should we treat $3$ and $[3]$ as the same?":

The answer is no -- we should not do this -- unless we have a clear benefit from doing so. Even if there is such a benefit, it would have to be weighed against the cost of what we give up with this new mathematics.

The only benefits I personally see are (a) a certain clarity of understanding, and (b) a certain simplicity in writing software implementations. However, (a) is a very small benefit for the price of setting $3=7$, and in my view the "clarity" is actually categorical confusion. As for (b), it is similarly an illusion; all that is gained is the ability to produce crashing programs very easily.


The ring of $1 \times 1$ real matrices is indeed isomorphic to the real numbers, but is not "the same thing". That said, it's often convenient and rarely confusing to identify them. For example, you can think of the dot product of two vectors (which is a real number) as the matrix product of a $1 \times n$ matrix with an $n \times 1$ matrix, which is a $1 \times 1$ matrix.
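NumPy, for instance, keeps this distinction explicit: the matrix product of a $1 \times n$ array with an $n \times 1$ array is a $1 \times 1$ array, and unwrapping it to a plain number is a deliberate step.

```python
import numpy as np

u = np.array([[1.0, 2.0, 3.0]])       # 1 x 3: a row vector as a matrix
v = np.array([[4.0], [5.0], [6.0]])   # 3 x 1: a column vector as a matrix

p = u @ v                  # matrix product: a 1 x 1 matrix, not a scalar
print(p, p.shape)          # [[32.]] (1, 1)

s = p.item()               # explicitly open the "box" to get the number
assert s == np.dot(u.ravel(), v.ravel())  # equals the plain dot product
```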