Uniqueness of a linear map on a basis of a vector space

From Linear Algebra Done Right, 3rd edition, by Sheldon Axler:

Suppose $v_1, \ldots, v_n$ is a basis of $V$, and $w_1, \ldots, w_n \in W$. Then there exists a unique linear map $T: V \to W$ such that

$$Tv_j = w_j$$

for each $j = 1, \ldots, n.$

The theorem states two results:

  1. There exists a linear map that sends a given basis $v_1, \ldots, v_n$ of $V$ to arbitrary vectors $w_1, \ldots, w_n \in W$, and

  2. Such a linear map is unique.

To prove uniqueness, suppose that $T \in \mathcal{L}(V, W)$, where $\mathcal{L}(V, W)$ is the set of all linear transformations from $V$ to $W$, and that $Tv_j = w_j$ for $j = 1, \ldots, n$. Using the homogeneity and additivity of $T$, we get:

$$T(c_1 v_1 + \ldots + c_n v_n) = c_1 T(v_1) + \ldots + c_n T(v_n) = c_1 w_1 + \ldots + c_n w_n \tag{1}$$

Thus $T$ is uniquely determined on $\operatorname{span}(v_1, \ldots, v_n)$. Furthermore, since $v_1, \ldots, v_n$ is a basis of $V$, $T$ is uniquely determined on $V$.

Now to my question: I have spent quite some time attempting to internalise how $T$'s uniqueness follows from equation $(1)$. The only reasoning I can fathom is that for any other linear map $F$ with $Fv_j = w_j$ for $j = 1, \ldots, n$, we have $T(v) = F(v)$ for all $v \in V$, which implies the uniqueness of $T$. Does the uniqueness of a linear map simply mean that two linear maps which send a basis to the same arbitrary vectors must map every element of the vector space $V$ to the same element of the vector space $W$, and are therefore essentially the same linear map?

P.S. I am quite aware that there are several questions on the same topic; yet I have found none of the answers provided to be quite sufficient in addressing my issue.


This theorem simply tells you that if you know exactly what a linear map does to your basis, then you know exactly what it does to every element of your vector space. This is quite nice, since a basis tends to be considerably smaller than the overall space. For example, if we consider a linear transformation $T: V \rightarrow W$ where $V$ and $W$ are both $\mathbb{R}^3$ over the field $\mathbb{R}$, then we need only know what $T$ does to each element of a basis. If, continuing with this example, our basis of the domain is the standard basis $S = \{(1,0,0), (0,1,0), (0,0,1)\}$ and we know that $$T(1,0,0) = (2,1,0), \quad T(0,1,0) = (3,0,-1), \quad T(0,0,1) = (-7, 1, 3),$$ then we can determine exactly where $T$ maps an arbitrary vector, say $(a,b,c)$.

To be exact, we know that $$(a,b,c) = a(1,0,0)+b(0,1,0)+c(0,0,1)$$ and so the assumed linearity of $T$ gives us $$T(a,b,c) = aT(1,0,0) + bT(0,1,0) + cT(0,0,1)$$ $$= a(2,1,0) + b(3,0,-1) + c(-7,1,3)$$ $$= (2a+3b-7c, a+c, -b+3c).$$
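To make this concrete computationally, here is a minimal numpy sketch (the names `A` and `T` below are just illustrative): the matrix representing $T$ with respect to the standard basis has the images $T(1,0,0)$, $T(0,1,0)$, $T(0,0,1)$ as its columns, so those three vectors alone determine $T$ everywhere.

```python
import numpy as np

# Images of the standard basis vectors under T, taken from the example above.
T_e1 = np.array([2, 1, 0])
T_e2 = np.array([3, 0, -1])
T_e3 = np.array([-7, 1, 3])

# The matrix of T (w.r.t. the standard basis) has these images as its columns.
A = np.column_stack([T_e1, T_e2, T_e3])

def T(v):
    """Apply T to an arbitrary vector v = (a, b, c)."""
    return A @ v

a, b, c = 5.0, -2.0, 1.0
v = np.array([a, b, c])
# Agrees with the closed form (2a + 3b - 7c, a + c, -b + 3c) derived above.
assert np.allclose(T(v), [2*a + 3*b - 7*c, a + c, -b + 3*c])
```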

This idea generalizes exactly to the argument in the proof that you have. The uniqueness follows from the fact that if two linear transformations agree on the basis elements, then they must agree on every vector (simply write an arbitrary vector as a linear combination of the basis elements and use the linearity of your maps). So the answer to your first question is yes!

The answer to your second question is not necessarily yes. If you know that a basis is mapped to some set by two different linear transformations then you haven’t ensured that the two linear transformations have mapped the basis elements to the same places. For example, let’s consider linear transformations $T, U: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ where the domain and range are assumed to be vector spaces over $\mathbb{R}$ and $$T(1,0) = (1,0), T(0,1) = (0,1), U(1,0) = (0,1), U(0,1) = (1,0).$$

Then extending each of $T$ and $U$ to an arbitrary vector (I leave the details of this small part to you) yields that given any $ (a,b) \in \mathbb{R}^2$, we have $T(a,b) = (a,b)$ and $U(a,b) = (b,a)$ and hence $T$ and $U$ are not equal as functions.

Each of $T$ and $U$ mapped the basis $S= \{(1,0),(0,1)\}$ to itself, but they didn't agree on where they sent the individual elements of $S$; that is the distinction I am trying to get at here. You simply need to know where your linear transformation sends each individual vector of your basis in order to know where that transformation sends ANY vector of its domain.
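For what it's worth, here is a small numpy sketch of that distinction (a hypothetical check, not part of the original answer): $T$ and $U$ send the basis onto the same *set*, yet differ as functions.

```python
import numpy as np

# T fixes each standard basis vector; U swaps their images.
T = np.array([[1, 0],
              [0, 1]])   # T(a, b) = (a, b)
U = np.array([[0, 1],
              [1, 0]])   # U(a, b) = (b, a)

# Both maps send the basis {(1,0), (0,1)} onto the same *set* {(1,0), (0,1)}...
basis = [np.array([1, 0]), np.array([0, 1])]
image_T = {tuple(T @ e) for e in basis}
image_U = {tuple(U @ e) for e in basis}
assert image_T == image_U

# ...but they disagree on individual basis vectors, so they differ as functions.
v = np.array([3, 7])
print(T @ v, U @ v)  # [3 7] vs. [7 3]
```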


Let $T'$ be any other linear mapping with $T'(v_j)=w_j\,,j=1,\dots,n$.

Let $x$ be an arbitrary vector in $V$. Then $x=x_1v_1+\dots+x_nv_n$, for unique scalars $x_1,\dots,x_n$, since $v_1,\dots,v_n$ is a basis.

Now
$$\begin{aligned}
T'(x) &= T'(x_1v_1+\dots+x_nv_n) \\
&= x_1T'(v_1)+\dots+x_nT'(v_n) \\
&= x_1w_1+\dots+x_nw_n \\
&= x_1T(v_1)+\dots+x_nT(v_n) \\
&= T(x_1v_1+\dots+x_nv_n) \\
&= T(x).
\end{aligned}$$

Thus $T'=T$.
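A numerical sanity check of this argument (a sketch with made-up data; the random basis and the two constructions below are purely illustrative): any two linear maps pinned to the same values on a basis coincide on arbitrary vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4
# A random basis v_1, ..., v_n as columns of V (invertible with probability 1).
V = rng.standard_normal((n, n))
# Arbitrary target vectors w_1, ..., w_n as columns of W.
W = rng.standard_normal((n, n))

# Any linear T with T v_j = w_j satisfies T V = W, i.e. T = W V^{-1}.
# Two different-looking constructions of such a map...
T1 = W @ np.linalg.inv(V)
T2 = np.linalg.solve(V.T, W.T).T  # solves V^T T^T = W^T for T^T

# ...coincide on every vector, because both are pinned down on the basis.
x = rng.standard_normal(n)
assert np.allclose(T1 @ x, T2 @ x)
```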


It follows from the equality $(1)$ because this equality tells us that, for each $v\in V$, the value of $T(v)$ will be equal to $c_1w_1+\cdots+c_nw_n$, where the numbers $c_1,\ldots,c_n$ are the coefficients of the expression $v=c_1v_1+\cdots+c_nv_n$. So, since the $c_k$'s are unique, so is $T(v)$.
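As a concrete sketch of this (the non-standard basis and the names below are invented for illustration): the unique coefficients $c_k$ are found by solving a linear system, and equation $(1)$ then yields $T(v)$ with no further information about $T$.

```python
import numpy as np

# A (non-standard) basis of R^2, and prescribed images w_1, w_2.
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
w1, w2 = np.array([2.0, 0.0]), np.array([0.0, 3.0])

B = np.column_stack([v1, v2])  # basis vectors as columns

def T(v):
    # Unique coefficients c with v = c1*v1 + c2*v2, since v1, v2 is a basis.
    c = np.linalg.solve(B, v)
    # Equation (1): T(v) = c1*w1 + c2*w2 -- nothing else about T is needed.
    return c[0] * w1 + c[1] * w2

print(T(np.array([2.0, 0.0])))  # v = v1 + v2, so T(v) = w1 + w2 = [2, 3]
```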

Concerning your final question: no, they are not essentially the same map. They are exactly the same map.


In general, functions $f,g$ are equal if for every $x$ in the domain, $f(x) = g(x)$. Linearity has the special property that once the images of the basis vectors are specified, the image of any vector is then specified by equation $(1)$. Hence, if two linear functions map the same basis vectors in the same way, they are equal.

Note that there is no separate definition of equality for linear maps. This theorem simply gives a sufficient condition, in the case of linear maps, for the equality of functions defined above.