The difference between a linear transformation and its matrix representation.
The matrix T represents the linear transformation $T$ subject to the choice of an ordered basis for each of the domain and the codomain. Change the bases and you induce a corresponding change in how the linear transformation is represented.
But the linear transformation $T:\mathbb{R}^n \to \mathbb{R}^m$ is the same regardless of what basis you have in mind.
Often we want to pick a basis (or bases) so that the matrix T will be especially nice, say a diagonal or block diagonal form, or some other sparse representation (lots of zero entries).
But again, changing basis will affect only the matrix representation, not the linear transformation itself.
The case where $n=m$ is a little special in that ordinarily we prefer to use the same basis for the domain ("input") and the codomain ("output"). It is not mandatory to do so, but using different bases imposes an extra burden on memory: one must keep track of whether $\mathbb{R}^n$ is being used for the input or the output, and think in terms of the appropriate coordinate system for each purpose.
To simplify matters we will give an example of representing a linear transformation $T:\mathbb{R}^2 \to \mathbb{R}^2$ that uses the same basis for both. First consider the standard ordered basis for $\mathbb{R}^2$:
$$ \mathscr{E}_2 = [(1,0),(0,1)] $$
Note that I'm using square brackets, rather than curly brackets, to emphasize that we chose an ordering of the basis vectors. Curly brackets would mean a set, and a set does not entail any special order of its elements.
Consider now a linear transformation $T:\mathbb{R}^2 \to \mathbb{R}^2$ defined by swapping the $x,y$ coordinates:
$$ T(x,y) = (y,x) $$
With respect to the standard basis we have a matrix $M$ such that multiplication by $M$ converts the standard coordinates of input vector $(x,y)$ as a column vector into standard coordinates of output vector $T(x,y) = (y,x)$ as a column vector:
$$ \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} y \\ x \end{bmatrix} $$
It's important to understand how the column vectors represent an input/output vector with respect to our chosen basis. Every vector can be represented uniquely as a linear combination of basis vectors, so in the case of the standard basis we have:
$$ (x,y) = x\cdot (1,0) + y\cdot (0,1) $$
But for the sake of making matrix multiplication work, we take those scalar multipliers and create a column vector with them as entries corresponding to the order of the respective basis vectors. In standard coordinates we represent $(x,y)$ as $\begin{bmatrix} x \\ y \end{bmatrix}$ and $(y,x)$ as $\begin{bmatrix} y \\ x \end{bmatrix}$. Thus the multiplication by $M$ shown above converts the standard coordinates for $(x,y)$ to the standard coordinates for $(y,x)$.
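If you want to check this mechanically, the multiplication above can be reproduced in a few lines of NumPy (a numerical sketch, not part of the argument itself; the input $(3,7)$ is just an arbitrary example):

```python
import numpy as np

# Matrix of the swap map T(x, y) = (y, x) in the standard basis.
M = np.array([[0, 1],
              [1, 0]])

v = np.array([3, 7])  # standard coordinates of the input vector (3, 7)
w = M @ v             # standard coordinates of the output T(3, 7) = (7, 3)
print(w)              # [7 3]
```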
Now let's try this with a "nonstandard" basis. Sometimes a better basis for representing a transformation may be as simple as just changing the order of the vectors in an existing basis, but we are going to do a more complicated example just to contrast steps that are easy from steps that are harder.
Choose nonstandard basis $\mathscr{B} = [(1,2),(3,4)]$. To verify this is a basis for $\mathbb{R}^2$, we only need to point out that the two vectors are linearly independent (neither is a multiple of the other, since there are only these two), and since the count (two vectors) matches the dimension (two), it must be a basis.
What we want is to find a $2\times 2$ matrix T that represents $T$ with respect to this nonstandard basis. That is, multiplying the column vector of nonstandard coordinates for an input $(x,y)$ should give us the column vector of nonstandard coordinates for its output $T(x,y)=(y,x)$.
The recipe I recommend is to make use of what we already know, the matrix $M$ which represents $T$ in standard coordinates. Specifically:
(1) Find a matrix $B$ whose multiplication converts columns of nonstandard coordinates into columns of standard coordinates.
(2) Find a matrix whose multiplication converts columns of standard coordinates into columns of nonstandard coordinates, namely the inverse $B^{-1}$ of the matrix found above.
Then T $= B^{-1} M B$, because multiplying by T carries out, in the proper sequence, the operations that take the nonstandard coordinates of an input to the nonstandard coordinates of its output.
Framing things in this way makes the exercise very tractable, because matrix $B$ turns out to be something we can write down "by inspection". Suppose that a vector $(x,y)$ has nonstandard coordinates $\begin{bmatrix} u \\ v \end{bmatrix}$. This simply means:
$$ (x,y) = u\cdot (1,2) + v\cdot (3,4) $$
If we write down what this means in standard coordinates:
$$ \begin{bmatrix} x \\ y \end{bmatrix} = u\cdot \begin{bmatrix} 1 \\ 2 \end{bmatrix} + v\cdot \begin{bmatrix} 3 \\ 4 \end{bmatrix} $$
it becomes evident that $B = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}$, with the columns of $B$ formed by standard coordinates of our nonstandard basis vectors:
$$ B \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} 1u+3v \\ 2u+4v \end{bmatrix} $$
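As a quick sanity check (a NumPy sketch, with arbitrary example coordinates $u=1$, $v=1$), multiplying $B$ by a column of nonstandard coordinates does recover standard coordinates:

```python
import numpy as np

# Columns of B are the standard coordinates of the nonstandard basis vectors.
B = np.array([[1, 3],
              [2, 4]])

uv = np.array([1, 1])   # nonstandard coordinates u = 1, v = 1
s = B @ uv              # standard coordinates of 1*(1,2) + 1*(3,4)
print(s)                # [4 6]
```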
To carry out our construction of T as $B^{-1} M B$, we will need to invert $B$, and this is certainly more effort than it took to write down $B$ "by inspection". I leave it to the Reader to verify that applying the usual trick for $2\times 2$ matrices gives:
$$ B^{-1} = \frac{-1}{2} \begin{bmatrix} 4 & -3 \\ -2 & 1 \end{bmatrix} $$
Finally we get (if I've not made a mistake) that T is:
$$ \frac{-1}{2} \begin{bmatrix} 4 & -3 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} = \begin{bmatrix} -\frac52 & -\frac72 \\ \frac32 & \frac52 \end{bmatrix} $$
As a quick check, consider a nonstandard basis vector $(1,2)$, whose nonstandard coordinates are $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$. Multiplying by T gives the first column of T, namely $\begin{bmatrix} -\frac52 \\ \frac32 \end{bmatrix}$, which should be the nonstandard coordinates of $T(1,2) = (2,1)$. Sure enough:
$$ (2,1) = -\frac52\cdot (1,2) + \frac32\cdot (3,4) $$
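The whole computation, including the quick check, can be verified numerically (a NumPy sketch under the same setup as above):

```python
import numpy as np

M = np.array([[0, 1], [1, 0]])   # T in standard coordinates
B = np.array([[1, 3], [2, 4]])   # nonstandard coords -> standard coords

T_B = np.linalg.inv(B) @ M @ B   # T in nonstandard coordinates
print(T_B)                       # [[-2.5 -3.5]
                                 #  [ 1.5  2.5]]

# Quick check: the first column holds the nonstandard coordinates of
# T(1,2) = (2,1); converting back to standard coordinates should give (2, 1).
print(B @ T_B[:, 0])             # [2. 1.]
```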
Every linear transformation $T:V^{(n)}\rightarrow W^{(m)}$ can be represented, with respect to a basis $\beta$ of $V$ and a basis $\gamma$ of $W$, as a matrix $A_T$ of size $m\times n$. For vectors $v\in V$ and $w\in W$, the coordinates in the respective bases are written as column vectors $[v]^{\beta}\in \mathbb{R}^n$ and $[w]^{\gamma}\in \mathbb{R}^m$, and the linear transformation $u = T(v)$ is represented as a matrix multiplication $[u]^{\gamma} = A_T[v]^{\beta}$.
Conversely, every matrix $A_{m\times n}$ can be viewed as a linear transformation $T_A:\mathbb{R}^n\rightarrow \mathbb{R}^m$; indeed, all the matrix properties are derived from properties of linear transformations! As such, the matrix $A$ in the equation $y = Ax$ is by itself a linear transformation without the need to specify bases. (But you can always pick any two bases $\beta,\gamma$ and derive its "matrix representation"; the simplest ones are the standard bases of $\mathbb{R}^n,\mathbb{R}^m$, in which case $[A]_{\beta}^{\gamma} = A$.)
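The recipe implicit in this description is that the $j$-th column of $A_T$ is the $\gamma$-coordinate vector of $T$ applied to the $j$-th vector of $\beta$. A small NumPy sketch (the helper name `matrix_of` is my own, and the bases are assumed to be given in standard coordinates):

```python
import numpy as np

def matrix_of(T, beta, gamma):
    """Matrix of T w.r.t. bases beta (domain) and gamma (codomain),
    where each basis is a list of vectors in standard coordinates."""
    G = np.column_stack(gamma)                       # gamma-coords -> standard coords
    cols = [np.linalg.solve(G, T(b)) for b in beta]  # gamma-coords of each T(b_j)
    return np.column_stack(cols)

# Reusing the swap map and basis [(1,2),(3,4)] from the worked example above.
T = lambda v: np.array([v[1], v[0]])
beta = gamma = [np.array([1, 2]), np.array([3, 4])]
print(matrix_of(T, beta, gamma))                     # [[-2.5 -3.5]
                                                     #  [ 1.5  2.5]]
```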