Understanding why the roots of a homogeneous difference equation must be eigenvalues
You have a mistake in your expansion of the characteristic polynomial; it should be $\lambda^3-2 \lambda^2-5 \lambda +6$.
To see the connection between this polynomial and the matrix $A_0$, it helps to reduce the difference equation to a first-order equation in several variables.
Let $x_k^1 = y_k, x_k^2 = y_{k+1}, x_k^3 = y_{k+2}$. Then the difference equation becomes $$x_{k+1}^1 = x_k^2$$ $$x_{k+1}^2 = x_k^3$$ $$x_{k+1}^3 = -6 x_k^1 + 5 x_k^2 + 2 x_k^3,$$ or, in matrix terms, with $x_k = (x_k^1,x_k^2,x_k^3)^T$: $$ x_{k+1} = A x_k = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & 5 & 2 \end{bmatrix} x_k.$$ Note the connection between the characteristic polynomial coefficients and the bottom row of the matrix (this is called the controllable canonical form in control circles). If you work out the eigenvalues of $A$ you will find that they are $\{-2,1,3\}$. In fact, the characteristic polynomial of $A$ is $\lambda^3-2 \lambda^2-5 \lambda +6$. Since these eigenvalues are distinct, $A$ is diagonalizable by some matrix $V$, and you have $A_0 = V^{-1} A V$, where $A_0$ has the form above.
They share the same characteristic polynomial because $\det (\lambda I - A_0) = \det (\lambda I - V^{-1} A V) = \det V^{-1} \det (\lambda I - A ) \det V = \det (\lambda I - A)$.
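If you want to check this numerically, here is a minimal sketch using NumPy (only the matrix $A$ comes from above; the rest is just a verification):

```python
import numpy as np

# Companion ("controllable canonical") matrix of the recurrence
# y_{k+3} = 2 y_{k+2} + 5 y_{k+1} - 6 y_k
A = np.array([[ 0, 1, 0],
              [ 0, 0, 1],
              [-6, 5, 2]], dtype=float)

# Its eigenvalues are the roots of lambda^3 - 2 lambda^2 - 5 lambda + 6
print(np.sort(np.linalg.eigvals(A)))  # approximately [-2.  1.  3.]

# np.poly returns the coefficients of the (monic) characteristic polynomial
print(np.poly(A))                     # approximately [ 1. -2. -5.  6.]
```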
Relevant links: http://en.wikipedia.org/wiki/Companion_matrix http://en.wikipedia.org/wiki/State_space_representation#Canonical_realizations
The roots are eigenvalues. What is the operator they are eigenvalues of? It is the shift operator.
Consider the vector space $V$ of sequences $a_0, a_1, a_2, ...$ (say of complex numbers). Define the left shift operator
$$S(a)_i = a_{i+1}.$$
In other words, $S$ takes as input a sequence $a_0, a_1, a_2, ...$ and returns the sequence $a_1, a_2, a_3, ...$. Now we need the following fundamental observations.
Observation 1: The linear homogeneous recurrence with characteristic polynomial $p$ is precisely described by the equation $p(S) a = 0$. In other words, the sequences satisfying such a linear recurrence are precisely the sequences in the kernel of $p(S)$.
Observation 1.5: The shift operator $S$ acts on the space of solutions to any equation of the form $p(S)a = 0$.
Observation 2: Over the complex numbers, we can factor $p(S) = \prod (S - \lambda_i)$. Thus if $(S - \lambda_i) a = 0$, or equivalently if $a$ is an eigenvector of $S$ with eigenvalue $\lambda_i$, then $p(S) a = 0$.
Observation 3: $(S - \lambda_i) a = 0$ if and only if $a_n = c \lambda_i^n$ for some constant $c$.
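To make these observations concrete, here is a small sketch that works with a finite chunk of a sequence (the helper names `shift` and `apply_poly_of_shift` are mine, not standard): it checks that $a_n = \lambda^n$ is an eigenvector of $S$ and that it is annihilated by $p(S)$, taking $p(\lambda) = \lambda^3-2 \lambda^2-5 \lambda +6$ from the other answer as an example.

```python
import numpy as np

def shift(a):
    """Left shift S: (a_0, a_1, a_2, ...) -> (a_1, a_2, a_3, ...)."""
    return a[1:]

def apply_poly_of_shift(coeffs, a):
    """Apply p(S) to (a finite chunk of) a sequence, where coeffs lists the
    coefficients of p from highest degree to lowest."""
    d = len(coeffs) - 1
    n = len(a) - d                       # indices where (p(S)a)_k can be evaluated
    # (p(S) a)_k = sum_i c_i * a_{k+i}, where c_i is the coefficient of S^i
    return sum(c * a[i:i + n] for i, c in enumerate(reversed(coeffs)))

p = [1, -2, -5, 6]                       # lambda^3 - 2 lambda^2 - 5 lambda + 6
lam = -2.0                               # a root of p
a = lam ** np.arange(12)                 # a_n = lambda^n

print(np.allclose(shift(a), lam * a[:-1]))        # S a = lambda a  -> True
print(np.allclose(apply_poly_of_shift(p, a), 0))  # p(S) a = 0      -> True
```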
In some sense $S$ resembles the differentiation operator; indeed, $S$ is sent to $\frac{d}{dx}$ if one maps $V$ to the space of formal power series via
$$(a_0, a_1, a_2, ...) \mapsto \sum_{n=0}^{\infty} \frac{a_n}{n!} x^n.$$
Thus everything above applies (at least formally) to linear homogeneous ODEs with constant coefficients and their characteristic polynomials as well. The corresponding eigenvectors are the functions $e^{\lambda_i x}$.
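As a sanity check of this analogy, here is a short SymPy sketch (the specific ODE $y''' - 2y'' - 5y' + 6y = 0$ is just the differential analogue of the recurrence discussed above): each root $\lambda$ of the characteristic polynomial gives a solution $e^{\lambda x}$.

```python
import sympy as sp

x, lam = sp.symbols('x lam')
y = sp.exp(lam * x)

# Left-hand side of y''' - 2 y'' - 5 y' + 6 y = 0 applied to y = e^(lam x)
lhs = sp.diff(y, x, 3) - 2*sp.diff(y, x, 2) - 5*sp.diff(y, x) + 6*y

# Each root of lambda^3 - 2 lambda^2 - 5 lambda + 6 makes it vanish identically
for root in (1, -2, 3):
    print(root, sp.simplify(lhs.subs(lam, root)))   # prints 1 0, -2 0, 3 0
```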