Finding the rotation matrix in n-dimensions
One way to do this is to find two orthonormal vectors in the plane spanned by your two vectors, and then extend them to an orthonormal basis of $\mathbb R^n$. With respect to this basis, take the rotation by angle $\theta$ in the plane spanned by the first two basis vectors and the identity on the span of the rest. Use Gram-Schmidt to construct the orthonormal basis, as in the sketch below.
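For instance, here is a minimal Octave/MATLAB sketch of this approach (my own illustration, assuming column vectors x and y of equal norm; it completes the basis with a QR factorization rather than hand-rolled Gram-Schmidt):

u = x/norm(x);                              % first basis vector, along x
v = y - (u'*y)*u;  v = v/norm(v);           % second, in the plane of x and y
[Q, ~] = qr([u v]);                         % columns 3..n complete u,v to an orthonormal basis
B = [u v Q(:,3:end)];                       % orthonormal basis adapted to the plane
theta = acos((x'*y)/(norm(x)*norm(y)));     % angle between x and y
Rtheta = blkdiag([cos(theta) -sin(theta); sin(theta) cos(theta)], eye(length(x)-2));
R = B*Rtheta*B';                            % rotate by theta in the plane, identity elsewhere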
As you said in a previous comment, you cannot rotate around an axis except in 3D. Rather you need to rotate about an $(n-2)$-dimensional subspace.
So suppose you want to rotate $x$ to $y$, and you happen to know they have the same norm. Let $u = x/|x|$ and $v = (y-(u\cdot y)u)/|y-(u\cdot y)u|$. Then $P=uu^T + vv^T$ is the projection onto the plane spanned by $x$ and $y$, and $Q=I-uu^T-vv^T$ is the projection onto its $(n-2)$-dimensional orthogonal complement. So the "rotation" part only has to take place on the range of $P$. That is, $z \mapsto (z\cdot u, z\cdot v)$ is an isometric isomorphism from the range of $P$ to $\mathbb R^2$. Do the rotation on $\mathbb R^2$, then map the result back to $\mathbb R^n$ by $[a,b]^T \mapsto au + bv$.
So the whole rotation is $$ I - uu^T - vv^T + [u\ v]\,R_\theta\,[u\ v]^T $$ where $$ R_\theta = \begin{bmatrix}\cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}, $$ with $\cos(\theta) = x\cdot y/(|x||y|)$ and $\sin(\theta) = \sqrt{1-\cos^2(\theta)} \ge 0$ (the angle lies in $[0,\pi]$ by the construction of $v$).
(By the way, if a reflection matrix is OK, then consider Householder transformations instead.)
Here is some MATLAB code (tested in Octave, so it may contain minor syntax errors in MATLAB).
x = [2,4,5,3,6]';
y = [6,2,0,1,7]';                  % same norm as x
u = x/norm(x);                     % unit vector along x
v = y - (u'*y)*u;                  % component of y orthogonal to u
v = v/norm(v);
cost = x'*y/norm(x)/norm(y);       % cos(theta)
sint = sqrt(1-cost^2);             % sin(theta) >= 0
R = eye(length(x)) - u*u' - v*v' + [u v]*[cost -sint; sint cost]*[u v]';
fprintf('testing\n');
should_be_identity = R*R'
should_be_y = R*x
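For comparison, continuing with the same x and y, here is a minimal sketch of the Householder alternative mentioned above (my own illustration; note that H has determinant $-1$, so it is a reflection rather than a rotation):

w = (x - y)/norm(x - y);               % unit vector along x - y
H = eye(length(x)) - 2*(w*w');         % Householder reflection
should_be_y_too = H*x                  % maps x to y when |x| = |y|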
Are you familiar with quaternions? Clifford algebra offers a good $N$-dimensional generalization of quaternions.
Let me take your example. You have two vectors $x$ and $y$:
$$\begin{align*} x &= 2e_1 + 4e_2 + 5e_3 + 3e_4 + 6 e_5 \\ y &= 6 e_1 + 2 e_2 + 0 e_3 + 1e_4 + 7 e_5\end{align*}$$
With quaternions, one would compute a cross product to find the rotation axis. Using Clifford algebra, one directly computes the plane of rotation via the wedge product of the two vectors, as one would do in exterior algebra. (A general term for such geometric primitives is blade. Vectors are just 1-dimensional blades; here, we will compute a 2-dimensional blade.)
The wedge product is
$$\begin{align*}x \wedge y = (4 - 24) e_{12} &+ (0 -30) e_{13} + (2 - 18) e_{14} + (14 - 36) e_{15} \\ +(0 - 10) e_{23} &+ (4 - 6) e_{24} + (28 - 12) e_{25} \\ + (5 - 0) e_{34} &+ (35 - 0) e_{35} \\ + (21 - 6) e_{45} & \end{align*}$$
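If you want to check these numbers, here is a small Octave sketch (my own illustration) that just evaluates the coefficients $x_i y_j - x_j y_i$ for $i < j$:

x = [2 4 5 3 6];  y = [6 2 0 1 7];
n = length(x);  B = [];
for i = 1:n
  for j = i+1:n
    B(end+1) = x(i)*y(j) - x(j)*y(i);        % coefficient of e_ij in x ^ y
    fprintf('e_%d%d: %d\n', i, j, B(end));
  end
end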
You should appreciate how direct this is compared to finding the null space of a matrix: when you find the null space, you are finding the orthogonal complement of this blade, whereas the wedge product lets us proceed directly.
Anyway, to proceed as one would with quaternions, we must normalize the blade to a unit blade. This is just like the vector case: sum the squares of the coefficients and divide by the square root of that sum. I will not carry out this arithmetic here.
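Continuing the sketch above (assuming B holds the ten coefficients in the order listed), the normalization is just:

Bmag = sqrt(sum(B.^2));        % magnitude of the blade
Bhat = B/Bmag;                 % coefficients of the unit blade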
Once one has the normalized blade $\hat B$, one can construct the versor $R = \exp(-\hat B \theta/2)$. The minus sign is conventional in Clifford algebra and corresponds to counterclockwise rotation; this differs from the usual quaternion convention.
The versor $R$ is then used to construct a rotation map $\underline R$ that takes in any vector $a$ and produces its rotated counterpart:
$$\underline R(a) = Ra R^{\dagger} = e^{-\hat B \theta/2} a e^{\hat B \theta/2}$$
The quantity $\hat B$ obeys $\hat B^2 = -1$, and so this generates the trigonometric functions through power series, just as in quaternions.
The products here are the geometric product of Clifford algebra. It can be computed in terms of basis vectors like so: for any $i, j = 1, 2, \ldots, N$, set $e_i e_i = +1$, and when $i \neq j$, $e_i e_j = -e_j e_i$.
For instance, say that $R$ takes the form $\alpha - \beta e_{12}$ and $a = a^1 e_1 + a^2 e_2$. The computation would then look like
$$\begin{align*}\underline R(a) &= (\alpha - \beta e_1 e_2)(a^1 e_1 + a^2 e_2)(\alpha + \beta e_1 e_2) \\ &= (\alpha a - \beta a^1 e_1 e_2 e_1 - \beta a^2 e_1 e_2 e_2)(\alpha + \beta e_1 e_2) \\ &= (\alpha a + \beta a^1 e_2 - \beta a^2 e_1) (\alpha + \beta e_1 e_2) \\ &= \ldots \end{align*}$$
(I won't finish this calculation; I just hoped to illustrate how geometric products are computed.)
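To make these product rules concrete, here is a small Octave sketch (entirely my own illustration, not part of the answer) of the geometric product in the 2-D algebra. A multivector is stored as [s, a1, a2, b], meaning $s + a^1 e_1 + a^2 e_2 + b\,e_1 e_2$; save the function as gp.m (or put it at the end of a script in recent MATLAB/Octave):

function c = gp(p, q)
  % geometric product in Cl(2,0): e1*e1 = e2*e2 = 1, e1*e2 = -e2*e1
  c = zeros(1,4);
  c(1) = p(1)*q(1) + p(2)*q(2) + p(3)*q(3) - p(4)*q(4);   % scalar part
  c(2) = p(1)*q(2) + p(2)*q(1) - p(3)*q(4) + p(4)*q(3);   % e1 part
  c(3) = p(1)*q(3) + p(3)*q(1) + p(2)*q(4) - p(4)*q(2);   % e2 part
  c(4) = p(1)*q(4) + p(4)*q(1) + p(2)*q(3) - p(3)*q(2);   % e12 part
end

theta = pi/3;
R  = [cos(theta/2), 0, 0, -sin(theta/2)];   % versor exp(-e12*theta/2)
Rd = [cos(theta/2), 0, 0,  sin(theta/2)];   % its reverse R^dagger
a  = [0, 1, 0, 0];                          % the vector e1
rotated = gp(gp(R, a), Rd)                  % gives [0, cos(theta), sin(theta), 0]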
This is the most direct rotation that connects the two directions. There could be other, subsequent rotations that rotate other vectors orthogonal to these while leaving these two intact, so there are several rotation maps that map these vectors as you specified.
Once you have the rotation map $\underline R$, you can extract the matrix components by plugging in basis vectors and taking dot products: $\underline R_{12} = e_1 \cdot \underline R(e_2)$, for instance. But if you go to all the trouble of writing something that can compute this rotation in terms of Clifford algebra, one wonders why you would bother with a matrix at all. There is simply no need.
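Continuing the 2-D sketch above (purely illustrative), extracting the matrix entries amounts to rotating each basis vector and reading off components:

e1 = [0 1 0 0];  e2 = [0 0 1 0];              % basis vectors as multivectors
img1 = gp(gp(R, e1), Rd);  img2 = gp(gp(R, e2), Rd);
M = zeros(2);
M(:,1) = img1(2:3)';  M(:,2) = img2(2:3)';    % e_i . Runder(e_j) = the e_i coefficient
M                                             % equals [cos(theta) -sin(theta); sin(theta) cos(theta)]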
A special feature of this Clifford algebra formulation is that, through the geometric product, it can be used to rotate higher-dimensional blades, not just vectors. Indeed, for a general blade $K$, the formula is just
$$\underline R(K) = R K R^\dagger$$
This is a special property of rotations. If you have a direct expression for the blade $K$, you need not decompose it into vectors and rotate those vectors individually to recover the image of $K$. This is handy.
To find the rotation matrix:
Find the plane $P$ containing the two vectors and the origin (always possible).
Compute the angle between the two vectors. Call this angle $\alpha.$
Rotate by $\alpha$ around the axis which is the normal vector to $P.$
EDIT: For more detail see: http://mathforum.org/library/drmath/view/74532.html
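In three dimensions these steps can be carried out directly with a cross product; here is a small Octave sketch (my own illustration) using Rodrigues' rotation formula. In higher dimensions there is no single normal vector to $P$, and the constructions above apply instead.

x = [1 0 0]';  y = [0 1 0]';                    % example vectors of equal norm
alpha = acos((x'*y)/(norm(x)*norm(y)));         % angle between them
k = cross(x, y);  k = k/norm(k);                % unit normal to the plane P
K = [0 -k(3) k(2); k(3) 0 -k(1); -k(2) k(1) 0]; % cross-product matrix of k
R = eye(3) + sin(alpha)*K + (1-cos(alpha))*K^2; % Rodrigues' rotation formula
should_be_y = R*x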