Solution 1:

0) Let $V$ be a (finite-dimensional) real vector space. A $(1, 1)$-tensor over $V$ is any of the following equivalent objects:

  • A linear transformation $V \to V$,
  • An element of $V^{\ast} \otimes V$,
  • A linear map $V \otimes V^{\ast} \to \mathbb{R}$.

The isomorphisms between these three pictures come from the definition of the dual and the universal property of the tensor product. A linear transformation is an object such that, when you feed it an element of $V$, it spits out an element of $V$. So if you feed it an element of $V$ and an element of $V^{\ast}$, then by the dual pairing (tensor contraction) you get an element of $\mathbb{R}$. Explicitly, if $f : V \to V$ is a linear transformation, then

$$(v, w^{\ast}) \longmapsto \langle f(v), w^{\ast} \rangle$$

is a bilinear map $V \times V^{\ast} \to \mathbb{R}$ (where $\langle \cdot, \cdot \rangle$ denotes the dual pairing), and every bilinear map $V \times V^{\ast} \to \mathbb{R}$ arises this way from a unique $f$. Then one uses the universal property of the tensor product. That gives you the identification with the third picture. The identification with the second picture comes from the fact that dual distributes over tensor product (which again comes down to tensor contraction) and the fact that $V^{\ast \ast} \simeq V$. Alternatively, again by tensor contraction, there is a natural bilinear map $V \times (V^{\ast} \otimes V) \to V$ which identifies an element of $V^{\ast} \otimes V$ with a linear transformation $V \to V$.
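
For concreteness, here is that last bilinear map written out on simple tensors (it extends linearly to all of $V^{\ast} \otimes V$; this is one common way to normalize the contraction):

$$V \times (V^{\ast} \otimes V) \longrightarrow V, \qquad (v,\; w^{\ast} \otimes u) \longmapsto \langle v, w^{\ast} \rangle\, u = w^{\ast}(v)\, u.$$

In particular, the simple tensor $w^{\ast} \otimes u$ acts as the rank-one linear transformation $v \mapsto w^{\ast}(v)\, u$.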

You get the "explicit" version of all these pictures by taking a basis $e_1, \ldots, e_n$ of $V$ and the corresponding dual basis $e_1^{\ast}, \ldots, e_n^{\ast}$ of $V^{\ast}$. These define bases of every space of $(m, n)$-tensors (just take tensor products). In particular the space of $(1, 1)$-tensors has basis $e_i^{\ast} \otimes e_j, 1 \le i, j \le n$. Writing a $(1, 1)$-tensor in this basis corresponds exactly to writing a linear transformation as a square matrix, as in the example below.
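
For instance (a small illustrative case with $n = 2$, using the convention above that the tensor acts by contracting its $V^{\ast}$ slot against the input vector): if $f : V \to V$ has matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, so that $f(e_1) = a e_1 + c e_2$ and $f(e_2) = b e_1 + d e_2$, then the corresponding $(1, 1)$-tensor is

$$a\, e_1^{\ast} \otimes e_1 + c\, e_1^{\ast} \otimes e_2 + b\, e_2^{\ast} \otimes e_1 + d\, e_2^{\ast} \otimes e_2,$$

i.e. the coefficient of $e_i^{\ast} \otimes e_j$ is the $j$-th component of $f(e_i)$.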

1) It's not. If by $\text{Alt}^p(V)$ you mean the space of alternating $p$-linear maps $V^p \to \mathbb{R}$, this is naturally isomorphic to $\Lambda^p (V^{\ast})$ where $V^{\ast}$ is the dual space to $V$ (the space of linear maps $V \to \mathbb{R}$). You can't identify $V$ with its dual without, say, an inner product. To see the isomorphism, note that $(V^{\ast})^{\otimes p}$ can naturally be identified with the space of all $p$-linear maps $V^p \to \mathbb{R}$, more or less by definition of the dual space and the tensor product, and taking the subspace of alternating maps in one picture corresponds to passing to the quotient $\Lambda^p(V^{\ast})$ in the other. You can make this really explicit by writing down a basis for both spaces if you want.
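
To illustrate with a basis (taking $n = \dim V = 3$ and $p = 2$, and using the normalization without a $1/p!$ factor, which is one common convention): $\Lambda^2(V^{\ast})$ has basis $e_1^{\ast} \wedge e_2^{\ast},\; e_1^{\ast} \wedge e_3^{\ast},\; e_2^{\ast} \wedge e_3^{\ast}$, and the basis element $e_i^{\ast} \wedge e_j^{\ast}$ corresponds to the alternating bilinear map

$$(v, w) \longmapsto e_i^{\ast}(v)\, e_j^{\ast}(w) - e_j^{\ast}(v)\, e_i^{\ast}(w) = \det \begin{pmatrix} e_i^{\ast}(v) & e_i^{\ast}(w) \\ e_j^{\ast}(v) & e_j^{\ast}(w) \end{pmatrix}.$$

Both spaces have dimension $\binom{3}{2} = 3$, as they should.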

2) This all comes down to tensor contraction. At a point, a $p$-form is the same thing as an alternating $p$-linear map $V^p \to \mathbb{R}$ where $V$ is the tangent space at that point, and by the universal property of the exterior power this is the same thing as a linear map $\Lambda^p V \to \mathbb{R}$. And a $p$-vector is the same thing as an element of $\Lambda^p V$. So the natural dual pairing (tensor contraction) comes into play.
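
Concretely, the resulting pairing between a $p$-form and a $p$-vector comes out as a determinant (with the same normalization convention as above; some authors insert a factor of $1/p!$):

$$\big\langle \alpha^1 \wedge \cdots \wedge \alpha^p,\; v_1 \wedge \cdots \wedge v_p \big\rangle = \det \big( \alpha^i(v_j) \big)_{1 \le i, j \le p},$$

where the $\alpha^i$ are covectors, the $v_j$ are tangent vectors, and both sides extend linearly from these simple wedges.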

Solution 2:

This interpretation makes judicious use of duality. Recall that $Hom_R(A\otimes B, C)\cong Hom_R(A,Hom_R(B,C))$ (the proof of this fact is quite simple; please give it a try). For finite-dimensional vector spaces we have a canonical isomorphism $V\cong V^{**}=Hom_R(Hom_R(V,R),R)$. Then giving a map $A\to B$ is the same as giving a map $A\to B^{**}$ (compose with the canonical isomorphism $B \cong B^{**}$), and by the earlier note about the hom-tensor adjunction, this is the same as giving a map $A\otimes B^*\to R$.
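
For reference, the adjunction isomorphism can be written down explicitly; in one direction it sends

$$Hom_R(A \otimes B, C) \longrightarrow Hom_R(A, Hom_R(B, C)), \qquad \varphi \longmapsto \big( a \mapsto (b \mapsto \varphi(a \otimes b)) \big),$$

and the inverse sends $\psi$ to the map $a \otimes b \mapsto \psi(a)(b)$, extended linearly. Applying it with $B$ replaced by $B^*$ and $C = R$, a map $A \to B^{**} = Hom_R(B^*, R)$ corresponds exactly to a map $A \otimes B^* \to R$, which is the translation used above.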

As for your two questions, they fall out of these elementary observations. I urge you to try to work them out.