Generalizing Determinants Through Multilinear Algebra and Immanants
Solution 1:
Here is a neat interpretation of the immanant, which (to my dismay) already appeared in the paper referenced in the first comment. Regardless, I will plough ahead and show you the result from my perspective (even if this answer is getting a little long!). The answer is in the affirmative: the Schur functors are related to the immanant, and indeed they "turn into it via the trace", though not in the way I was expecting.
Firstly, given an endomorphism $T: V \to V$, I will call $S_\lambda T: S_\lambda V \to S_\lambda V$ the result of applying the Schur functor with partition $\lambda$: what I call $S_\lambda T$ is what you called $\mathrm{det}_\lambda(T)$. This map does show up in the literature because of the functoriality of the Schur functor: it produces new spaces, as well as new maps between those spaces. Hopefully the rest of this answer will show you a non-trivial application of this functor, namely that you can find the immanant $\mathrm{Imm}_\lambda$ using $S_\lambda$.
Next, given a linear transformation $A: V \to V$, the immanant $\mathrm{Imm}_\lambda(A)$ makes no sense, because it is not invariant under a change of basis. You can see this by considering the permanent of a $2 \times 2$ matrix full of $1$'s, and of the (orthogonally similar) matrix $\mathrm{diag}(2, 0)$. So whenever we talk about the immanant, we need to either talk about a matrix directly, or a linear map along with a specified basis.
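To make the basis-dependence concrete: the permanent is easy to compute by brute force, and the two matrices above already give different answers. (A quick illustrative script of my own, not from the referenced paper.)

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Sum over all permutations of products of entries (the determinant without signs)."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

ones = [[1, 1], [1, 1]]   # permanent = 1*1 + 1*1 = 2
diag = [[2, 0], [0, 0]]   # orthogonally similar to `ones`, permanent = 0
print(permanent(ones), permanent(diag))  # → 2 0
```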
Putting this technicality aside for the moment (we will stick to the standard basis only), an interesting observation is that we can find the permanent sitting in the symmetric power! $\mathrm{Sym}^2 \mathbb{C}^2$ has basis $(e_1^2, e_1 e_2, e_2^2)$, and we can compute the action of a $2 \times 2$ matrix $$A = \begin{pmatrix}a & b \\ c & d\end{pmatrix}$$ on this basis: $$ \begin{aligned} (\mathrm{Sym}^2 A) e_1 e_2 = (A e_1) (A e_2) &= (a e_1 + c e_2)(b e_1 + d e_2) \\ &= ab e_1^2 + (ad + bc)e_1 e_2 + cd e_2^2 \end{aligned}$$ and so it seems that the permanent is the $(e_1 e_2, e_1 e_2)$ matrix entry of the transformation $\mathrm{Sym}^2 A$ (in this specific basis!). It should be clear that this extends to $n \times n$ matrices: the permanent of an $n \times n$ matrix is the $(e_1 \cdots e_n, e_1 \cdots e_n)$ entry of $\mathrm{Sym}^n A$. If you do a similar computation for the Schur functor for the partition $\lambda = (2, 1)$ on a $3 \times 3$ matrix, you will find that the immanant $\operatorname{Imm}_\lambda(A)$ is the sum of the two diagonal matrix entries corresponding to the standard tableaux (recall that $S_\lambda \mathbb{C}^n$ has basis the semistandard tableaux on letters $[1, \ldots, n]$).
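If you want to check the $2 \times 2$ computation mechanically, here is a sketch (my own code, with a made-up helper name) that writes out the full matrix of $\mathrm{Sym}^2 A$ in the basis $(e_1^2, e_1 e_2, e_2^2)$; the middle diagonal entry is exactly the permanent $ad + bc$:

```python
def sym2_matrix(A):
    """Matrix of Sym^2 A on C^2 in the basis (e1^2, e1 e2, e2^2).

    Columns are the images of the basis vectors, obtained by expanding
    (A e_i)(A e_j) as a quadratic polynomial in e1, e2.
    """
    (a, b), (c, d) = A
    return [
        [a * a,     a * b,         b * b],      # coefficients of e1^2
        [2 * a * c, a * d + b * c, 2 * b * d],  # coefficients of e1 e2
        [c * c,     c * d,         d * d],      # coefficients of e2^2
    ]

A = [[1, 2], [3, 4]]
print(sym2_matrix(A)[1][1])  # (e1 e2, e1 e2) entry: ad + bc = 1*4 + 2*3 = 10, the permanent
```

One can also check the functoriality $\mathrm{Sym}^2(AB) = (\mathrm{Sym}^2 A)(\mathrm{Sym}^2 B)$ numerically by multiplying these matrices.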
This leads us to the following theorem, which (more or less) appears as Theorem 3 in the paper referenced in the first comment. Let $A \in \mathrm{Mat}_n \mathbb{C}$ be an $n \times n$ matrix, and $\lambda$ a partition of $n$. $\mathrm{GL}_n(\mathbb{C})$ acts naturally on $\mathbb{C}^n$, and so we may apply the Schur functor to get $S_\lambda A: S_\lambda \mathbb{C}^n \to S_\lambda \mathbb{C}^n$. Taking diagonal matrices in $\mathrm{GL}_n(\mathbb{C})$ as the standard torus, $S_\lambda \mathbb{C}^n$ decomposes as a sum of weight spaces. Denote by $W \subseteq S_\lambda \mathbb{C}^n$ the $(1, 1, \ldots, 1)$-weight space, so that $\mathrm{diag}(x_1, \ldots, x_n) w = x_1 \cdots x_n w$ for any $\mathrm{diag}(x_1, \ldots, x_n) \in \mathrm{GL}_n(\mathbb{C})$ and $w \in W$. Denote by $S_\lambda A |_W$ the part of $S_\lambda A$ restricted to $W$ (the block on the diagonal in the block matrix decomposition). Then $\mathrm{Imm}_\lambda(A) = \operatorname{trace} S_\lambda A|_W$.
You can quickly check that this agrees in the determinant and permanent cases, by the above observation. Also note that we seem to have replaced a very coordinate-dependent definition with a coordinate-free definition: we still have the coordinates hanging around, but they've been swallowed up into how we chose the torus inside $\mathrm{GL}_n$, or equally well, how we put a $\mathrm{GL}_n$ representation on $\mathbb{C}^n$. Also note that $S_\lambda A |_W$ is nonstandard notation, since $W$ is not an invariant subspace of $S_\lambda A$: you need to artificially restrict its output using the weight space decomposition of $S_\lambda \mathbb{C}^n$.
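To see this agreement concretely, here is a small script of my own (the character values of $S_3$ are the standard ones: $\chi_{(2,1)}$ takes the value $2$ on the identity, $0$ on transpositions, and $-1$ on $3$-cycles) that evaluates the character sum $\sum_\sigma \chi_\lambda(\sigma)\, a_{1,\sigma(1)} \cdots a_{n,\sigma(n)}$ for all three partitions of $3$, recovering the determinant and permanent as the extreme cases:

```python
from itertools import permutations
from math import prod

def cycle_type(s):
    """Cycle type of a permutation given as a tuple (0-indexed one-line notation)."""
    seen, lengths = set(), []
    for i in range(len(s)):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j)
                j, L = s[j], L + 1
            lengths.append(L)
    return tuple(sorted(lengths, reverse=True))

def immanant(A, chi):
    """Imm_chi(A) = sum over sigma of chi(sigma) * prod_i a_{i, sigma(i)}."""
    n = len(A)
    return sum(chi(s) * prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

# Irreducible characters of S_3, tabulated by cycle type.
chi_sign = lambda s: {(1, 1, 1): 1, (2, 1): -1, (3,): 1}[cycle_type(s)]  # lambda = (1,1,1)
chi_triv = lambda s: 1                                                    # lambda = (3)
chi_std  = lambda s: {(1, 1, 1): 2, (2, 1): 0, (3,): -1}[cycle_type(s)]   # lambda = (2,1)

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(immanant(A, chi_sign), immanant(A, chi_triv), immanant(A, chi_std))  # → -3 463 -80
```

The first value is $\det A$ and the second is $\operatorname{per} A$, as expected.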
And now the proof. (I would readily welcome suggestions to simplify/correct this). Fix the notation in the statement of the theorem. Consider $(\mathbb{C}^n)^{\otimes n}$, which is a $(\mathrm{GL}_n, S_n)$ bimodule, with the usual left action of $\mathrm{GL}_n$ and the right $S_n$ action $v_1 \otimes \cdots \otimes v_n \cdot \sigma = v_{\sigma^{-1}(1)} \otimes \cdots \otimes v_{\sigma^{-1}(n)}$. It is well-known that these actions commute and centralise each other, and the bimodule decomposes as $$ (\mathbb{C}^n)^{\otimes n} = \bigoplus_{\mu \, \vdash \, n} S_\mu \mathbb{C}^n \otimes \Sigma^\mu $$ where $\Sigma^\mu$ denotes the irreducible $S_n$ representation associated to $\mu$. In order to find the immanant $\mathrm{Imm}_\lambda A$, we will examine the operator $A^{\otimes n}$ acting on $(\mathbb{C}^n)^{\otimes n}$, project it onto the correct partition $\lambda$, further restrict it to the $(1, \ldots, 1)$ weight space, and finally take the trace.
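As a sanity check on this decomposition, one can count dimensions: $\dim S_\mu \mathbb{C}^n$ is the number of semistandard tableaux of shape $\mu$ with entries in $[1, n]$, and $\dim \Sigma^\mu$ is the number of standard tableaux. A brute-force count (my own sketch) confirms $3^3 = 27 = 10 \cdot 1 + 8 \cdot 2 + 1 \cdot 1$ for $n = 3$:

```python
from itertools import product as cartesian

def ssyt_count(shape, n):
    """Count semistandard Young tableaux of the given shape with entries in 1..n.

    Brute force over all fillings: rows weakly increase, columns strictly increase.
    (Fine for the tiny shapes used here.)
    """
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    count = 0
    for filling in cartesian(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        rows_ok = all(T[r, c] <= T[r, c + 1] for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[r, c] < T[r + 1, c] for (r, c) in cells if (r + 1, c) in T)
        if rows_ok and cols_ok:
            count += 1
    return count

# Schur-Weyl for n = 3: dim (C^3)^{⊗3} = sum over mu of dim(S_mu C^3) * dim(Sigma^mu).
dims = {(3,): 1, (2, 1): 2, (1, 1, 1): 1}  # dim Sigma^mu = number of standard tableaux
total = sum(ssyt_count(mu, 3) * d for mu, d in dims.items())
print(total)  # → 27 = 3^3
```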
The operator $A$ may be written in standard coordinates as $A = \sum_{i, j} a_{ij} e_j \otimes e_i^*$, and so its $n$th tensor power is $$A^{\otimes n} = \sum_{\mathbf{i}, \mathbf{j}} a_{i_1 j_1} \cdots a_{i_n j_n} e_{j_1} \otimes \cdots \otimes e_{j_n} \otimes e_{i_1}^* \otimes \cdots \otimes e_{i_n}^*$$ where the multi-indices $\mathbf{i}, \mathbf{j}$ range over all length-$n$ sequences with entries in $[1, n]$. Restricting this to the weight space $(1, \ldots, 1)$ means discarding all components coming from multi-indices $\mathbf{i}, \mathbf{j}$ which are not permutations, and so we have $$A^{\otimes n}|_W = \sum_{\pi, \nu \in S_n} a_{\pi(1),\nu(1)} \cdots a_{\pi(n),\nu(n)} e_{\nu(1)} \otimes \cdots \otimes e_{\nu(n)} \otimes e_{\pi(1)}^* \otimes \cdots \otimes e_{\pi(n)}^*$$
Denote by $P_\lambda \in \mathbb{C}[S_n]$ the central idempotent projecting onto the $\lambda$-isotypic component (the copies of the Specht module $\Sigma^\lambda$): it may be written $$ P_\lambda = \frac{\dim \Sigma^{\lambda}}{n!} \sum_{\sigma \in S_n} \chi_\lambda(\sigma) \sigma^{-1}$$ and so the trace of $S_\lambda A |_W$ will be almost the same as the trace of $(A^{\otimes n} |_W) P_\lambda$, but we have to account for the multiplicity space $\Sigma^\lambda$ by multiplying by $\frac{1}{\dim \Sigma^\lambda}$. So,
$$\begin{aligned} \operatorname{trace} (S_\lambda A |_W) &= \frac{1}{\dim \Sigma^\lambda} \operatorname{trace} \left( (A^{\otimes n} |_W) P_\lambda \right) \\ &= \frac{1}{n!} \sum_{\sigma \in S_n} \chi_\lambda(\sigma) \operatorname{trace} \left( \left(\sum_{\pi, \nu \in S_n} a_{\pi(1),\nu(1)} \cdots a_{\pi(n),\nu(n)} e_{\nu(1)} \otimes \cdots \otimes e_{\nu(n)} \otimes e_{\pi(1)}^* \otimes \cdots \otimes e_{\pi(n)}^* \right) \cdot \sigma^{-1} \right) \\ &= \frac{1}{n!} \sum_{\sigma \in S_n} \chi_\lambda(\sigma) \operatorname{trace} \left(\sum_{\pi, \nu \in S_n} a_{\pi(1),\nu(1)} \cdots a_{\pi(n),\nu(n)} e_{\nu \sigma(1)} \otimes \cdots \otimes e_{\nu \sigma(n)} \otimes e_{\pi(1)}^* \otimes \cdots \otimes e_{\pi(n)}^* \right) \\ &= \frac{1}{n!} \sum_{\sigma \in S_n} \chi_\lambda(\sigma) \left(\sum_{\pi, \nu \in S_n; \nu \sigma = \pi} a_{\pi(1),\nu(1)} \cdots a_{\pi(n),\nu(n)} \right) \\ &= \sum_{\sigma \in S_n} \chi_\lambda(\sigma) \, a_{1, \sigma(1)} \cdots a_{n, \sigma(n)} = \operatorname{Imm}_\lambda(A) \\ \end{aligned}$$
where the fourth line follows from the fact that $e_i^* (e_j) = \delta_{ij}$, and the last line from reindexing the product and summing over $\nu$ (which cancels the $\frac{1}{n!}$), together with the symmetry $\chi_\lambda(\sigma) = \chi_\lambda(\sigma^{-1})$ enjoyed by the (real-valued) characters of $S_n$.
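For what it's worth, the whole argument can be checked numerically. The following sketch (my own code; the helper names are made up) carries out the steps of the proof for $n = 3$, $\lambda = (2,1)$: it indexes the basis of the weight space $W$ by permutations $\nu$ (via $b_\nu = e_{\nu(1)} \otimes e_{\nu(2)} \otimes e_{\nu(3)}$), computes the trace of $A^{\otimes 3}|_W$ composed with the right action of each $\sigma^{-1}$, sums against the character, and compares with the character-sum definition of the immanant:

```python
from fractions import Fraction
from itertools import permutations
from math import prod, factorial

n = 3
perms = list(permutations(range(n)))  # basis of W: b_nu = e_{nu(0)} ⊗ e_{nu(1)} ⊗ e_{nu(2)}

def cycle_type(s):
    seen, lengths = set(), []
    for i in range(n):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j)
                j, L = s[j], L + 1
            lengths.append(L)
    return tuple(sorted(lengths, reverse=True))

def inverse(p):
    inv = [0] * n
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

chi = {(1, 1, 1): 2, (2, 1): 0, (3,): -1}  # character of the lambda = (2,1) irrep of S_3

def twisted_trace(A, sigma):
    """Trace of (A^{⊗3}|_W) composed with the right action of sigma^{-1}.

    Since b_nu · sigma^{-1} = b_{nu∘sigma}, the diagonal coefficient at b_nu
    is prod_i A[nu(sigma^{-1}(i))][nu(i)].
    """
    inv = inverse(sigma)
    return sum(prod(A[nu[inv[i]]][nu[i]] for i in range(n)) for nu in perms)

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]

# trace(S_lambda A|_W) = (1/dim) trace((A^{⊗n}|_W) P_lambda); the dim factors cancel,
# leaving an overall 1/n!.
lhs = Fraction(sum(chi[cycle_type(s)] * twisted_trace(A, s) for s in perms), factorial(n))

# Imm_lambda(A) by the character-sum definition.
rhs = sum(chi[cycle_type(s)] * prod(A[i][s[i]] for i in range(n)) for s in perms)

print(lhs, rhs)  # → -80 -80
```

(The factor $\dim \Sigma^\lambda$ in $P_\lambda$ cancels against the $\frac{1}{\dim \Sigma^\lambda}$ from the multiplicity space, which is why only $\frac{1}{n!}$ appears above.)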
Now I also have some follow-up questions:
- We've found that $\operatorname{Imm}_\lambda A$ is the trace of $S_\lambda A|_W$: what does the rest of $S_\lambda A|_W$ mean?
- Is there a nicer way of proving the above theorem?
And finally, please tell me if my answer needs cleaning up or clarification!