Given two basis sets for a finite Hilbert space, does an unbiased vector exist?
Let $\{A_n\}$ and $\{B_n\}$ be two bases for an $N$-dimensional Hilbert space. Does there exist a unit vector $V$ such that:
$$(V\cdot A_j)\;(A_j\cdot V) = (V\cdot B_j)\;(B_j\cdot V) = \frac{1}{N}\quad \text{for all } 1\le j\le N?$$
Notes and application:
That the $\{A_n\}$ and $\{B_n\}$ are bases means that
$$(A_j\cdot A_k) =\left\{\begin{array}{cl}
1&\;\text{if }j=k,\\
0&\;\text{otherwise}.\end{array}\right.$$
In physics notation, one might write $V\cdot A_j = \langle V\,|\,A_j\rangle$. In quantum mechanics, $P_{jk} = |\langle A_j|B_k\rangle|^2$ is the "transition probability" between the states $A_j$ and $B_k$. "Unbiased" means that the transition probabilities show no preference among the basis states. A much-studied subject in quantum information theory is that of "mutually unbiased bases," or MUBs. Two mutually unbiased bases satisfy
$|\langle A_j|B_k\rangle|^2 = 1/N\;\;$ for all $j,k$.
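As a concrete illustration (my addition, not part of the original question): the standard basis and the discrete Fourier basis form a standard example of a mutually unbiased pair, which can be checked numerically.

```python
# Numerical check that the standard basis A_j = e_j and the discrete
# Fourier basis B_k (columns of the unitary DFT matrix) are mutually
# unbiased: |<A_j|B_k>|^2 = 1/N for all j, k.
import numpy as np

N = 5
J, K = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(2j * np.pi * J * K / N) / np.sqrt(N)  # columns are the B_k

P = np.abs(F) ** 2  # P[j, k] = |<A_j | B_k>|^2, since A_j = e_j
assert np.allclose(P, 1 / N)                      # all equal to 1/N
assert np.allclose(F @ F.conj().T, np.eye(N))     # F is unitary
```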
If it is true that the vector $V$ always exists, then one can multiply the rows and columns of any unitary matrix by complex phases so as to obtain a unitary matrix where each row and column individually sums to one.
If true, then $U(n)$ can be written as follows:
$$U(n) = \exp(i\alpha)
\begin{pmatrix}1&0&\cdots&0\\0&e^{i\beta_1}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&e^{i\beta_{n-1}}\end{pmatrix}
M
\begin{pmatrix}1&0&\cdots&0\\0&e^{i\gamma_1}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&e^{i\gamma_{n-1}}\end{pmatrix}$$
where the Greek letters give complex phases and where $M$ is a "magic" unitary matrix, that is, $M$ has all rows and columns individually sum to 1.
And $M$ can be written as $M=\exp(im)$ where $m$ is Hermitian and has all rows and columns sum to 0. What's significant about this is that the $m$ form a Lie algebra. Thus unitary matrices can be thought of as complex phases, plus a Lie algebra. This is a new decomposition of unitary matrices.
Since $m$ is Hermitian and has all rows and columns sum to 0, it is equivalent to an $(n-1)\times(n-1)$ Hermitian matrix with no restriction on the row and column sums. And this shows that $U(n)$ is equivalent to complex phases added to an object (the $M$ matrices) that is equivalent to $U(n-1)$. This gives a recursive definition of unitary matrices entirely in terms of complex phases.
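The claim that such an $m$ exponentiates to a "magic" unitary can be verified numerically. The sketch below (my addition; the projection $(I-P)m(I-P)$ with $P=J/n$ is one standard way to produce a Hermitian matrix with zero row and column sums) checks that $M=\exp(im)$ is unitary with all row and column sums equal to $1$, which follows because $\vec 1$ is a $0$-eigenvector of $m$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random Hermitian matrix.
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
m = (Z + Z.conj().T) / 2

# Project with (I - P) m (I - P), where P = J/n is the orthogonal
# projection onto the all-ones vector: this keeps m Hermitian and
# makes every row and column sum to zero.
P = np.ones((n, n)) / n
m = m - P @ m - m @ P + P @ m @ P
assert np.allclose(m @ np.ones(n), 0)  # row sums vanish (columns too, by symmetry)

# M = exp(i m) via the eigendecomposition of the Hermitian m.
w, V = np.linalg.eigh(m)
M = V @ np.diag(np.exp(1j * w)) @ V.conj().T

assert np.allclose(M @ M.conj().T, np.eye(n))  # M is unitary
assert np.allclose(M.sum(axis=1), 1)           # each row sums to 1
assert np.allclose(M.sum(axis=0), 1)           # each column sums to 1
```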
Solution 1:
I believe the answer to be yes, and it follows by some symplectic geometry of Lagrangian intersections.
Let $U$ be the unitary matrix such that $B_j = U A_j$. Without loss of generality, we may also assume that $A_j = e_j$, so that $B_j = U e_j$.
We will identify $\mathbb C^N = \mathbb R^{2N}$.
Then, the first condition on the vector $V$ is that: $$ |(V, e_j)|^2 = \frac{1}{N}, \qquad j=1, \dots, N. $$ This is equivalent to saying that $V = \frac{1}{\sqrt{N}} \sum \mathrm e^{i \theta_j} e_j$, or, in other words, that $V$ lies in the Lagrangian torus $L$ in $\mathbb R^{2N}$ with the standard symplectic structure $\sum dx_j \wedge dy_j$, defined by $L = \{ x_j^2 + y_j^2 = \frac{1}{N} \text{ for all } j \}$.
The second condition on $V$ is that $$ |(V, U e_j)|^2 = \frac{1}{N}. $$ Thus, $U^* V$ also should lie in the torus $L$. Thus, the vector $V$ exists if and only if $L \cap UL$ is non-empty.
(Note the first condition gives automatically that $V$ is a unit vector.)
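The intersection $L \cap UL$ can be illustrated numerically for $N=2$ (my addition; a grid search is sufficient here, since for $N=2$ the defect reduces to a sinusoid $\Re(c\,e^{i\phi})$ in the single relative phase, which always crosses zero).

```python
# Brute-force search for a point of L whose image under U^* is also in L,
# for N = 2 and a random unitary U.
import numpy as np

rng = np.random.default_rng(1)
Z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(Z)  # QR of a random complex matrix gives a unitary U

best = np.inf
for phi in np.linspace(0, 2 * np.pi, 20000, endpoint=False):
    V = np.array([1, np.exp(1j * phi)]) / np.sqrt(2)  # a point of the torus L
    W = U.conj().T @ V                                 # U^* V
    defect = np.max(np.abs(np.abs(W) ** 2 - 0.5))      # distance from UL along |.|^2
    best = min(best, defect)

print("smallest defect over the grid:", best)
assert best < 1e-2  # an (approximate) intersection point was found
```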
Right now, I don't see how to take advantage of the linearity in this problem, so I will use an incredibly high powered theory (Floer theory). If I think of a better solution, I will update.
Notice that the action of $U$ on $\mathbb C^N$ induces a map on $\mathbb CP^{N-1}$. Furthermore, if we write $U=\mathrm{e}^{iH}$ for a Hermitian $H$, then $U$ is the time-1 map of the Hamiltonian flow generated by the Hamiltonian $$h(v) = \frac{1}{2} \Re (v, Hv).$$
Finally, we note that $L$ projects to the Clifford torus $L'$ in $\mathbb CP^{N-1}$. It is known for Floer theoretic reasons (not sure who first proved it... there are now many proofs in the literature) that the Clifford torus is not Hamiltonian displaceable, so there must always exist an intersection point. After normalizing, this lifts to an intersection point in $\mathbb C^N$, as desired.
Note that the Floer homology argument is a very powerful tool. I suspect that a much simpler proof can be found, since this doesn't use the linear structure.
EDIT: Apparently my use of the term "Clifford torus" is non-standard. Here is what I mean by it: Consider $\mathbb CP^{N-1}$ as the quotient of the unit sphere in $\mathbb C^{N}$ by the $S^1$ action of multiplication by a unit complex number (as we have used it here). In the unit sphere there is a real $N$-dimensional torus given by $|z_1| = |z_2| = \dots = |z_N| = \frac{1}{\sqrt{N}}$. The image of this torus under the quotient map is an $(N-1)$-dimensional torus in $\mathbb CP^{N-1}$. Equivalently, it is the torus given in homogeneous coordinates on $\mathbb CP^{N-1}$ by $[e^{i \theta_1}, e^{i \theta_2}, \dots, e^{i \theta_{N-1}}, 1]$.
Solution 2:
This is not a full answer, but I don't intend to work on this in the next two weeks ;-), so I thought I'd put it up here and perhaps someone else can complete it.
Writing $U_{jk}=\langle A_j\mid B_k\rangle$ (with $U$ unitary), we can formulate the problem like this: $V$ must have a component of length $1/\sqrt{N}$ along each $B_k$, so we can write it as
$$V=\frac{1}{\sqrt{N}}\sum_k\mathrm{e}^{\mathrm{i}\phi_k}B_k\;.$$
Then the condition that the projections onto the $A_j$ also all have length $1/\sqrt{N}$ becomes
$$\sqrt{N}\langle A_j \mid V\rangle=\sum_k\langle A_j\mid\mathrm{e}^{\mathrm{i}\phi_k} B_k\rangle=\sum_k U_{jk}\mathrm{e}^{\mathrm{i}\phi_k}=\mathrm{e}^{\mathrm{i}\theta_j}\;.$$
To manipulate this more easily, we can introduce diagonal matrices $\Phi$ and $\Theta$ with diagonal elements $\phi_k$ and $\theta_j$, respectively. Then the condition becomes
$$U\mathrm{e}^{\mathrm{i}\Phi}\vec{1}=\mathrm{e}^{\mathrm{i}\Theta}\vec{1}\;,$$
where $\vec{1}$ is the vector with all components $1$.
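This rewriting is easy to sanity-check numerically (my addition): with $A_j = e_j$ and $B_k = U e_k$, the vector $V$ built from arbitrary phases $\phi_k$ satisfies $\sqrt{N}\langle A_j\mid V\rangle = (U\mathrm{e}^{\mathrm{i}\Phi}\vec 1)_j$, so the unbiasedness condition is exactly that this vector has unimodular entries.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
U, _ = np.linalg.qr(Z)                  # unitary, U[j, k] = <A_j | B_k>
phi = rng.uniform(0, 2 * np.pi, size=N)

# V = (1/sqrt(N)) sum_k e^{i phi_k} B_k, with B_k the k-th column of U.
V = (np.exp(1j * phi)[None, :] * U).sum(axis=1) / np.sqrt(N)

lhs = np.sqrt(N) * V                    # sqrt(N) <e_j | V>, componentwise
rhs = U @ np.exp(1j * phi)              # U e^{i Phi} applied to the all-ones vector
assert np.allclose(lhs, rhs)
```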
Now all unitary matrices $U$ can be written as $U=\mathrm{e}^{\mathrm{i}H}$ with $H$ Hermitian, and conversely every such exponential is a unitary matrix. Thus, we can always find a vector $V$ for any unitary $U$ if and only if we can always find $V$ for any Hermitian $H$. In particular, we can consider the one-dimensional family of unitary matrices $\mathrm{e}^{\mathrm{i}\lambda H}$ with Hermitian matrix $H$ and real number $\lambda$, and our problem then becomes showing that for arbitrary $H$ we can find $V$ for all $\lambda$. In this way we can consider the path from the identity, where we know that $V$ exists, to an arbitrary unitary matrix $\mathrm{e}^{\mathrm{i}\lambda H}$ and reduce the problem to a differential equation along this path. Thus, letting $\Phi$ and $\Theta$ (but not $H$) depend on $\lambda$, we get
$$\mathrm{e}^{\mathrm{i}\lambda H} \mathrm{e}^{\mathrm{i}\Phi(\lambda)}\vec{1}= \mathrm{e}^{\mathrm{i}\Theta(\lambda)}\vec{1}\;,$$
and differentiating with respect to $\lambda$ yields
$$H\mathrm{e}^{\mathrm{i}\lambda H} \mathrm{e}^{\mathrm{i}\Phi}\vec{1}+ \mathrm{e}^{\mathrm{i}\lambda H}\mathrm{e}^{\mathrm{i}\Phi}\Phi'\vec{1} = \mathrm{e}^{\mathrm{i}\Theta}\Theta'\vec{1}\;,$$
$$\mathrm{e}^{-\mathrm{i}\Theta}H\mathrm{e}^{\mathrm{i}\lambda H} \mathrm{e}^{\mathrm{i}\Phi}\vec{1}+ \mathrm{e}^{-\mathrm{i}\Theta}\mathrm{e}^{\mathrm{i}\lambda H}\mathrm{e}^{\mathrm{i}\Phi}\Phi'\vec{1} = \Theta'\vec{1}\;,$$
$$\mathrm{e}^{-\mathrm{i}\Theta}H \mathrm{e}^{\mathrm{i}\Theta}\vec{1}+ \mathrm{e}^{-\mathrm{i}\Theta}\mathrm{e}^{\mathrm{i}\lambda H}\mathrm{e}^{\mathrm{i}\Phi}\vec{\Phi}' = \vec{\Theta}'\;,$$
where the prime denotes the derivative with respect to $\lambda$, and $\vec{\Phi}'=\Phi'\vec{1}$ and $\vec{\Theta}'=\Theta'\vec{1}$ are real vectors containing the derivatives of the $\phi_k$ and $\theta_j$, respectively.
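As a sanity check on the differentiated condition (my addition), consider the special case of a diagonal $H$: there $\Theta(\lambda)=\lambda\,\operatorname{diag}(H)+\Phi$ with constant $\Phi$ is an explicit solution, so $\vec{\Theta}'$ is the vector of diagonal entries of $H$ and $\vec{\Phi}'=0$, and both the condition and its derivative can be verified directly.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 3
h = rng.uniform(-1, 1, size=N)            # H = diag(h), Hermitian
phi = rng.uniform(0, 2 * np.pi, size=N)   # constant Phi along the path
lam = 0.7
H = np.diag(h)
theta = lam * h + phi                     # Theta(lambda) for diagonal H

# The condition e^{i lam H} e^{i Phi} 1 = e^{i Theta} 1 holds along the path:
cond = np.allclose(np.exp(1j * lam * h) * np.exp(1j * phi), np.exp(1j * theta))
assert cond

# Differentiated condition with Theta' = h and Phi' = 0:
theta_p, phi_p = h, np.zeros(N)
lhs = (np.exp(-1j * theta) * (H @ np.exp(1j * theta))
       + np.exp(-1j * theta) * np.exp(1j * lam * h) * np.exp(1j * phi) * phi_p)
assert np.allclose(lhs, theta_p)
```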
If we now take the perspective that we have reached a solution $\Phi$, $\Theta$ at a certain $\lambda$ and want to determine how $\Phi$ and $\Theta$ need to change with respect to $\lambda$ to maintain the condition along the path, then we can consider everything except for $\vec{\Phi}'$ and $\vec{\Theta}'$ as given, and we obtain a linear system of $N$ complex equations for the $2N$ real variables $\phi'_k$ and $\theta'_j$, which will have a unique solution in the general case. If we could somehow show that the system cannot become singular, it would follow that we have a well-defined and well-behaved system of first-order differential equations which for given initial conditions determines a unique solution for the $\phi_k$ and $\theta_j$ as a function of $\lambda$. Since we can start out with arbitrary angles $\phi_k=\theta_k$ at the identity, this would yield an $N$-dimensional family of solutions along each path. In case the system of linear equations can become singular, one might still be able to show that there is at least one member of this family for which it doesn't.
[Edit:] I just realized that this is actually a contradiction: there can't be an $N$-dimensional family of solutions along the path and yet unique derivatives $\phi'_k$ and $\theta'_j$, since the derivatives would differ depending on which family member one moves to. This is resolved by looking at the differentiated condition at the identity (i.e. $\lambda=0$), which we can write as
$$\mathrm{e}^{-\mathrm{i}\Theta}H\mathrm{e}^{\mathrm{i}\Theta}\vec{1}=\vec{\Theta}'-\vec{\Phi}'\;.$$
The right-hand side is real, and that gives a condition on $\vec{\Theta}$ (and hence on $\vec{\Phi}=\vec{\Theta}$) at the identity that must be fulfilled in order for $\vec{\Theta}$ to be a suitable starting point for solutions along the path $\mathrm{e}^{\mathrm{i}\lambda H}$. These are $N$ reality conditions for $N$ real parameters, so one might be able to show that this condition has a solution. As for uniqueness, the solution will not be unique if $H$ has a canonical basis vector ($\vec{e}_j$ with $e_{jk}=\delta_{jk}$) as an eigenvector (meaning that the two bases share a basis vector up to phase), and this also leads to a corresponding underdetermination in the linear system for $\vec{\Phi}'$ and $\vec{\Theta}'$. Thus, if one wanted to use this approach to show uniqueness of the solution (Sam has already shown existence in the meantime), the appropriate conjecture might be that the solution is unique up to an arbitrary phase for each canonical basis vector that is an eigenvector of $H$.
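For $N=2$ the reality condition can be solved by hand (my addition): the $j$-th component of $\mathrm{e}^{-\mathrm{i}\Theta}H\mathrm{e}^{\mathrm{i}\Theta}\vec{1}$ is $H_{jj}+H_{jk}\mathrm{e}^{\mathrm{i}(\theta_k-\theta_j)}$, so choosing $\theta_2-\theta_1=-\arg H_{12}$ makes both components real at once, which the following sketch checks.

```python
import numpy as np

rng = np.random.default_rng(4)
Z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (Z + Z.conj().T) / 2                # random 2x2 Hermitian matrix

delta = -np.angle(H[0, 1])              # choose theta_2 - theta_1 = -arg(H_12)
theta = np.array([0.0, delta])

# e^{-i Theta} H e^{i Theta} applied to the all-ones vector:
v = np.exp(-1j * theta) * (H @ np.exp(1j * theta))
assert np.allclose(v.imag, 0)           # the reality condition holds
```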