Is it normal that a pure math student doesn't know vector analysis?
Today I was watching a series of online video lectures about electromagnetism. At some point of the lecture, the professor used this vector calculus identity: $$ \nabla\times\left(\mathbf{A}\times\mathbf{B}\right)=\mathbf{A}\left(\nabla\cdot\mathbf{B}\right)-\mathbf{B}\left(\nabla\cdot\mathbf{A}\right)+\left(\mathbf{B}\cdot\nabla\right)\mathbf{A}-\left(\mathbf{A}\cdot\nabla\right)\mathbf{B}$$
So, I tried to prove it by using the "bac-cab" identity and I kept getting wrong results. Later, I realized that the "bac-cab" identity for vector triple products doesn't hold anymore when vector operators like $\nabla$ are involved!
Is it normal that I don't know any vector analysis? I've taken multivariable calculus, but we weren't taught anything about such identities! Did my university shortchange us and teach us less than we should know, or is it normal for pure math students not to know vector analysis identities as well as engineers and physicists do?
Finally, I'd appreciate it if someone could recommend a good introductory book on vector calculus that teaches how to work with such vector equations and identities.
The comments made thus far give excellent advice. I thought you might like to see the details.
We need the well-known identity $\sum_{j=1}^{3} \epsilon_{ikj}\epsilon_{lmj} = \delta_{il}\delta_{km}-\delta_{kl}\delta_{im}$. This is the dark heart of the BAC-CAB identity.
\begin{align}
\notag \nabla \times (\vec{A} \times \vec{B}) &= \sum_{i,j,k=1}^3 \epsilon_{ijk}\partial_i (\vec{A} \times \vec{B})_j\,\widehat{x}_k \\
&= \sum_{i,j,k=1}^3 \epsilon_{ijk}\partial_i \left(\sum_{l,m=1}^3A_lB_m\epsilon_{lmj} \right) \widehat{x}_k \\
&= \sum_{i,j,k=1}^3\sum_{l,m=1}^3 \epsilon_{ijk}\epsilon_{lmj}\partial_i \left(A_lB_m \right) \widehat{x}_k \\
&= -\sum_{i,j,k,l,m=1}^3 \color{red}{\epsilon_{ikj}\epsilon_{jlm}}\partial_i \left(A_lB_m \right) \widehat{x}_k \\
&= -\sum_{i,k,l,m=1}^3( \color{red}{\delta_{il}\delta_{km}-\delta_{im}\delta_{kl}})\partial_i \left(A_lB_m \right) \widehat{x}_k \\
&= -\sum_{i,k,l,m=1}^3\delta_{il}\delta_{km}\partial_i \left(A_lB_m \right) \widehat{x}_k+\sum_{i,k,l,m=1}^3\delta_{im}\delta_{kl}\partial_i \left(A_lB_m \right) \widehat{x}_k \\
&= -\sum_{i,k=1}^3\partial_i \left(A_iB_k \right) \widehat{x}_k+\sum_{i,k=1}^3\partial_i \left(A_kB_i \right) \widehat{x}_k \\
&= -\sum_{i,k=1}^3\left((\partial_i A_i)B_k+A_i\partial_i B_k \right) \widehat{x}_k+\sum_{i,k=1}^3 \left((\partial_iA_k)B_i+A_k\partial_iB_i \right) \widehat{x}_k \\
&= -\sum_{i,k=1}^3(\partial_i A_i)B_k\widehat{x}_k-\sum_{i,k=1}^3A_i\partial_i B_k\widehat{x}_k +\sum_{i,k=1}^3 B_i\partial_iA_k\widehat{x}_k+\sum_{i,k=1}^3(\partial_iB_i)A_k \widehat{x}_k \\
&= -(\nabla \cdot \vec{A})\vec{B}-(\vec{A} \cdot \nabla )\vec{B}+(\nabla \cdot \vec{B})\vec{A}+(\vec{B} \cdot \nabla )\vec{A}
\end{align}
If it's any consolation, I was a math & physics double major and this stuff escaped me until I had the good fortune, in graduate school, of studying with a student from Spain. I had pages and pages of work where he had three lines on a particular problem. It hit me: I should probably use $\epsilon_{ijk}$ for vector identity calculations.
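As a sanity check on the derivation above, here is a minimal sympy sketch that verifies the identity symbolically. The specific fields $\vec{A}$, $\vec{B}$ are arbitrary polynomial choices of mine, not anything from the derivation; any smooth fields would do.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)

# Arbitrary polynomial test fields (any smooth fields would do).
A = sp.Matrix([x*y, y*z, z*x])
B = sp.Matrix([x**2, y**2, x*y*z])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def div(F):
    return sum(sp.diff(F[i], V[i]) for i in range(3))

def directional(F, G):
    # (F . nabla) G, applied componentwise
    return sp.Matrix([sum(F[i] * sp.diff(G[k], V[i]) for i in range(3))
                      for k in range(3)])

lhs = curl(A.cross(B))
rhs = A*div(B) - B*div(A) + directional(B, A) - directional(A, B)
print(sp.simplify(lhs - rhs))  # zero vector
```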
Moreover, there is a whole family of such identities, contracting Levi-Civita symbols into antisymmetrized sums of products of Kronecker deltas; the identity this post begins with is just the start. These are used in the tensor calculus of general relativity.
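The contraction identity this answer begins with can also be checked by brute force, since each index only runs over three values. A small sketch (the helper `delta` is just a hand-rolled Kronecker delta):

```python
import sympy as sp

# Brute-force check of
#   sum_j eps_{ikj} eps_{lmj} = delta_{il} delta_{km} - delta_{kl} delta_{im}
# over all 81 combinations of the free indices i, k, l, m.
def delta(a, b):
    return 1 if a == b else 0

for i in range(3):
    for k in range(3):
        for l in range(3):
            for m in range(3):
                contracted = sum(sp.LeviCivita(i, k, j) * sp.LeviCivita(l, m, j)
                                 for j in range(3))
                assert contracted == delta(i, l)*delta(k, m) - delta(k, l)*delta(i, m)
print("contraction identity holds for all 81 index combinations")
```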
I'm going to prove two other vector calculus identities to check whether I've understood the physicists' notation well :D I'll use Einstein's summation convention as well. ;-)
1. $$\nabla \cdot (\mathbf{A} \times \mathbf{B}) = \mathbf{B} \cdot (\nabla \times \mathbf{A}) - \mathbf{A} \cdot (\nabla \times \mathbf{B})$$
$$\nabla \cdot (\mathbf{A} \times \mathbf{B}) = \partial_i(\mathbf{A}\times\mathbf{B})_i=\partial_i(\epsilon_{ijk}A_jB_k)=\epsilon_{ijk}\partial_i(A_jB_k)=\epsilon_{ijk}A_j\partial_iB_k+\epsilon_{ijk}B_k\partial_iA_j$$
$$\epsilon_{ijk}A_j\partial_iB_k+\epsilon_{ijk}B_k\partial_iA_j=-\epsilon_{jik}A_j\partial_iB_k+\epsilon_{kij}B_k\partial_iA_j=-A_j(\epsilon_{jik}\partial_iB_k)+B_k(\epsilon_{kij}\partial_iA_j)$$ $$-A_j(\epsilon_{jik}\partial_iB_k)+B_k(\epsilon_{kij}\partial_iA_j)=-A_j(\nabla \times \mathbf{B})_j+B_k(\nabla \times \mathbf{A})_k = -\mathbf{A}\cdot(\nabla \times \mathbf{B})+\mathbf{B}\cdot(\nabla \times \mathbf{A})$$
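As a sanity check on this first proof, the same index-notation expressions can be translated almost verbatim into sympy with explicit sums over repeated indices. The polynomial fields below are arbitrary choices of mine:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)

# Arbitrary polynomial fields, written as plain lists of components.
A = [y*z, x**2, x*y]
B = [z**2, x*y, y**2]

# div(A x B) translated directly from index notation:
# eps_{ijk} d_i (A_j B_k), summed over all repeated indices.
lhs = sum(sp.LeviCivita(i, j, k) * sp.diff(A[j]*B[k], V[i])
          for i in range(3) for j in range(3) for k in range(3))

# (curl F)_i = eps_{ijk} d_j F_k
def curl_comp(F, i):
    return sum(sp.LeviCivita(i, j, k) * sp.diff(F[k], V[j])
               for j in range(3) for k in range(3))

rhs = sum(B[i]*curl_comp(A, i) - A[i]*curl_comp(B, i) for i in range(3))
print(sp.expand(lhs - rhs))  # 0
```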
2. $$ \nabla \times \left( \nabla \times \mathbf{A} \right) = \nabla(\nabla \cdot \mathbf{A}) - \nabla^{2}\mathbf{A}$$
where $\nabla^{2}\mathbf{A}=\langle \nabla^{2}A_x, \nabla^{2}A_y, \nabla^{2}A_z\rangle$ is called the vector Laplacian.
$$(\nabla \times \left( \nabla \times \mathbf{A} \right))_i = \epsilon_{ijk}\partial_j(\nabla\times \mathbf{A})_k=\epsilon_{ijk}\partial_j(\epsilon_{kmn}\partial_mA_n)=\epsilon_{ijk}\epsilon_{kmn}\partial_j(\partial_mA_n)$$ Now we use the famous equality: $$\epsilon_{ijk}\epsilon_{kmn}=\delta_{im}\delta_{jn}-\delta_{in}\delta_{jm}$$
$$(\nabla \times \left( \nabla \times \mathbf{A} \right))_i= \delta_{im}\delta_{jn}\partial_j(\partial_mA_n)-\delta_{in}\delta_{jm}\partial_j(\partial_mA_n)=\partial_j(\partial_iA_j)-\partial_j(\partial_jA_i)$$ $$\partial_j(\partial_iA_j)-\partial_j(\partial_jA_i)=\partial_i(\partial_jA_j)-(\partial_j\partial_j)A_i=\partial_i(\nabla \cdot\mathbf{A})-(\nabla^{2}\mathbf{A})_i=(\nabla(\nabla \cdot \mathbf{A})-\nabla^{2}\mathbf{A})_i$$
I'm sure the last line needs some modification, because the LHS is the $i$-th component of $\nabla \times \left( \nabla \times \mathbf{A} \right)$ while on the RHS a vector ($\nabla^{2}\mathbf{A}$) has shown up sooner than it should.
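To reassure myself about that last step, the curl-of-curl identity can be checked componentwise in sympy; again the test field is an arbitrary polynomial choice:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Arbitrary polynomial test field.
A = sp.Matrix([x**2*y, y*z**2, x*z**3])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

div_A = sp.diff(A[0], x) + sp.diff(A[1], y) + sp.diff(A[2], z)
grad_div = sp.Matrix([sp.diff(div_A, v) for v in (x, y, z)])
# the vector Laplacian: the scalar Laplacian of each component of A
vec_laplacian = sp.Matrix([sum(sp.diff(A[i], v, 2) for v in (x, y, z))
                           for i in range(3)])

lhs = curl(curl(A))
rhs = grad_div - vec_laplacian
print(sp.simplify(lhs - rhs))  # zero vector
```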
So, I've made the following assumptions in my calculations (Please verify them):
Partial derivatives obey the product rule, regardless of the subscripts involved in our calculations: $\partial_i(X_jY_k) = (\partial_iX_j)Y_k + X_j(\partial_iY_k)$.
If we assume the existence and continuity of second derivatives involved in our calculations, then since partial derivatives commute we have $\partial_i\partial_j= \partial_j\partial_i$.
I've assumed that $\epsilon_{ijk}$ is constant, so it can be pulled out of (or into) partial derivatives freely.
EDIT:
I want to prove that:
$$ \nabla(\mathbf{A} \cdot \mathbf{B}) = (\mathbf{A} \cdot \nabla)\mathbf{B} + (\mathbf{B} \cdot \nabla)\mathbf{A} + \mathbf{A} \times (\nabla \times \mathbf{B}) + \mathbf{B} \times (\nabla \times \mathbf{A}) $$
$$(\nabla(\mathbf{A} \cdot \mathbf{B}))_i = \partial_i (\mathbf{A}\cdot\mathbf{B})=\partial_i(A_jB_j)=\partial_i(A_j)B_j+\partial_i(B_j)A_j$$
I don't see how I should move forward. I'm stuck.
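I can't finish the index manipulation, but at least the target identity itself checks out symbolically. A minimal sympy sketch, with arbitrary polynomial fields of my own choosing:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)

# Arbitrary polynomial fields.
A = sp.Matrix([x*y*z, y**2, x*z])
B = sp.Matrix([z**2, x + y, x*y])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def directional(F, G):
    # (F . nabla) G, applied componentwise
    return sp.Matrix([sum(F[i] * sp.diff(G[k], V[i]) for i in range(3))
                      for k in range(3)])

lhs = sp.Matrix([sp.diff(A.dot(B), v) for v in V])
rhs = (directional(A, B) + directional(B, A)
       + A.cross(curl(B)) + B.cross(curl(A)))
print(sp.simplify(lhs - rhs))  # zero vector
```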