Is it faster to multiply a matrix by its transpose than ordinary matrix multiplication?

After Sivaram's suggestion, I'm turning my comment into an answer.

In general, $(AB)^\textrm{T}=B^\textrm{T}A^\textrm{T}$, so for the product of a matrix with its transpose you have $(AA^\textrm{T})^\textrm{T}=(A^\textrm{T})^\textrm{T}A^\textrm{T}=AA^\textrm{T}$; that is, $AA^\textrm{T}$ is symmetric. You can also see this directly by observing that if the $ij$ entry of $A$ is $a_{ij}$, then the $ij$ entry of $AA^\textrm{T}$ is $\sum_{k=1}^n a_{ik}a_{jk}$, which is unchanged if you swap $i$ and $j$. Therefore if you compute the entries with $i\leq j$, you get the entries with $i>j$ for free. There are $1+2+3+\cdots+n=\frac{n(n+1)}{2}$ such entries, instead of the $n^2$ entries you would generally need to compute.
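To make this concrete, here is a minimal sketch in Python/NumPy of the triangle-and-mirror idea (the function name `gram` and the use of NumPy are my own choices, not anything from the question):

```python
import numpy as np

def gram(A):
    """Compute A @ A.T, filling only the upper-triangular entries
    (i <= j) and mirroring them into the lower half by symmetry."""
    m = A.shape[0]
    G = np.empty((m, m))
    for i in range(m):
        for j in range(i, m):
            # The (i, j) entry of A A^T is sum_k a_ik * a_jk,
            # i.e. the dot product of rows i and j of A.
            G[i, j] = np.dot(A[i, :], A[j, :])
            G[j, i] = G[i, j]  # the (j, i) entry comes for free
    return G
```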

(I was thinking of $n$-by-$n$ matrices, but if your input is $m$-by-$n$, the product $AA^\textrm{T}$ is $m$-by-$m$, so this gives you $\frac{m(m+1)}{2}$ entry computations, each of which is a sum of $n$ products.)
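As a quick sanity check of the `gram` sketch above on a rectangular input (again, the names are mine):

```python
A = np.random.rand(3, 5)              # m = 3, n = 5
print(np.allclose(gram(A), A @ A.T))  # True: only 6 of the 9 entries were computed
```

Here $\frac{m(m+1)}{2}=6$ of the $m^2=9$ entries of the $3$-by-$3$ result are computed explicitly; the rest are copied by symmetry.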