Adding two polar vectors

Here is another way forward that relies on straightforward vector algebra. Let $\vec r_1$ and $\vec r_2$ denote vectors with magnitudes $r_1$ and $r_2$, respectively, and with angles $\phi_1$ and $\phi_2$, respectively.

Let $\vec r$ be the vector with magnitude $r$ and angle $\phi$ that denotes the sum of $\vec r_1$ and $\vec r_2$. Thus,

$$\vec r=\vec r_1+\vec r_2 \tag 1$$

From the definition of the inner product we have

$$\vec r_1\cdot \vec r_2=r_1r_2\cos(\phi_2-\phi_1) \tag 2$$

and

$$\vec r_1\cdot \vec r=r_1r\cos(\phi-\phi_1)\tag 3$$

Using $(1)$ and $(2)$, we find $r^2$ can be written

$$\begin{align} r^2&=\vec r\cdot \vec r\\\\ &=(\vec r_1+\vec r_2)\cdot (\vec r_1+\vec r_2)\\\\ &=\vec r_1\cdot \vec r_1+\vec r_2\cdot \vec r_2+2\vec r_1\cdot \vec r_2\\\\ &=r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1) \end{align}$$

and thus $r$ is given by

$$\bbox[5px,border:2px solid #C0A000]{r=\sqrt{r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1)}}\tag 4$$

Next, using $(1)$, $(2)$, and $(3)$ yields

$$\begin{align} \vec r_1\cdot \vec r&=r_1r\cos(\phi-\phi_1)\\\\ &=\vec r_1\cdot (\vec r_1+\vec r_2)\\\\ &=r_1^2+r_1r_2\cos(\phi_2-\phi_1) \end{align}$$

whereupon solving for $\cos (\phi-\phi_1)$ and using $(4)$ for $r$ reveals

$$\cos(\phi-\phi_1)=\frac{r_1+r_2\cos(\phi_2-\phi_1)}{\sqrt{r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1)}}\tag 5$$

We can obtain the corresponding expression for $\sin(\phi-\phi_1)$ from the cross product: since $\vec r_1\times \vec r_1=0$, equation $(1)$ gives

$$\hat z\cdot(\vec r_1 \times \vec r)=\hat z\cdot(\vec r_1 \times \vec r_2)$$

which after straightforward arithmetic yields

$$\sin(\phi-\phi_1)=\frac{r_2\sin(\phi_2-\phi_1)}{\sqrt{r_1^2+r_2^2+2r_1r_2\cos(\phi_2-\phi_1)}} \tag 6$$

Dividing $(6)$ by $(5)$ and applying the two-argument arctangent shows that

$$\bbox[5px,border:2px solid #C0A000]{\phi =\phi_1+\operatorname{arctan2}\left(r_2\sin(\phi_2-\phi_1),r_1+r_2\cos(\phi_2-\phi_1)\right)} \tag 7$$

where $\operatorname{arctan2}(y,x)$ is the two-argument arctangent, which returns the angle of the point $(x,y)$.

Equations $(4)$ and $(7)$ provide the polar coordinates of $\vec r$ strictly in terms of the polar coordinates of $\vec r_1$ and $\vec r_2$. And the development of $(4)$, $(5)$, and $(6)$ did not appeal to Cartesian coordinates.
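
For reference, here is a minimal Python sketch of $(4)$ and $(7)$; the function name `add_polar` and the test values are mine, and the Cartesian lines at the end serve only as a sanity check.

```python
import math

def add_polar(r1, phi1, r2, phi2):
    """Add two plane vectors given in polar form, per equations (4) and (7)."""
    d = phi2 - phi1
    r = math.sqrt(r1**2 + r2**2 + 2 * r1 * r2 * math.cos(d))              # equation (4)
    phi = phi1 + math.atan2(r2 * math.sin(d), r1 + r2 * math.cos(d))      # equation (7)
    return r, phi

# sanity check against ordinary Cartesian addition (not part of the method)
r, phi = add_polar(3.0, 0.3, 2.0, 2.1)
x = 3.0 * math.cos(0.3) + 2.0 * math.cos(2.1)
y = 3.0 * math.sin(0.3) + 2.0 * math.sin(2.1)
print(r - math.hypot(x, y), phi - math.atan2(y, x))   # both ~0 (angle modulo 2*pi)
```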


NOTE:

In a parallel development, we can express the sum of two complex numbers $z_1=r_1e^{i\phi_1}$ and $z_2=r_2e^{i\phi_2}$ in terms of their magnitudes and arguments.

First, recall that the inner product of two complex numbers is given by

$$\begin{align} \langle z_1,z_2 \rangle &=z_1 \bar z_2\\\\ &=r_1r_2e^{i(\phi_1-\phi_2)} \end{align}$$

where $\bar z$ denotes the complex conjugate of $z$.

Next, we let $z=re^{i\phi}=z_1+z_2$ be the sum of $z_1$ and $z_2$. The magnitude of $z$ is given by

$$\begin{align} r&=\sqrt{\langle z,z \rangle}\\\\ &=\sqrt{\langle z_1+z_2,z_1+z_2 \rangle}\\\\ &=\sqrt{r_1^2+r_2^2+r_1r_2\left(e^{i(\phi_1-\phi_2)}+e^{-i(\phi_1-\phi_2)}\right)}\\\\ &=\sqrt{r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1)} \end{align}$$

Therefore, we have

$$\bbox[5px,border:2px solid #C0A000]{r=\sqrt{r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1)}} \tag 8$$

Finally, we find the argument of $z$ by taking the inner product of $z$ and $z_1$. To that end, we write

$$\begin{align} \langle z,z_1 \rangle &=rr_1e^{i(\phi-\phi_1)}\\\\ &=\langle z_1+z_2,z_1 \rangle \\\\ &=r_1^2+r_1 r_2 e^{i(\phi_2-\phi_1)} \end{align}$$

which reveals that

$$e^{i(\phi-\phi_1)}=\frac{r_1+r_2e^{i(\phi_2-\phi_1)}}{\sqrt{r_1^2+r_2^2+2r_1r_2\cos (\phi_2-\phi_1)}} \tag 9$$

whereupon taking the argument of both sides yields

$$\bbox[5px,border:2px solid #C0A000]{\phi =\phi_1+\operatorname{arctan2}\left(r_2\sin(\phi_2-\phi_1),r_1+r_2\cos(\phi_2-\phi_1)\right)} $$

Equations $(8)$ and $(9)$ provide the polar coordinates of $z$ strictly in terms of the polar coordinates of $z_1$ and $z_2$. Again, this development did not appeal to Cartesian coordinates.
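
As a quick numerical spot check of the inner-product identity and of $(8)$ together with the boxed argument formula, one can compare against direct complex addition in Python's `cmath` (the test values are arbitrary):

```python
import cmath, math

# arbitrary test values
r1, phi1, r2, phi2 = 1.5, 0.7, 2.5, -1.2
z1, z2 = cmath.rect(r1, phi1), cmath.rect(r2, phi2)

# inner product <z1, z2> = z1 * conj(z2) = r1*r2*exp(i*(phi1 - phi2))
print(abs(z1 * z2.conjugate() - cmath.rect(r1 * r2, phi1 - phi2)))   # ~0

# equation (8) and the boxed argument formula versus direct complex addition
z = z1 + z2
r = math.sqrt(r1**2 + r2**2 + 2 * r1 * r2 * math.cos(phi2 - phi1))
phi = phi1 + math.atan2(r2 * math.sin(phi2 - phi1), r1 + r2 * math.cos(phi2 - phi1))
print(abs(z) - r, cmath.phase(z) - phi)   # both ~0 (angle agrees modulo 2*pi)
```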


Here is a method using polar coordinates in a plane. (I do not think I want to attempt this in spherical coordinates or in any higher dimension.)

Given: a vector $v_1$ at angle $\theta_1$, of length $r_1$; another vector $v_2$ at angle $\theta_2$, of length $r_2$.

To find: the vector $v_3$ at angle $\theta_3$, of length $r_3$ such that $v_3 = v_1 + v_2$.

Procedure: find the difference between the angles $\theta_2$ and $\theta_1$, mapped to an equivalent angle of magnitude no greater than $\pi$ (using radian measure of angles; if you prefer to work in degrees, substitute $180$ wherever you see $\pi$). That is, set $$\alpha = \theta_2 - \theta_1 + 2n\pi$$ where $n$ is an integer chosen so that $-\pi \leq \alpha \leq \pi$. Then, using the Law of Cosines, set $$r_3 = \sqrt{r_1^2 + r_2^2 + 2 r_1 r_2 \cos\alpha}\,.$$ (This formula uses $+$ where the usual formula uses $-$ because $\alpha$ is an exterior angle of the triangle, rather than the interior angle used in the usual formula.)

You now have $r_3$.

Now if $\beta$ is the difference between the directions of $v_3$ and $v_1$, measured in the direction that gives the angle of the smallest possible magnitude, the area of the triangular region between $v_1$ and $v_3$ will be $\frac12 r_1 r_3 \sin\beta$. But the same triangular region also has area $\frac12 r_1 r_2 \sin\alpha$. So $$\frac12 r_1 r_3 \sin\beta = \frac12 r_1 r_2 \sin\alpha,$$ $$\sin\beta = \frac{r_2}{r_3} \sin\alpha.$$

This has two solutions for $\beta$. To find which solution applies, find $r_1 + r_2 \cos\alpha$. This is positive if $\beta$ is acute, negative if $\beta$ is obtuse. So take $$\beta = \begin{cases} \arcsin\left( \frac{r_2}{r_3} \sin\alpha \right) & \text{if }\ r_1 + r_2 \cos\alpha \geq 0, \\ \pi - \arcsin\left( \frac{r_2}{r_3} \sin\alpha \right) & \text{if }\ r_1 + r_2 \cos\alpha < 0. \end{cases}$$ Now let $$\theta_3 = \theta_1 + \beta.$$ If you prefer all your directions to be within certain bounds, for example you wish to have $0 \leq \theta_3 < 2\pi$, then set $\theta_3 = \theta_1 + \beta + 2m\pi$ where $m$ is an integer such that $\theta_3$ is within the bounds you prefer.
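
A minimal Python transcription of this procedure might look as follows; the function name is mine, `math.remainder` is used for the angle wrapping, and the degenerate case $r_3=0$ is not handled:

```python
import math

def add_polar_lawofcosines(r1, theta1, r2, theta2):
    """A direct transcription of the procedure above.

    Assumes the sum is not the zero vector, so dividing by r3 is safe."""
    # alpha = theta2 - theta1 mapped into [-pi, pi]
    alpha = math.remainder(theta2 - theta1, 2 * math.pi)
    # Law of Cosines with the exterior angle, hence the plus sign
    r3 = math.sqrt(r1**2 + r2**2 + 2 * r1 * r2 * math.cos(alpha))
    s = r2 * math.sin(alpha) / r3                 # sin(beta)
    if r1 + r2 * math.cos(alpha) >= 0:            # beta is acute
        beta = math.asin(s)
    else:                                         # beta is obtuse
        beta = math.pi - math.asin(s)
    # add 2*m*pi here if you want theta3 within particular bounds
    return r3, theta1 + beta
```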

Nowhere in this procedure did we compute the usual $x$ and $y$ components of the standard "convert to Cartesian coordinates" method. But the coordinates of the two vectors in a system that is rotated by an angle $\theta_1$ are $(r_1,0)$ and $(r_2 \cos\alpha, r_2\sin\alpha)$; and if you look carefully, you can find all four of those components in the formulas above (in the sense that $0$ is always present as a constant term in any formula, and the other three components are all present as factors of terms). This method therefore could be considered a crypto-Cartesian conversion method (the coordinates are "hidden" in the formulas), although it does avoid some of the computation that the usual Cartesian conversion method would entail.



In contrast, the usual Cartesian-coordinates method is:

$$\begin{align} x_3 &= r_1 \cos\theta_1 + r_2 \cos\theta_2,\\ y_3 &= r_1 \sin\theta_1 + r_2 \sin\theta_2,\\ r_3 &= \sqrt{x_3^2 + y_3^2},\\ \theta_3 &= \begin{cases} \arctan\left( \frac {y_3}{x_3} \right) & \text{if }\ x_3 > 0, \\ \pi + \arctan\left( \frac {y_3}{x_3} \right) & \text{if }\ x_3 < 0, \\ \frac\pi2 & \text{if $x_3 = 0$ and $y_3 > 0$}, \\ -\frac\pi2 & \text{if $x_3 = 0$ and $y_3 < 0$}. \\ \end{cases} \end{align}$$ If you are programming this on a computer using a math library, there will typically be a function atan2(y_3, x_3) that will compute $\theta_3$ without requiring you to explicitly test the signs of $x_3$ and $y_3$.
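
In Python, for instance, the whole case analysis collapses into a single call to `math.atan2` (the function name is mine):

```python
import math

def add_polar_cartesian(r1, theta1, r2, theta2):
    """The usual convert-to-Cartesian method, with atan2 replacing the case analysis."""
    x3 = r1 * math.cos(theta1) + r2 * math.cos(theta2)
    y3 = r1 * math.sin(theta1) + r2 * math.sin(theta2)
    return math.hypot(x3, y3), math.atan2(y3, x3)
```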


The Cartesian method is simpler and (I think) less prone to error either when following the steps with pencil, paper, and a calculator or when programming it in a computer, which may explain its popularity.

I noticed, however, that the Cartesian method requires a square root and five trigonometric functions (two sines, two cosines, and an arc tangent) whereas the non-Cartesian method uses only a square root and three trigonometric functions (one sine, one cosine, and an arc sine). So the non-Cartesian method appears to be more computationally efficient. In fact, this suggests a special use of the Cartesian method, where the coordinate frame is first rotated so that one of the vectors lies along a Cartesian axis, and rotated back to the original frame after the vector has been computed in the rotated frame.

That is, the method can be:

$$\begin{align} \alpha &= \theta_2 - \theta_1, \\ u &= r_2 \cos\alpha,\\ v &= r_2 \sin\alpha,\\ r_3 &= \sqrt{\left( r_1 + u \right)^2 + v^2}, \\ \beta &= \text{atan2}(v, r_1 + u), \\ \theta_3 &= \theta_1 + \beta \end{align}$$ using the atan2 function to simplify the formula.
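
A possible Python rendering of this hybrid method (the function name is mine):

```python
import math

def add_polar_hybrid(r1, theta1, r2, theta2):
    """The hybrid method: work in a frame rotated by theta1, then rotate back."""
    alpha = theta2 - theta1
    u = r2 * math.cos(alpha)      # Cartesian coordinates of v2 in the rotated frame
    v = r2 * math.sin(alpha)
    r3 = math.hypot(r1 + u, v)
    beta = math.atan2(v, r1 + u)
    return r3, theta1 + beta
```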

This is really a kind of "hybrid" method due to the use of $u$ and $v$, which are Cartesian coordinates of $v_2$ in a rotated coordinate frame, but I think it qualifies as "without first having to convert them to Cartesian or complex form".


If we assume that the expensive operations are trigonometry and square roots, then David K's "hybrid" solution can be optimized down to two trigonometric functions and a square root.

As before, the vectors to be summed are $(\theta_1,r_1)$ and $(\theta_2,r_2)$. Without loss of generality we can assume that $r_1\ge r_2$ and $\alpha=\theta_2-\theta_1 \in [0,\pi]$. (If the latter condition does not hold, negate all the angles before adding and negate the resulting angle afterward.)

Compute $u=r_2\cos\alpha$. Now, by the law of cosines, $$ r_3= \sqrt{r_1^2 + r_2^2 + 2r_1r_2\cos\alpha} =\sqrt{r_1(r_1+2u) + r_2^2}$$ For the angle, instead of using an arctangent (for which we would need $\sin\alpha$), we can find the cosine of twice the angle without any additional expensive operations. Namely, the cosine is the real part of $$ \left(\frac{r_1+r_2(\cos\alpha+i\sin\alpha)}{r_3}\right)^2 = \frac{r_1^2+r_2^2(\cos^2\alpha-\sin^2\alpha)+2r_1r_2\cos\alpha + i(\cdots)}{r_3^2}$$ and therefore $$ \beta = \frac12 \arccos \frac{r_1(r_1+2u)-r_2^2 + 2u^2}{r_3^2}$$ and as before $\theta_3=\theta_1+\beta$. The assumption that $r_1\ge r_2$ ensures that $\beta\in[0,\pi/2]$, so the value of the arccosine in $[0,\pi]$ is the one we need to halve.
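
A Python sketch of this variant under the stated assumptions might look like this; the swap and the negation bookkeeping are my way of enforcing $r_1\ge r_2$ and $\alpha\in[0,\pi]$, and the clamp only guards against rounding:

```python
import math

def add_polar_two_trig(r1, theta1, r2, theta2):
    """Sketch of the two-trig variant: one cos, one arccos, one sqrt.

    Assumes the sum is not the zero vector."""
    if r1 < r2:                                   # enforce r1 >= r2 (addition commutes)
        r1, theta1, r2, theta2 = r2, theta2, r1, theta1
    alpha = math.remainder(theta2 - theta1, 2 * math.pi)
    sign = 1.0
    if alpha < 0:                                 # negate all angles before and after
        sign, alpha, theta1 = -1.0, -alpha, -theta1
    u = r2 * math.cos(alpha)
    r3 = math.sqrt(r1 * (r1 + 2 * u) + r2**2)
    # cosine of 2*beta, obtained without computing sin(alpha)
    cos2beta = (r1 * (r1 + 2 * u) - r2**2 + 2 * u**2) / r3**2
    beta = 0.5 * math.acos(max(-1.0, min(1.0, cos2beta)))   # clamp against rounding
    return r3, sign * (theta1 + beta)
```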


But, as Dr. MV points out with more algebraic details, finding $\cos 2\beta$ instead of $\cos\beta$ is really a detour, since we've taken the square root anyway. In the notation of this answer, his computation just sets $$ \beta = \arccos\frac{r_1+u}{r_3} $$ which is even cheaper than the above.

This can also be justified without projecting to the $\theta_1$ axis, simply as using the Law of Cosines once again: $$ \beta = \arccos\frac{r_1^2+r_3^2-r_2^2}{2r_1r_3} $$ since (by the initial application of the Law of Cosines) we have $r_3^2-r_2^2=r_1^2+2r_1u$. This results in a completely non-Cartesian argument.
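
In code this shortcut is only a couple of lines (a sketch assuming $\alpha$ has already been normalized to $[0,\pi]$; the sign question is discussed below):

```python
import math

def magnitude_and_beta(r1, r2, alpha):
    """Dr. MV's shortcut in code, assuming alpha = theta2 - theta1 has already
    been normalized to [0, pi]; then theta3 = theta1 + beta."""
    u = r2 * math.cos(alpha)
    r3 = math.sqrt(r1 * (r1 + 2 * u) + r2**2)               # nonzero sum assumed
    beta = math.acos(max(-1.0, min(1.0, (r1 + u) / r3)))    # clamp against rounding
    return r3, beta
```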


While correct mathematically, these formulas are quite hard to code properly because of sign ambiguities and branch cuts in the phase. The trouble is that the argument of the arccos does not depend on the sign of the angle difference $\phi_2-\phi_1$; it gives the same answer either way. Thus, when $\phi_1$ is added back to the arccos result to obtain the angle $\phi$ of the sum, the resulting vector has the correct magnitude, but the angle is wrong whenever $\phi_2 \lt \phi_1$.

You can go through and code in the signs, but then you get another set of signs when the angle between the input vectors exceeds $\pi$, and even this is complicated by the branch cut (the difference between the angles must first be reduced to lie within $\pm\pi$). Essentially you need to know whether $\vec{r}_1 \times \vec{r}_2$ is right or left handed (positive or negative), and handle each case correctly.
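
For what it is worth, one way to encode that handedness is to wrap the difference into $[-\pi,\pi]$ and copy its sign onto the arccos result, as in this sketch (it does nothing about the accuracy issue discussed next):

```python
import math

def add_polar_signed_arccos(r1, phi1, r2, phi2):
    """Wrap the angle difference to [-pi, pi] and copy its sign onto the
    arccos result. A sketch only; the near-cancellation accuracy issue remains."""
    d = math.remainder(phi2 - phi1, 2 * math.pi)   # sign of d = handedness of r1 x r2
    u = r2 * math.cos(d)
    r = math.sqrt(r1 * (r1 + 2 * u) + r2**2)       # assumes the sum is nonzero
    beta = math.acos(max(-1.0, min(1.0, (r1 + u) / r)))
    return r, phi1 + math.copysign(beta, d)
```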

You can make this work, as I did this afternoon in a fit of pique. But the answer is less accurate than the Cartesian result, because phase errors accumulate when the vectors nearly add to zero. So: less accurate, plus a lot of fiddly logic that is going to be slow. If you do go down this road, I hope you are unit testing the bejesus out of it.

That said, you can make a chi-squared fitting algorithm work well, because it only needs the square of the vector distance, which avoids all the sign issues. Just use the Law of Cosines and go to town.
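
For example, the squared distance needs nothing but one cosine (a sketch; the function name is mine):

```python
import math

def polar_dist_sq(r1, phi1, r2, phi2):
    """Squared distance |v1 - v2|^2 straight from the Law of Cosines:
    no square root, no inverse trig, no sign headaches."""
    return r1**2 + r2**2 - 2 * r1 * r2 * math.cos(phi2 - phi1)
```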


The round trip through Cartesian coordinates is

$$r=\sqrt{(r_1\cos(\phi_1)+r_2\cos(\phi_2))^2+(r_1\sin(\phi_1)+r_2\sin(\phi_2))^2},$$ $$\phi=\arccos\left(\frac{r_1\cos(\phi_1)+r_2\cos(\phi_2)}r\right).$$ An easy simplification is possible by temporarily taking $\phi_1$ as the origin of the angles, and letting $\psi_1,\psi_2,\psi:=0,\phi_2-\phi_1,\phi-\phi_1$.

$$r=\sqrt{(r_1+r_2\cos(\psi_2))^2+r_2^2\sin^2(\psi_2)}=\sqrt{r_1^2+2r_1r_2\cos(\psi_2)+r_2^2},$$ $$\psi=\arccos\left(\frac{r_1+r_2\cos(\psi_2)}r\right).$$


For those preferring the arctangent-based angle evaluation,

$$\psi=\arctan\left(\frac{r_2\sin(\psi_2)}{r_1+r_2\cos(\psi_2)}\right),$$ which is slightly more costly, since it requires the sine as well as the cosine of $\psi_2$.
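
In code, a common compromise is to use `atan2` rather than the plain arctangent so that the quadrant of $\psi$ comes out right; a sketch (function name mine):

```python
import math

def add_polar_relative(r1, phi1, r2, phi2):
    """The simplification above, with atan2 in place of the plain arctangent
    so that the quadrant of psi comes out right."""
    psi2 = phi2 - phi1                             # angles measured from phi1
    r = math.sqrt(r1**2 + 2 * r1 * r2 * math.cos(psi2) + r2**2)
    psi = math.atan2(r2 * math.sin(psi2), r1 + r2 * math.cos(psi2))
    return r, phi1 + psi
```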