Curves in $\mathbb{R}^3$
A circle is indeed planar, and has constant nonzero curvature, but the torsion of a circle is zero; it's not an exception.
Having said this, let $\gamma(s)$ be a regular planar curve in $\Bbb R^3$ parametrized by arc length, say $\gamma:I \to \Bbb R^3$, where $I \subset \Bbb R$ is an open interval. That $\gamma(s)$ is regular means
$\dot{\gamma}(s) = \dfrac{d\gamma(s)}{ds} \ne 0 \tag{1}$
for any $s \in I$. If $P \subset \Bbb R^3$ is any plane passing through the point $\vec p_0 \in \Bbb R^3$, then the points $\vec r = (x, y, z) \in P$ satisfy an equation of the form
$(\vec r - \vec p_0) \cdot \vec n = 0, \tag{2}$
where $\vec n$ is the unit normal vector to $P$; that $P$ may be so described is well-known, and will be taken so here without further demonstration. If we insert $\gamma(s) = (\gamma_x(s), \gamma_y(s), \gamma_z(s))$ into (2), we see that
$(\gamma(s) - \vec p_0) \cdot \vec n = 0, \tag{3}$
and differentiating (3) we obtain
$\dot \gamma(s) \cdot \vec n = 0. \tag{4}$
We next recall that
$\dot \gamma(s) = \vec T(s), \tag{5}$
where $\vec T(s)$ is the unit tangent vector to $\gamma(s)$ (a unit vector precisely because $s$ is arc length); then by (4)
$\vec T(s) \cdot \vec n = 0; \tag{6}$
furthermore, by the Frenet-Serret equation
$\dot {\vec T}(s) = \kappa \vec N(s), \tag{7}$
together with the derivative of (6), we have
$\kappa(s) \vec N(s) \cdot \vec n = \dot {\vec T}(s) \cdot \vec n = 0; \tag{8}$
from (8) we see that, as long as $\kappa(s) \ne 0$, that is, as long as $\vec N(s)$ may be defined, we have
$\vec N(s) \cdot \vec n = 0 \tag{9}$
holding as well as (6); thus both $\vec T(s)$ and $\vec N(s)$ are orthogonal to $\vec n$ as long as they are defined. Now $\vec T(s)$ and $\vec N(s)$ form an orthonormal system; that is, $\Vert \vec T(s) \Vert = \Vert \vec N(s) \Vert = 1$ and $\vec T(s) \cdot \vec N(s) = 0$; and since the unit binormal vector along $\gamma(s)$, $\vec B(s) = \vec T(s) \times \vec N(s)$, also satisfies
$\vec B(s) \cdot \vec T(s) = (\vec T(s) \times \vec N(s)) \cdot \vec T(s) = 0, \tag{10}$
$\vec B(s) \cdot \vec N(s) = (\vec T(s) \times \vec N(s)) \cdot \vec N(s) = 0, \tag{11}$
$\vec B(s) \cdot \vec B(s) = 1, \tag{12}$
we may conclude that $\vec B(s)$, being a unit vector orthogonal to both $\vec T(s)$ and $\vec N(s)$, spans the one-dimensional orthogonal complement of $\text{span}\{\vec T(s), \vec N(s)\}$, just as $\vec n$ does; the continuity of $\vec B(s)$ then fixes the sign, so that
$\vec B(s) = \pm \vec n, \tag{13}$
a constant vector, whence $\dot{\vec B}(s) = 0$; thus from
$\dot {\vec B}(s) = -\tau(s) \vec N(s), \tag{14}$
the Frenet-Serret equation for $\dot {\vec B}(s)$, we may infer that
$\tau(s) \vec N(s) = 0 \Rightarrow \tau(s) = 0, \tag{15}$
since $\vec N(s) \ne 0$ wherever it is defined; we have shown that the torsion $\tau(s)$ of any plane curve $\gamma(s)$ vanishes.
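This direction is easy to sanity-check by machine. The following SymPy snippet (the curve is an illustrative planar ellipse tilted in $\Bbb R^3$, not anything taken from the argument above) computes the torsion via the standard formula $\tau = ((\dot\gamma \times \ddot\gamma) \cdot \dddot\gamma)/\Vert \dot\gamma \times \ddot\gamma \Vert^2$, which agrees with the Frenet-Serret $\tau$ for any regular parametrization:

```python
import sympy as sp

t = sp.symbols('t', real=True)

# A non-circular planar curve in R^3: an ellipse lying in the tilted
# plane spanned by the orthonormal vectors u, v (illustrative choice).
u = sp.Matrix([1, 0, 1]) / sp.sqrt(2)
v = sp.Matrix([0, 1, 0])
gamma = sp.cos(t) * u + 2 * sp.sin(t) * v

d1 = gamma.diff(t)   # gamma'
d2 = d1.diff(t)      # gamma''
d3 = d2.diff(t)      # gamma'''

# Torsion via tau = ((g' x g'') . g''') / |g' x g''|^2, valid for any
# regular parametrization, not just arc length.
cross = d1.cross(d2)
tau = cross.dot(d3) / cross.dot(cross)
assert sp.simplify(tau) == 0
print(sp.simplify(tau))  # 0
```

The curve is regular ($\Vert \dot\gamma \Vert^2 = \sin^2 t + 4\cos^2 t \ge 1$), so the computation is legitimate everywhere.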
Of course, there are a couple of caveats in the above argument, most notably the assumptions of regularity (so that $\vec T(s)$ exists), and non-vanishing curvature (so that $\vec N(s)$ exists); but I think these can be covered pretty easily; I'll defer the discussion until after we have handled the "if" direction of the assertion's logic.
So suppose $\tau(s) = 0$; that is, that $\gamma(s)$ is a regular curve in $\Bbb R^3$ with vanishing torsion. Then $\vec B(s)$ must be constant along $\gamma(s)$, by (14); choosing $s_0 \in I$ we have $\vec B(s_0) = \vec B(s)$ for all $s \in I$; thus by (10)-(11) $\vec T(s)$ and $\vec N(s)$ both belong to the subspace $V \subset \Bbb R^3$ with $\vec B(s_0) \bot V$; indeed, we may take this subspace to be spanned by $\vec T(s_0)$, $\vec N(s_0)$, since they form an orthonormal pair in $V$; $V = \text{span} \{ \vec T(s_0), \vec N(s_0) \}$. This being the case, we may write
$\dot {\gamma}(s) = \vec T(s) = \langle \vec T(s), \vec T(s_0) \rangle \vec T(s_0) + \langle \vec T(s), \vec N(s_0) \rangle \vec N(s_0); \tag{16}$
upon integrating (16) we find
$\gamma(s) - \gamma(s_0) = \displaystyle \int_{s_0}^s \dot {\gamma}(u) du = \int_{s_0}^s \vec T(u) du$ $= \left (\displaystyle \int_{s_0}^s \langle \vec T(u), \vec T(s_0) \rangle du \right ) \vec T(s_0) + \left (\displaystyle \int_{s_0}^s \langle \vec T(u), \vec N(s_0) \rangle du \right ) \vec N(s_0), \tag{17}$
which implies that
$(\gamma(s) - \gamma(s_0)) \cdot \vec B(s_0)$ $ = \left ( \displaystyle \int_{s_0}^s \langle \vec T(u), \vec T(s_0) \rangle du \right ) \langle \vec T(s_0), \vec B(s_0) \rangle$ $+ \left ( \displaystyle \int_{s_0}^s \langle \vec T(u), \vec N(s_0) \rangle du \right ) \langle \vec N(s_0), \vec B(s_0) \rangle = 0 \tag{18}$
for all $s \in I$; but the equation of the plane normal to $\vec B(s_0)$ passing through the point $\gamma(s_0)$ is in fact
$(\vec r - \gamma(s_0)) \cdot \vec B(s_0) = 0, \tag{19}$
where $\vec r = (x, y, z)$. Thus $\gamma(s)$ lies in this plane. QED.
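Here too a quick symbolic check is possible. Taking a curve that lies in the plane spanned by an orthonormal pair $u, v$ (an illustrative choice), SymPy confirms that the binormal direction $\dot\gamma \times \ddot\gamma$ stays parallel to $u \times v$, and that $(\gamma(t) - \gamma(t_0)) \cdot (u \times v) = 0$ for all $t$, in the spirit of (18)-(19):

```python
import sympy as sp

t = sp.symbols('t', real=True)

# A curve confined to the plane spanned by the orthonormal pair u, v
# (an illustrative choice, not the only one possible).
u = sp.Matrix([1, 0, 1]) / sp.sqrt(2)
v = sp.Matrix([0, 1, 0])
gamma = sp.cos(t) * u + 2 * sp.sin(t) * v

d1 = gamma.diff(t)
d2 = d1.diff(t)

# The binormal direction g' x g'' is a constant multiple of u x v ...
B_dir = d1.cross(d2)
n = u.cross(v)
assert sp.simplify(B_dir.cross(n)) == sp.zeros(3, 1)

# ... so the curve never leaves the plane through gamma(0) normal to n.
assert sp.simplify((gamma - gamma.subs(t, 0)).dot(n)) == 0
print("planarity checks pass")
```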
We can actually take things a step further and present concise formulas for $\vec T(s)$ and $\vec N(s)$ in terms of $\displaystyle \int_{s_0}^s \kappa(u)du$ as follows: When $\tau(s) = 0$, the Frenet-Serret equations become
$\dot{\vec T}(s) = \kappa(s) \vec N(s), \tag{20}$
$\dot{\vec N}(s) = -\kappa(s) \vec T(s), \tag{21}$
and
$\dot {\vec B}(s) = 0. \tag{22}$
(22) implies $\vec B(s)$ is constant; inspecting (20)-(21) reveals that they may be written in combined form by introducing the six-dimensional column vector $\vec \Theta(s)$:
$\vec \Theta(s) = (\vec T(s), \vec N(s))^T, \tag{23}$
so that
$\dot {\vec \Theta}(s) = (\dot {\vec T}(s), \dot {\vec N}(s))^T; \tag{24}$
with this convention, (20)-(21) may be written
$\dot {\vec {\Theta}}(s) = \begin{bmatrix} 0 & \kappa(s)I_3 \\ -\kappa(s)I_3 & 0 \end{bmatrix} \vec {\Theta}(s) = \kappa(s) J \vec{\Theta}(s), \tag{25}$
where $I_3$ is the $3 \times 3$ identity matrix and
$J = \begin{bmatrix} 0 & I_3 \\ -I_3 & 0 \end{bmatrix}; \tag{26}$
here it is understood that $J$ is presented in the form of $3 \times 3$ blocks. It is easy to see that
$J^2 = \begin{bmatrix} 0 & I_3 \\ -I_3 & 0 \end{bmatrix}\begin{bmatrix} 0 & I_3 \\ -I_3 & 0 \end{bmatrix} = \begin{bmatrix} -I_3 & 0 \\ 0 & -I_3 \end{bmatrix} = -I_6, \tag{27}$
$I_6$ being the $6 \times 6$ identity matrix. Careful scrutiny of (25) suggests that
$\vec \Theta(s) = \exp \left (\left ( \displaystyle \int_{s_0}^s \kappa(u) du \right ) J \right ) \vec \Theta(s_0) \tag{28}$
might be its unique solution taking the value $\vec \Theta(s_0)$ at $s = s_0$; indeed, we may differentiate (28) with respect to $s$ to obtain
$\dot {\vec \Theta}(s) = \dfrac{d}{ds}\left (\displaystyle \int_{s_0}^s \kappa(u)du \right )J\exp \left (\left ( \displaystyle \int_{s_0}^s \kappa(u) du \right ) J \right ) \vec \Theta(s_0) = \kappa(s) J \vec \Theta(s), \tag{29}$
showing that (28) satisfies (25); furthermore (28) is consistent with the initial condition at $s = s_0$;
$\vec \Theta (s_0) = \exp \left (\left ( \displaystyle \int_{s_0}^{s_0} \kappa(u) du \right ) J \right ) \vec \Theta(s_0) = e^{0J} \vec \Theta(s_0) = \vec \Theta(s_0). \tag{30}$
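For readers who like numerical corroboration: the sketch below (the curvature profile $\kappa(s) = 1 + s^2/2$ is chosen purely for illustration) integrates (25) with a fourth-order Runge-Kutta scheme and compares the result against the closed form (28), the matrix exponential being summed directly from its power series:

```python
import numpy as np

# Build J in 3x3 blocks, as in (26), and confirm J^2 = -I_6 from (27).
I3 = np.eye(3)
Z3 = np.zeros((3, 3))
J = np.block([[Z3, I3], [-I3, Z3]])
assert np.array_equal(J @ J, -np.eye(6))

def kappa(s):
    # Illustrative curvature profile (any continuous kappa would do).
    return 1.0 + 0.5 * s**2

def expm_series(A, terms=40):
    # Matrix exponential summed from its power series; adequate here
    # because the exponent has small norm.
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

# Theta(s0): T(s0) = e1 and N(s0) = e2 stacked into a 6-vector, cf. (23).
theta0 = np.array([1.0, 0, 0, 0, 1.0, 0])

# Fourth-order Runge-Kutta integration of Theta' = kappa(s) J Theta, cf. (25).
N_steps, h = 1000, 1.0 / 1000
theta = theta0.copy()
for i in range(N_steps):
    s = i * h
    k1 = kappa(s) * (J @ theta)
    k2 = kappa(s + h / 2) * (J @ (theta + h / 2 * k1))
    k3 = kappa(s + h / 2) * (J @ (theta + h / 2 * k2))
    k4 = kappa(s + h) * (J @ (theta + h * k3))
    theta = theta + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Closed form (28): the integral of kappa over [0, 1] is 1 + 1/6.
closed = expm_series((1.0 + 1.0 / 6.0) * J) @ theta0
assert np.allclose(theta, closed, atol=1e-8)
print("ODE solution matches the matrix exponential")
```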
It is worth pointing out that the reason (28) works as a solution is basically that the $s$-derivative of the matrix $\exp \left (\left ( \displaystyle \int_{s_0}^s \kappa(u) du \right ) J \right )$ follows the scalar pattern
$\dfrac{d}{ds}e^{u(s)} = \dfrac{du(s)}{ds}e^{u(s)}, \tag{31}$
viz.
$\dfrac{d}{ds}\exp \left (\left ( \displaystyle \int_{s_0}^s \kappa(u) du \right ) J \right ) = \dfrac{d}{ds}\left (\displaystyle \int_{s_0}^s \kappa(u) du \right)J\exp \left (\left ( \displaystyle \int_{s_0}^s \kappa(u) du \right ) J \right )$ $= \kappa(s)J\exp \left (\left ( \displaystyle \int_{s_0}^s \kappa(u) du \right ) J \right ). \tag{32}$
(32) applies by virtue of the fact that $\left ( \displaystyle\int_{s_0}^s \kappa(u) du \right)J$ and its derivative $\kappa(s) J$ commute with one another, being scalar function multiples of the same matrix $J$; for general matrix functions $A(s)$, it is not true that $A'(s)A(s) = A(s)A'(s)$, and the evaluation of $(d/ds)e^{A(s)}$ becomes much more complicated; we do not in general have
$\dfrac{d}{ds}e^{A(s)} = \dfrac{dA(s)}{ds}e^{A(s)} \tag{33}$
in parallel with the scalar formula (31); the interested reader may consult my answer to this question (especially the material surrounding equations (15)-(20)) for a more detailed discussion. However, under the special circumstances that $A(s) = f(s)B$ for a constant matrix $B$, then $A'(s) = f'(s)B$ and $A(s)A'(s) = f(s)f'(s)B^2 = A'(s)A(s)$; $A(s)$ and its derivative always commute in this special case, which is what we have here. (32) applies and thus we have that (28) solves (25).
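The commutation point is easy to illustrate numerically: any two scalar multiples of $J$ commute, whereas two generic matrices need not (the particular matrices below are arbitrary illustrative choices):

```python
import numpy as np

# Two scalar multiples of the same matrix J always commute, which is
# what licenses (32); two generic matrices need not, which is why (33)
# fails in general.
I3 = np.eye(3)
Z3 = np.zeros((3, 3))
J = np.block([[Z3, I3], [-I3, Z3]])

A1, A2 = 0.7 * J, -1.3 * J                  # the situation in (32)
assert np.allclose(A1 @ A2, A2 @ A1)        # scalar multiples of J commute

P = np.array([[0.0, 1.0], [0.0, 0.0]])      # an arbitrary non-commuting pair
Q = np.array([[0.0, 0.0], [1.0, 0.0]])
assert not np.allclose(P @ Q, Q @ P)
print("commutation demo passes")
```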
We examine the matrix $\exp \left (\left ( \displaystyle \int_{s_0}^s \kappa(u) du \right ) J \right )$ occurring in (28) with an eye to determining its structure, and the structure of the solutions to (25). That $J^2 = - I_6$ has been noted. Thus we have
$J^2 = -I_6; \; \; J^3 = J^2J = -J; \;\;$ $J^4 = J^3J= -J^2 = I_6; \; \; J^5 = (J^4)J = I_6J = J, \tag{34}$
and in general,
$J^{4n + p} = J^{4n}J^p = (J^4)^nJ^p = (I_6)^n J^p = J^p, \tag{35}$
which shows that every nonnegative power $J^m$ reduces to one of the cases in (34), i.e. to $J^p$ with $0 \le p \le 3$. If we expand the matrix $\exp \left (\left ( \displaystyle \int_{s_0}^s \kappa(u) du \right ) J \right )$ as a power series
$\exp \left (\left ( \displaystyle \int_{s_0}^s \kappa(u) du \right ) J \right )$ $= \displaystyle \sum_{n=0}^\infty \dfrac{\left ( \left (\displaystyle \int_{s_0}^s \kappa(u) du \right )J \right )^n}{n!} = \sum_{n=0}^\infty \dfrac{\left (\displaystyle \int_{s_0}^s \kappa(u) du \right )^nJ^n}{n!}, \tag{36}$
we may decompose the right-hand sum in accord with the periodicity relations of the powers of $J$, (34)-(35), as follows: we first observe that the pattern of powers of $J$ is essentially the same as that of the powers of the ordinary complex number $i$, viz.,
$i^2 = -1; \; \; i^3 = i^2i = -i; \;\;$ $i^4 = i^3i = -i^2 = 1; \; \; i^5 = (i^4)i = 1i = i, \tag{37}$
$i^{4n + p} = i^{4n}i^p = (i^4)^ni^p = 1^n i^p = i^p, \tag{38}$
which give rise to the well-known Euler formula
$e^{ix} = \cos x + i\sin x \tag{39}$
for
$e^{ix} = \displaystyle \sum_{n=0}^\infty \dfrac{(ix)^n}{n!} \tag{40}$
when this sum is split into its real and imaginary parts; that is,
$\Re[e^{ix}] = \cos x = \displaystyle \sum_{n=0}^\infty (-1)^n \dfrac{x^{2n}}{(2n)!} \tag{41}$
and
$\Im[e^{ix}] = \sin x = \displaystyle \sum_{n=0}^\infty (-1)^n \dfrac{x^{2n + 1}}{(2n + 1)!}. \tag{42}$
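Splitting (36) by the parity of $n$ in exactly the same way suggests the $J$-analogue of Euler's formula, $\exp(xJ) = \cos(x)\, I_6 + \sin(x)\, J$ (this is the anticipated conclusion, stated here only as a claim to be verified); a quick NumPy check:

```python
import numpy as np

# The parity split (41)-(42) applied to the series (36) suggests the
# J-analogue of Euler's formula: exp(x J) = cos(x) I_6 + sin(x) J.
I3 = np.eye(3)
Z3 = np.zeros((3, 3))
J = np.block([[Z3, I3], [-I3, Z3]])

def expm_series(A, terms=60):
    # Matrix exponential summed directly from its power series.
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

for x in (0.3, 1.0, 2.5):
    lhs = expm_series(x * J)
    rhs = np.cos(x) * np.eye(6) + np.sin(x) * J
    assert np.allclose(lhs, rhs)
print("Euler analogue verified for sample values of x")
```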
To be continued/completed; stay tuned!!?