When did the trig functions move from the right triangle to the unit circle?

I have to write a paper about the unit circle and I'm trying to uncover some of its origins.

Also, when were the trig functions extended to angles greater than $90^{\circ}$, and what was the rationale behind it? And why mirror the right triangle across the axis instead of just moving the base of the right triangle to the next axis $(+x\to+y\to-x\to-y\to+x)$?

I had no luck trying to find that online, and all the books that I looked into would jump right into the practical part of it.


Solution 1:

[I have no idea about the history --though I do have interest, and would like the OP to post whatever his research reveals-- but this is how I like to approach the transition pedagogically. Pardon the length; I'll attempt to edit things down later.]

We'll enter the discussion at the point where we agree that $\sin\theta$ and $\cos\theta$ are perfectly well-understood for each acute (and, we'll say, non-zero) angle $\theta$, via a right triangle with $\theta$ at one non-right vertex. (We avoid $\theta=0^{\circ}$ for the same reason we avoid $\theta=90^{\circ}$: something doesn't seem quite proper about the triangle involved.) The lore of First Quadrant Trig is fairly rich, with plenty of identities and formulas, many (most? all?) of which have picture-proofs. For instance, there's a lot to learn from the similar and right triangles in the figure I call the "Complete Triangle"; also, the "Law of Cosines" (for acute triangles, anyway); even the power series representations of at least four of the six trig functions.

Most importantly here, First Quadrant Trig includes the elegantly-illustrated Angle-Sum and Angle-Difference Formulas (taken from one of my previous answers):

[Figure: Picture Proof of the Angle-Sum and Angle-Difference Formulas]
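
For reference, since the figure itself isn't reproduced here, the formulas those pictures illustrate are the familiar ones:

$$\begin{align} \sin(\alpha\pm\beta) &= \sin\alpha\cos\beta \pm \cos\alpha\sin\beta \\ \cos(\alpha\pm\beta) &= \cos\alpha\cos\beta \mp \sin\alpha\sin\beta \end{align}$$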

Of course, the figures only seem to make sense for $\alpha$, $\beta$, $\alpha+\beta$, and $\alpha-\beta$ (strictly) between $0^{\circ}$ and $90^{\circ}$ ... but when they make sense, boy do they make sense! Interestingly, the formulas they represent allow our knowledge to expand beyond the comfort zone of the First Quadrant.

For instance, while we may or may not have felt comfortable defining sine and cosine for a right angle (You can't have two right angles in a triangle!), the right-hand sides of the Angle Addition Formulas have no problem churning out consistent values when $\beta$ is the complement of $\alpha$ (if you will, the "co-$\alpha$"), even if the left-hand sides seem nonsensical:

$$\begin{align} \text{“}\sin 90^\circ\text{''} &= \sin{(\alpha+\text{co-}\alpha)} \\ &= \sin\alpha \cos(\text{co-}\alpha)+\cos\alpha\sin(\text{co-}\alpha) = \sin^2\alpha + \cos^2\alpha = 1 \\ \text{“}\cos 90^\circ\text{''} &= \cos{(\alpha+\text{co-}\alpha)} \\ &= \cos\alpha \cos(\text{co-}\alpha)-\sin\alpha\sin(\text{co-}\alpha) = \cos\alpha \sin\alpha - \sin\alpha \cos\alpha = 0 \\ \end{align}$$

What these formulas tell us is that, if the symbols "$\sin 90^\circ$" and "$\cos 90^\circ$" are going to mean anything (and still be consistent with what we've come to understand about Angle Addition), they must mean "$1$" and "$0$", respectively. Likewise, the Angle Subtraction Formulas (with $\beta = \alpha$) provide the other edge-case values:

$$\begin{align} \text{“}\sin 0^\circ\text{''} &= \sin{(\alpha-\alpha)} = \sin\alpha \cos\alpha-\cos\alpha\sin\alpha = 0 \\ \text{“}\cos 0^\circ\text{''} &= \cos{(\alpha-\alpha)} = \cos\alpha \cos\alpha+\sin\alpha\sin\alpha = 1 \end{align}$$

Of course, there's always the possibility that these symbols really don't mean anything. However, there don't seem to be any obvious contradictions with known First Quadrant Lore; indeed, the purported right angle values allow the Angle Subtraction Formula to re-confirm what we knew "from definition":

$$\sin(\text{co-}\theta)=\cos\theta \qquad \cos(\text{co-}\theta)=\sin\theta$$

and the ostensible zero angle values are certainly consistent with common sense statements like

$$\sin(\theta+0)=\sin\theta \qquad \cos(\theta-0) = \cos\theta$$

All things considered, there's not much controversy here, so we have little problem augmenting our strict right-triangle definition of trig functions and accepting the computed values for right angles and zero angles.
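
If a numerical check is more convincing than symbol-pushing, here's a quick throwaway sketch in Python (the helper name is just for illustration) showing that the right-hand sides above give the same boundary values no matter which acute $\alpha$ we feed them:

```python
import math

def boundary_values(alpha_deg):
    """Evaluate the angle-sum/difference right-hand sides using only
    first-quadrant sines and cosines of alpha and its complement."""
    a = math.radians(alpha_deg)
    co_a = math.radians(90 - alpha_deg)          # the complement, "co-alpha"
    sin90 = math.sin(a) * math.cos(co_a) + math.cos(a) * math.sin(co_a)
    cos90 = math.cos(a) * math.cos(co_a) - math.sin(a) * math.sin(co_a)
    sin0  = math.sin(a) * math.cos(a) - math.cos(a) * math.sin(a)
    cos0  = math.cos(a) * math.cos(a) + math.sin(a) * math.sin(a)
    return sin90, cos90, sin0, cos0

for alpha in (10, 37, 62.5, 89):                 # any acute angle works
    print(alpha, [round(v, 12) for v in boundary_values(alpha)])
    # each row prints alpha and values numerically equal to [1, 0, 0, 1]
```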

In the same way, we can use the Formulas (and our newly-christened right angle values) to explore the Second Quadrant: we simply throw our First Quadrant angles over the wall. For example,

$$\begin{align} \sin(\theta+90^{\circ}) &= \sin\theta \cos 90^{\circ} + \cos\theta \sin 90^{\circ} = \cos\theta \end{align}$$

This result makes some sense: The "vertical shadow" of a unit segment rotated to angle $\theta + 90^{\circ}$ matches the "horizontal shadow" of a unit segment rotated only to angle $\theta$; I bet it works the other way, too ... Hey, waydaminnit ...

$$\begin{align} \cos(\theta+90^{\circ}) &= \cos\theta \cos 90^{\circ} - \sin\theta \sin 90^{\circ} = -\sin\theta \end{align}$$

... We get a negative? What's up with that?

At this point, we have two choices: retreat from the insanity, or embrace it. It turns out that the latter is the better course to take here: that single, tiny, intuition-shattering negative sign is the key to understanding how First Quadrant Trig extends to All Quadrants Trig.
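
Before pressing on, a quick numerical sanity check of that surprising sign (again, just a throwaway Python sketch): the values built from the Angle Addition Formulas match the "shadows" of a unit segment actually rotated past the First Quadrant.

```python
import math

theta_deg = 25                                   # any acute angle will do
t = math.radians(theta_deg)

# Angle Addition Formulas, using only first-quadrant data
# plus the newly accepted values sin 90 = 1 and cos 90 = 0.
sin_shifted = math.sin(t) * 0 + math.cos(t) * 1  # equals  cos(theta)
cos_shifted = math.cos(t) * 0 - math.sin(t) * 1  # equals -sin(theta)

# Coordinates ("horizontal/vertical shadows") of the unit segment
# actually rotated to theta + 90 degrees.
x, y = math.cos(t + math.pi / 2), math.sin(t + math.pi / 2)

print(sin_shifted, y)   # both ~  0.9063  ( =  cos 25 degrees)
print(cos_shifted, x)   # both ~ -0.4226  ( = -sin 25 degrees)
```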

You know the story from here: We use the Angle Addition Formulas to push from the Second Quadrant to the Third, to the Fourth, and beyond; and use the Angle Subtraction Formulas to assign trig values to negative angles. And we begin to make interesting observations that bolster our confidence in these values (a small numerical sketch follows the list):

  • sine values are signed just like $y$ coordinates in each quadrant; cosine values just like $x$ coordinates; kinda convenient, that.
  • values repeat as we go all the way around the circle, because Angle Addition ultimately assures $\sin(\theta+360^{\circ}k) = \sin\theta$ and $\cos(\theta+360^{\circ}k) = \cos\theta$.
  • the triangle area formula $\frac{1}{2} a b \sin C$ now works for any-size $C$
  • the Law of Sines and Law of Cosines work for any triangle
  • the way is paved for complex exponentials, non-Euclidean geometry, Fourier analysis, etc, etc, etc.

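Here's the sketch promised above: a toy routine (the name ext_sin_cos is just for illustration) that only ever evaluates sine and cosine on first-quadrant angles, extends them everywhere else by repeating the $90^{\circ}$ Angle Addition step, and checks the results against the library values:

```python
import math

def ext_sin_cos(theta_deg):
    """Extend sine/cosine to all angles using only the identities
    sin(t + 90) = cos(t) and cos(t + 90) = -sin(t), applied to a
    reduced first-quadrant angle."""
    theta_deg %= 360                              # 360-degree periodicity
    quarters, r = divmod(theta_deg, 90)           # whole 90-degree steps + remainder
    s, c = math.sin(math.radians(r)), math.cos(math.radians(r))
    for _ in range(int(quarters)):                # one quarter turn at a time
        s, c = c, -s                              # (sin, cos) -> (cos, -sin)
    return s, c

for theta in (30, 150, 210, 330, 390, -45):
    s, c = ext_sin_cos(theta)
    assert math.isclose(s, math.sin(math.radians(theta)), abs_tol=1e-12)
    assert math.isclose(c, math.cos(math.radians(theta)), abs_tol=1e-12)
    print(theta, round(s, 4), round(c, 4))        # signs match the quadrant of theta
```

That single sign swap in `s, c = c, -s` is exactly the "tiny, intuition-shattering negative sign" doing its work, one quadrant at a time.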
It's a good thing we didn't let that negative sign scare us off!

Sure, we're forced to abandon the idea that the sine and cosine of an arbitrary $\theta$ should come from right triangles with angle $\theta$, but the gains from our expanded perspective more than make up for that. We all out-grow the training wheels sometime.

As I've admitted, I don't know how this conceptual progression matches with the actual history of trigonometry's development. However, I like using this approach as an object lesson in how mathematics often advances: we play around with intuitively-appealing notions, understand the heck out of that stuff by observing patterns, and let those observations guide exploration beyond our intuition's limits. The tail, as they say, wags the dog.

Of course, there are other engines driving mathematical advancement, too, but this one seems to appeal to students. It helps make the case that math is dynamic, subject to refinement (or overhaul) with every new discovery, and that its study is a never-ending and not-always-predictable journey.

Solution 2:

@Luiz: Early trigonometry was based on a circle of large radius (chosen large to avoid fractional table entries). Sines and cosines were line segments in that circle, and tables listed the lengths of those segments for angles between $0^{\circ}$ and $90^{\circ}$. For a detailed history of trigonometry up to 1550, see The Mathematics of the Heavens and the Earth: The Early History of Trigonometry by Glen Van Brummelen, Princeton University Press, 2009.

The year 1748 was a watershed in the history of trigonometry, for in that year Euler published his Introductio in analysin infinitorum (English translation by John Blanton, Introduction to Analysis of the Infinite, Springer 1988). Euler changed the objects studied in calculus from curves to functions. In Chapter VIII, he introduces trigonometric FUNCTIONS on the unit circle. If you read this chapter, it all seems ho-hum; that is because we have adopted Euler's approach. At the time, it was revolutionary.

This change allowed Euler to differentiate the trigonometric functions. See his Institutiones calculi differentialis, 1755, section 201 (English translation by Blanton, Foundations of Differential Calculus, Springer 2000). Amazingly, although both Newton and Leibniz had power series for sine and cosine, they did not differentiate them; there was no curve, so there was no tangent line to find.

Euler's approach was quickly adopted by research mathematicians and led to new fields of mathematics such as Fourier series and Cantor's work on set theory.

Solution 3:

This is only a partial answer. One wishes to study periodic phenomena; one uses Fourier series; and those involve periodic trigonometric functions.

Think of a spinning wheel that rotates at $1$ degree per second. How much has it rotated in the past $10$ hours? Certainly more than a right angle: indeed, many revolutions. Rotating wheels are things that need to be understood mathematically, and trigonometric functions extended to domains larger than from zero to a right angle are exactly what does the job.
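
For concreteness, the arithmetic behind "many revolutions":

$$10 \text{ hours} = 36{,}000 \text{ seconds} \;\Longrightarrow\; 36{,}000^{\circ} = 100 \text{ full revolutions}.$$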

Heat flow and diffusion are not periodic phenomena, but Fourier showed how Fourier series involving periodic trigonometric functions are also the key to understanding those.

Norman Wildberger's "rational trigonometry", described in his book of that title, might be seen as what's left of "our" trigonometry after you discard the particular parametrization of the circle that we use, i.e. parametrization by arc length, and don't replace it with any other. That means you don't speak of circles or of measures of angles much, if at all. You do speak of the "spread" between two lines. The spread is the square of the sine of the angle between them, but of course that's not how it's defined from within the subject; it can be characterized as a certain rational function of the slopes.
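
For the curious, one standard way to write that rational function (for two lines with finite slopes $m_1$ and $m_2$) is

$$s = \frac{(m_1-m_2)^2}{(1+m_1^2)(1+m_2^2)},$$

which is indeed the square of the sine of the angle between the lines, even though no angle measure or arc length appears in the formula itself.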