Could trigonometry exist in one dimension?

Even though trigonometry is based on circles and angles, both of which normally live in two dimensions, could it also exist in one dimension?

This question probably sounds really weird to you, but I was wondering: in the same way that multiplication is sometimes visualised in two dimensions as area, it can also be visualised in one dimension, as scaling. Could something like that work for trigonometry?

Maybe somewhere in the direction of waves? If you don't understand, ask me. Keep in mind I'm just a 16-year-old high school student.


Edit: OK, thanks for your answers, guys! I realise now that spatial dimensions aren't required at all for the idea of trigonometry, because the whole concept of "two-dimensional space" can be reduced either to a "matrix system" that compresses 2D space into a set of rules which can exist on their own (albeit rules that would seem very strange to a 1D life form), or to one space dimension plus a time dimension.

This was all an enlightening experience for me; it turns out math doesn't even require any dimensions to work. Very cool!!! (But what about geometry?)


Solution 1:

This isn’t a weird question, and you don’t need circles or anything two-dimensional to have sines and cosines and everything non-geometric about trigonometry. It does help to have an idea of time in addition to the idea of numbers, which is sort of like having an extra dimension, but not the geometric one in the $xy$-plane.

Imagine that you have a little metal spring that’s 1" long, with one end attached permanently at the point $0$ on the number line. You stretch it so that its free end sits at $1.1$, and then you let go.

Assuming there’s no friction or anything else going on (the spring isn’t heating up and taking energy from the system, for example), it will just keep oscillating left and right forever, like a wave. The way it oscillates depends on the properties of the spring, but if it’s a pretty normal spring that obeys the physical principle called Hooke’s law, the oscillation is what’s called “simple harmonic motion,” and the free end of the spring goes back and forth between the points $0.9$ and $1.1$ like clockwork.
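If you'd like to watch this happen without invoking any trigonometry, here is a minimal numerical sketch (Python; the stiffness, time step, and unit mass are arbitrary choices for illustration, not part of the answer) that just applies Hooke's law over and over:

```python
import math

# A minimal sketch of Hooke's-law motion, integrated with semi-implicit Euler.
# The stiffness k, the time step, and the unit mass are arbitrary assumptions.
k = 100.0          # spring stiffness per unit mass (arbitrary)
rest = 1.0         # resting position of the free end
x, v = 1.1, 0.0    # start stretched to 1.1, released from rest
dt = 1e-4          # time step in seconds

for step in range(20000):            # about three full oscillations
    if step % 2000 == 0:
        print(f"t = {step * dt:.2f} s, x = {x:.3f}")
    a = -k * (x - rest)              # Hooke's law: restoring force ~ -displacement
    v += a * dt                      # semi-implicit Euler update
    x += v * dt
```

Every printed position stays between $0.9$ and $1.1$, even though the code never mentions a sine or a cosine.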

Imagine that the spring is just the right stiffness so that it takes $0.1$ seconds for the free end to move from $1.1$ down to $0.9$ and back. If you decided this was pretty exciting and you wanted to describe the position with a function, you might define the springy function $f(s)$ so that the position of the free end $s$ seconds after you let go is exactly $f(s)$. If you also defined a function $g(s)$ for the speed after $s$ seconds, and you studied these functions a lot, you’d eventually figure out that there was a special number $p$ so that $(p\cdot (f(s)-1))^2+g(s)^2$ was constant, no matter what $s$ was. And if you calculated $p$, you’d find out it was 62.831853 (does that look like a multiple of anything familiar?) and discover all kinds of interesting things about $f$ and $g$.
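Here's a quick numerical check of that claim (a hedged sketch: it assumes the motion is the simple harmonic one, $f(s) = 1 + 0.1\cos(20\pi s)$, which has exactly the amplitude and period described above):

```python
import math

p = 20 * math.pi                    # the special number, ~62.831853

def f(s):
    # position of the free end: oscillates between 0.9 and 1.1, period 0.1 s
    return 1 + 0.1 * math.cos(20 * math.pi * s)

def g(s):
    # velocity of the free end (derivative of f); squaring makes the sign irrelevant
    return -0.1 * 20 * math.pi * math.sin(20 * math.pi * s)

for s in [0.0, 0.013, 0.04, 0.31, 1.7]:
    # this combination comes out the same for every s: (2*pi)^2 ~ 39.478
    print(s, (p * (f(s) - 1)) ** 2 + g(s) ** 2)
```

The constant $(2\pi)^2$ that it prints is this one-dimensional setup's version of the identity $\cos^2 + \sin^2 = 1$.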

Solution 2:

The concepts of angles, sines, cosines, etc. are well established in $\mathbb{R}^n$ for $n\geq 2$ when talking about the usual geometric interpretation of angles between straight lines, using what we refer to as the "dot product" between two vectors in $\mathbb{R}^n$: for vectors $u$ and $v$ we have $u\cdot v = \|u\|\|v\|\cos\theta$.

A specific example of the dot product is the case of a 3-4-5 triangle. The long leg can be thought of as going 4 spaces to the right, and the hypotenuse as going 4 spaces to the right and 3 spaces up. You then have $[4,0]\cdot[4,3] = 4\cdot4 + 0\cdot 3 = 16$. Also, $[4,0]\cdot[4,3] = \|[4,0]\|\|[4,3]\|\cos\theta = \sqrt{4^2 + 0^2}\cdot \sqrt{4^2 + 3^2}\cdot \cos\theta = 4\cdot 5\cos\theta$. Combining these pieces of information, you have $16 = 20 \cos\theta$, so $\cos\theta = \frac{4}{5}$, agreeing with our definition of the cosine of an angle as adjacent over hypotenuse.
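Here is the same computation as a few lines of code (a generic sketch in plain Python, not tied to any particular library):

```python
import math

def dot(u, v):
    # dot product: sum of coordinate-wise products
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    # Euclidean length of a vector
    return math.sqrt(dot(u, u))

u, v = [4, 0], [4, 3]    # long leg and hypotenuse of the 3-4-5 triangle
cos_theta = dot(u, v) / (norm(u) * norm(v))
print(cos_theta)                             # 0.8, i.e. 4/5
print(math.degrees(math.acos(cos_theta)))    # ~36.87 degrees
```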

Through abuse of notation, or simply to keep the number of technical terms down, we can also refer to "angles" and such from a purely algebraic point of view. The dot product from before is a specific example of what we call an "inner product," and a vector space which has an inner product is called an "inner product space."

Inner products are commonly denoted $\langle \cdot, \cdot \rangle$, and you have the well-known Cauchy–Schwarz inequality $\langle u, v\rangle^2 \leq\langle u,u\rangle\langle v,v\rangle = \|u\|^2\|v\|^2$. This inequality guarantees that the ratio $\frac{\langle u, v\rangle}{\|u\|\|v\|}$ always lies in $[-1,1]$, so we can define $\cos\theta$ to be exactly that ratio; in other words, $\theta = \arccos\left(\frac{\langle u, v\rangle}{\|u\|\|v\|}\right)$. It is through this definition of an angle $\theta$ that we can talk about the "angle" between elements of our inner product space, whether they are vectors, numbers, functions, sequences, or whatever other abstract objects we are working with at the time.
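To see how far that definition reaches, here is a sketch that treats functions on $[0,1]$ as the "vectors," using one standard (but here assumed) inner product, $\langle f, g \rangle = \int_0^1 f(x)\,g(x)\,dx$, approximated with a midpoint rule:

```python
import math

def inner(f, g, n=100_000):
    # <f, g> = integral of f(x) * g(x) over [0, 1], midpoint-rule approximation
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n))

f = lambda x: 1.0    # the constant function 1
g = lambda x: x      # the identity function

cos_theta = inner(f, g) / math.sqrt(inner(f, f) * inner(g, g))
print(math.degrees(math.acos(cos_theta)))    # ~30 degrees
```

The "angle" between the functions $1$ and $x$ comes out as $\pi/6$, even though no picture of two lines meeting at a corner is anywhere in sight.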

Inner product spaces come in a wide variety, and they can be of any dimension (including one-dimensional and infinite-dimensional). Unfortunately, the inner product on one-dimensional Euclidean space is quite boring: it is precisely the ordinary multiplication you are already familiar with (or some rescaling thereof), and the only possible angles between two nonzero numbers are $0$ if they have the same sign and $\pi$ (in radians) if they have opposite signs. This is because a single basis element spans the entire space. Things become much more exciting in higher dimensions, and the theory behind all of this is important in hundreds of applications, including signal processing, string theory, and probability theory.
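To connect this back to the original question, the one-dimensional case is short enough to write out in full (using ordinary multiplication as the inner product on $\mathbb{R}^1$):

```python
import math

def angle_1d(a, b):
    # on R^1 the inner product of a and b is just a * b
    return math.acos(a * b / (abs(a) * abs(b)))

print(angle_1d(3.0, 7.0))    # 0.0              (same sign)
print(angle_1d(3.0, -7.0))   # 3.14159... = pi  (opposite signs)
```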