What's the difference between $\mathbb{R}^2$ and the complex plane?
I haven't taken any complex analysis course yet, but now I have this question that relates to it.
Let's have a look at a very simple example. Suppose $x,y$ and $z$ are Cartesian coordinates and we have a function $z=f(x,y)=\cos(x)+\sin(y)$. Now suppose I replace the $\mathbb{R}^2$ plane of $x,y$ with the complex plane and form a new function, $z=\cos(t)+i\sin(t)$.
So, can anyone tell me some fundamental differences between the complex plane and $\mathbb{R}^2$, in the spirit of this example: features that $\mathbb{R}^2$ has but the complex plane doesn't, or the other way around? (Actually, I am trying to understand why electrical engineers always want to put signals into the complex numbers rather than $\mathbb{R}^2$ when a signal is affected by two components.)
Thanks for helping me out!
$\mathbb{R}^2$ and $\mathbb{C}$ have the same cardinality, so there are (lots of) bijective maps from one to the other. In fact, there is one (or perhaps a few) that you might call "obvious" or "natural" bijections, e.g. $(a,b) \mapsto a+bi$. This is more than just a bijection:
- $\mathbb{R}^2$ and $\mathbb{C}$ are also metric spaces (under the 'obvious' metrics), and this bijection is an isometry, so these spaces "look the same".
- $\mathbb{R}^2$ and $\mathbb{C}$ are also groups under addition, and this bijection is a group isomorphism, so these spaces "have the same addition".
- $\mathbb{R}$ is a subfield of $\mathbb{C}$ in a natural way, so we can consider $\mathbb{C}$ as an $\mathbb{R}$-vector space, where it becomes isomorphic to $\mathbb{R}^2$ (this is more or less the same statement as above).
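The correspondences above are easy to verify numerically; here is a minimal Python sketch (the points `p` and `q` are arbitrary sample values):

```python
import math

# The "obvious" bijection R^2 -> C described above: (a, b) |-> a + bi.
def to_c(p):
    a, b = p
    return complex(a, b)

p, q = (1.0, 2.0), (-3.0, 0.5)

# Group isomorphism: componentwise addition maps to complex addition.
sum_r2 = (p[0] + q[0], p[1] + q[1])
assert to_c(sum_r2) == to_c(p) + to_c(q)

# Isometry: Euclidean distance equals the modulus of the complex difference.
dist_r2 = math.hypot(p[0] - q[0], p[1] - q[1])
assert math.isclose(dist_r2, abs(to_c(p) - to_c(q)))
```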
Here are some differences:
- Viewing $\mathbb{R}$ as a ring, $\mathbb{R}^2$ is actually a direct (Cartesian) product of $\mathbb{R}$ with itself. Direct products of rings in general come with a natural "product" multiplication, $(u,v)\cdot (x,y) = (ux, vy)$, and it is not usually the case that $(u,v)\cdot (x,y) = (ux-vy, uy+vx)$ makes sense or is interesting in general direct products of rings. The fact that it makes $\mathbb{R}^2$ look like $\mathbb{C}$ (in a way that preserves addition and the metric) is in some sense an accident. (Compare $\mathbb{Z}[\sqrt{3}]$ and $\mathbb{Z}^2$ in the same way.)
- Differentiable functions $\mathbb{C}\to \mathbb{C}$ are not the same as differentiable functions $\mathbb{R}^2\to\mathbb{R}^2$. (The meaning of "differentiable" changes in a meaningful way with the base field. See complex analysis.) The same is true of linear functions. (The map $(a,b)\mapsto (a,-b)$, or $z\mapsto \overline{z}$, is $\mathbb{R}$-linear but not $\mathbb{C}$-linear.)
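Both differences can be made concrete with a small Python sketch (the sample values are arbitrary):

```python
# Direct-product ("componentwise") multiplication vs. complex multiplication.
u, v = 1.0, 2.0
x, y = 3.0, 4.0
prod_componentwise = (u * x, v * y)        # (u, v)·(x, y) = (ux, vy)
z = complex(u, v) * complex(x, y)          # (ux - vy) + (uy + vx)i
prod_complex = (z.real, z.imag)
assert prod_componentwise != prod_complex  # the two products genuinely differ

# Conjugation is R-linear: it commutes with real scalar multiplication...
def f(z):
    return z.conjugate()

z0 = complex(1.0, 1.0)
assert f(2.5 * z0) == 2.5 * f(z0)

# ...but not C-linear: it does not commute with multiplication by i.
assert f(1j * z0) != 1j * f(z0)
```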
The big difference between $\mathbb{R}^2$ and $\mathbb{C}$: differentiability.
In general, a function from $\mathbb{R}^n$ to itself is differentiable at a point $\mathbf{x}$ if there is a linear transformation $\mathbf{J}$ such that
$$\lim_{h \to 0} \frac{\mathbf{f}(\mathbf{x}+\mathbf{h})-\mathbf{f}(\mathbf{x})-\mathbf{J}\mathbf{h}}{\|\mathbf{h}\|} = 0$$
where $\mathbf{f}, \mathbf{x}, $ and $\mathbf{h}$ are vector quantities.
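A quick numerical illustration of this definition (the map $f(x,y)=(\sin x + y,\, xy)$ and the base point are arbitrary choices for illustration): with $\mathbf{J}$ the Jacobian, the remainder ratio shrinks as $\|\mathbf{h}\|\to 0$.

```python
import math

# Example map f(x, y) = (sin(x) + y, x*y) and its Jacobian.
def f(x, y):
    return (math.sin(x) + y, x * y)

def jacobian(x, y):
    return ((math.cos(x), 1.0),
            (y,           x))

x0, y0 = 0.7, -0.3
(a, b), (c, d) = jacobian(x0, y0)
f0x, f0y = f(x0, y0)

for t in (1e-1, 1e-2, 1e-3, 1e-4):
    hx, hy = 0.6 * t, 0.8 * t              # h shrinking along a fixed direction
    fx, fy = f(x0 + hx, y0 + hy)
    rx = fx - f0x - (a * hx + b * hy)      # remainder f(x+h) - f(x) - Jh
    ry = fy - f0y - (c * hx + d * hy)
    ratio = math.hypot(rx, ry) / math.hypot(hx, hy)
    print(f"|h| = {t:.0e}: remainder/|h| = {ratio:.2e}")
```

The printed ratio decreases roughly linearly with $\|\mathbf{h}\|$, which is exactly what the limit being $0$ asserts.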
In $\mathbb{C}$, we have a stronger notion of differentiability given by the Cauchy-Riemann equations:
$$\begin{align*} f(x+iy) &\stackrel{\textrm{def}}{=} u(x,y)+iv(x,y) \\ u_x &= v_y, \\ u_y &= -v_x. \end{align*} $$
These equations, if satisfied, certainly give rise to a linear transformation of the required form; conversely, the way complex multiplication and division are defined forces these equations to hold in order for the limit
$$\lim_{h\ \to\ 0} \frac{f(z+h)-f(z)-Jh}{h} = 0$$
to exist. Note the difference here: we divide by $h$, not by its modulus.
In essence, multiplication between elements of $\mathbb{R}^2$ is not generally defined (although we could define one if we wanted to), nor is division (which we could also attempt, given a definition of multiplication). Not having these operations means that differentiability in $\mathbb{R}^2$ is a little more "topological": we're not overly concerned with where $\mathbf{h}$ is, just that it gets small, and that a non-singular linear transformation exists at the point of differentiation. This all stems from the generalization of the inverse function theorem, which can be approached almost completely topologically.
In $\mathbb{C}$, since we can divide by $h$ (because we have a rigorous notion of multiplication and division), we want to ensure that the derivative exists independently of the path $h$ takes. If there is some trickery due to the path $h$ takes, we can't wash it away with topology quite so easily.
In $\mathbb{R}^2$, the question of path independence is less obvious and less severe. In $\mathbb{C}$, differentiability implies analyticity: complex-differentiable functions are automatically analytic, whereas in the reals we can have differentiable functions that are not analytic.
Example:
Consider $f(x+iy) = x^2-y^2+2ixy$ (that is, $f(z) = z^2$). We have $u(x,y) = x^2-y^2$ and $v(x,y) = 2xy$. It is easy to check that $$u_x = 2x = v_y, \\ u_y = -2y = -v_x,$$ so this function is complex-differentiable (indeed analytic). Viewed over the reals, with $f_1 = x^2-y^2$ and $f_2 = 2xy$, we get $$J = \begin{pmatrix} 2x & -2y \\ 2y & 2x \end{pmatrix}.$$ Taking the determinant, we find $\det J = 4x^2+4y^2$, which is non-zero except at the origin.
By contrast, consider $f(x+iy) = x^2+y^2-2ixy$. Then,
$$u_x = 2x \neq -2x = v_y,$$
so the first Cauchy-Riemann equation fails (the second one, $u_y = 2y = -v_x$, happens to hold), and the function is not complex-differentiable.
However, $$J = \begin{pmatrix} 2x & 2y \\ -2y & -2x \end{pmatrix},$$ with $\det J = 4y^2-4x^2$, is not everywhere singular, so the function certainly has a (real) derivative as a map $\mathbb{R}^2\to\mathbb{R}^2$.
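Both examples can be checked numerically by comparing difference quotients as $h \to 0$ along the real and imaginary axes (the base point $z_0 = 1+2i$ and the step size are arbitrary choices for illustration):

```python
def g(z):            # g(x+iy) = x^2 - y^2 + 2ixy, i.e. g(z) = z^2
    return z * z

def f(z):            # f(x+iy) = x^2 + y^2 - 2ixy
    x, y = z.real, z.imag
    return complex(x * x + y * y, -2 * x * y)

z0, t = complex(1.0, 2.0), 1e-6

def quotients(func):
    # Difference quotients with h approaching 0 along the real axis
    # and along the imaginary axis, respectively.
    along_real = (func(z0 + t) - func(z0)) / t
    along_imag = (func(z0 + 1j * t) - func(z0)) / (1j * t)
    return along_real, along_imag

gr, gi = quotients(g)   # both close to 2*z0 = 2 + 4i: path-independent
fr, fi = quotients(f)   # about 2 - 4i vs -2 - 4i: the limit depends on the path
print("z^2          :", gr, gi)
print("x^2+y^2-2ixy :", fr, fi)
```

For $g(z)=z^2$ the two quotients agree (both approximate $g'(z_0)=2z_0$); for the second function they disagree, which is exactly the failure of complex differentiability detected by the Cauchy-Riemann equations.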
I'll explain this more from an electrical engineer's perspective (which I am) than a mathematician's perspective (which I'm not).
The complex plane has several useful properties which arise from Euler's formula:
$$Ae^{i\theta}=A(\cos(\theta)+i\sin(\theta))$$
Unlike points in the real plane $\mathbb{R}^2$, complex numbers can be added, subtracted, multiplied, and divided. Multiplication and division have a useful meaning that comes about from Euler's formula:
$$Ae^{i\theta_1}\cdot{Be^{i\theta_2}}=ABe^{i(\theta_1+\theta_2)}$$
$$Ae^{i\theta_1}/{Be^{i\theta_2}}=\frac{A}{B}e^{i(\theta_1-\theta_2)}$$
In other words, multiplying two numbers in the complex plane does two things: multiplies their absolute values, and adds together the angle that they make with the real number line. This makes calculating with phasors a simple matter of arithmetic.
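A short Python check of these phasor rules using the standard `cmath` module (the magnitudes and angles are arbitrary sample values):

```python
import cmath
import math

# Two phasors given as (magnitude, phase in radians).
A, th1 = 2.0, math.pi / 6
B, th2 = 0.5, math.pi / 4

p1 = cmath.rect(A, th1)   # A * e^{i*th1}
p2 = cmath.rect(B, th2)   # B * e^{i*th2}

# Multiplication: magnitudes multiply, phases add.
prod = p1 * p2
assert math.isclose(abs(prod), A * B)
assert math.isclose(cmath.phase(prod), th1 + th2)

# Division: magnitudes divide, phases subtract.
quot = p1 / p2
assert math.isclose(abs(quot), A / B)
assert math.isclose(cmath.phase(quot), th1 - th2)
```

This is exactly why phasor arithmetic reduces to ordinary multiplication and division of complex numbers. (Note that `cmath.phase` returns values in $(-\pi, \pi]$, so phase sums that leave this range wrap around.)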
As others have stated, addition, subtraction, multiplication, and division could likewise be defined on $\mathbb{R}^2$, but it makes more sense to use the complex plane, because these operations arise naturally from the definition of imaginary numbers: $i^2=-1$.