Is complex analysis more "real" than real analysis?
Not all real polynomials of degree $n$ have $n$ roots (counting multiplicity), because some of the roots are complex. In the real domain a matrix can have no eigenvalues at all, e.g. a 2-dimensional rotation matrix, but every real matrix has a complex eigenvalue. These are manifestations of $\mathbb{C}$, unlike $\mathbb{R}$, being algebraically closed, i.e. every non-constant polynomial equation having a solution. In the real domain $\sqrt{x}$ and $\ln{x}$ are defined only for positive $x$, because for negative $x$ the value is a complex number, and it is not unique.
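A quick numerical illustration of the eigenvalue point (a sketch using numpy): a real rotation matrix has no real eigenvalues, but its complex eigenvalues appear immediately.

```python
import numpy as np

# Rotation by 90 degrees in the plane: no real eigenvalues...
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# ...but over the complex numbers the characteristic polynomial splits,
# and the eigenvalues are +/- i
print(np.linalg.eigvals(R))   # [0.+1.j  0.-1.j]
```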
In the real domain exponentials and trigonometric functions are completely different functions, but in the complex domain they are related by the simple Euler formulas. The same goes for logarithms and inverse trigonometric functions. This is the main reason why identities for hyperbolic functions are almost the same as the familiar trigonometric identities. Many definite integrals of functions that do not have elementary antiderivatives can be computed in elementary terms by extending the path of integration into the complex plane and using residues, e.g. $\int_0^\infty\frac{\ln x}{(1+x^2)^2}\,dx=-\frac{\pi}{4}$. More generally, integral and series representations of many real functions can be converted into each other because these functions extend into the complex plane, where contour integrals reduce to sums over residues; the Riemann zeta function is a typical beneficiary. These manifest another advantage of complex analysis over the real kind: many commonly used real functions extend to holomorphic functions in the complex plane, and for holomorphic functions the tools of calculus are much stronger than for merely smooth ones, which is how real analysis mostly treats them.
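As a sanity check on that residue computation, here is a small numerical sketch (using scipy; splitting the integral at $x=1$ helps the quadrature with the logarithmic singularity at $0$):

```python
import numpy as np
from scipy.integrate import quad

integrand = lambda x: np.log(x) / (1 + x**2)**2

# integrate over (0, 1) and (1, inf) separately; the residue computation gives -pi/4
val = quad(integrand, 0, 1)[0] + quad(integrand, 1, np.inf)[0]
print(val, -np.pi/4)   # both ~ -0.7853981...
```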
In the real domain ellipses and hyperbolas are different types of curves, but in the complex domain they are related by a rotation of axes, i.e. they are the 'same' (more precisely, we are looking at two different projections of the same complex curve). In a similar way spherical and hyperbolic geometries are related by a complex rotation. The Schrödinger equation of quantum mechanics and the heat equation of classical physics are also related by a complex rotation of time called the Wick rotation. The path-integral interpretation of quantum mechanical solutions can be made precise using this relation and the Feynman–Kac formula.
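For instance, in units with $\hbar=m=1$ the substitution $t=-i\tau$ (the Wick rotation) gives $\partial_t=i\,\partial_\tau$, so the free Schrödinger equation turns into the heat equation:
$$ i\,\frac{\partial\psi}{\partial t}=-\frac{1}{2}\frac{\partial^2\psi}{\partial x^2} \quad\xrightarrow{\;t=-i\tau\;}\quad \frac{\partial\psi}{\partial\tau}=\frac{1}{2}\frac{\partial^2\psi}{\partial x^2}. $$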
Heaviside developed operational calculus for solving ordinary differential equations with constant coefficients by treating the time derivative as a 'variable' $p$ and writing solutions in terms of symbolic 'functions' of it. It turned out that the magic worked because $p$ is in fact a complex variable, and Heaviside's symbolic solutions can be converted into real ones by taking the inverse Laplace transform, which is a contour integral in the complex plane.
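As a toy illustration (a sketch using sympy, not Heaviside's own notation): to solve $y'+y=1$ with $y(0)=0$, one writes $(p+1)Y=\frac{1}{p}$, so $Y=\frac{1}{p(p+1)}$, and the inverse Laplace transform turns this symbol back into a genuine function of $t$.

```python
import sympy as sp

t, p = sp.symbols('t p', positive=True)

# Operational solution of y' + y = 1 with y(0) = 0:
# treating d/dt as the symbol p gives (p + 1) Y = 1/p.
Y = 1 / (p * (p + 1))

# The symbolic answer becomes a real function of t via the inverse Laplace
# transform (a contour integral in the complex plane under the hood).
y = sp.inverse_laplace_transform(Y, p, t)
print(sp.simplify(y))   # expect 1 - exp(-t) for t > 0
```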
Harmonic functions, solutions to the Laplace equation, have many nice analytic properties like being sums of convergent power series, attaining extrema at the boundary of their domains, being equal at any point to the average of their values on any circle centered at it, etc. The underlying reason is that harmonic functions are exactly the real and imaginary parts of holomorphic functions. If the potential of a vector field is a harmonic function $\varphi$ then its flow lines are level curves of another harmonic function $\psi$, exactly the one that makes $\varphi+i\psi$ holomorphic. Solution formulas for the Dirichlet boundary problem for the Laplace equation in some special domains are reflections of the Cauchy integral formula for holomorphic functions that works in 'any' domain.
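A minimal sketch of this correspondence (using sympy) with the holomorphic function $z^2$: its real and imaginary parts $\varphi=x^2-y^2$ and $\psi=2xy$ are both harmonic and satisfy the Cauchy-Riemann equations, so the level curves of $\psi$ are the flow lines of the field with potential $\varphi$.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.expand((x + sp.I*y)**2)           # the holomorphic function z^2
phi, psi = sp.re(f), sp.im(f)            # phi = x^2 - y^2,  psi = 2*x*y

# Both parts are harmonic ...
print(sp.simplify(sp.diff(phi, x, 2) + sp.diff(phi, y, 2)))   # 0
print(sp.simplify(sp.diff(psi, x, 2) + sp.diff(psi, y, 2)))   # 0
# ... and they satisfy the Cauchy-Riemann equations
print(sp.simplify(sp.diff(phi, x) - sp.diff(psi, y)))         # 0
print(sp.simplify(sp.diff(phi, y) + sp.diff(psi, x)))         # 0
```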
A notable example of how Complex Analysis reveals something deep about Physics is found in Spectral Theory. There is a type of conservation law where all of the singularities in the finite plane are related to properties of the singularity at $\infty$. For example, if you have a function that is holomorphic except for a finite number of poles in the complex plane, then the sum of its residues is determined by the residue at $\infty$ (it is minus that residue). Extensions of these ideas allow one to prove the completeness of eigenfunctions for an operator $A$ by replacing integrals around the singularities of $(\lambda -A)^{-1}$ with a single residue at $\infty$, computed from $\lim_{\lambda\rightarrow\infty}\lambda(\lambda-A)^{-1}=I$.
For example, if $A=\frac{1}{i}\frac{d}{dt}$ on $L^{2}[0,2\pi]$, on the domain $\mathcal{D}$ consisting of periodic absolutely continuous functions $f$, then $(\lambda -A)^{-1}$ is defined everywhere except at $\lambda = 0,\pm 1,\pm 2,\pm 3,\cdots$, where it has simple poles with residues $$ R_{n}f = \frac{1}{2\pi}\int_{0}^{2\pi}f(\theta')e^{-in\theta'}\,d\theta'\, e^{in\theta}. $$ These residues are projections onto the eigenspaces associated with the values of $\lambda$ where $(\lambda -A)^{-1}$ is singular. Though the argument is very delicate, one can replace the sum of all these residues with one residue at $\infty$ evaluated as $\lim_{v\rightarrow\pm\infty}(u+iv)(u+iv-A)^{-1}f=f$, which leads to a proof of the $L^{2}$ convergence of the Fourier series for an arbitrary $f \in L^{2}[0,2\pi]$: $$ f = \sum_{n=-\infty}^{\infty}\frac{1}{2\pi}\int_{0}^{2\pi}f(\theta')e^{-in\theta'}\,d\theta'\, e^{in\theta}. $$ This type of argument can be used to prove the convergence of Fourier-type transforms mixed with discrete eigenfunctions, too. And these arguments, where singularities of the resolvent in the finite plane are traded for a single residue at $\infty$, can give pointwise convergence results that go well beyond Functional Analysis and $L^{2}$ theory.
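Here is a small numerical sketch of the resulting $L^2$ convergence (using numpy, with a discontinuous square wave standing in for the arbitrary $f$): the partial sums built from the projections $R_nf$ converge to $f$ in the $L^2[0,2\pi]$ norm.

```python
import numpy as np

# L^2[0, 2pi] convergence of the Fourier series of a discontinuous square wave
theta = np.linspace(0, 2*np.pi, 4096, endpoint=False)
f = np.where(theta < np.pi, 1.0, -1.0)

def l2_error(N):
    # partial sum of the projections R_n f, i.e. sum of c_n * e^{i n theta}
    s = np.zeros_like(theta, dtype=complex)
    for n in range(-N, N + 1):
        c = np.mean(f * np.exp(-1j * n * theta))         # = (1/2pi) * integral of f e^{-i n t}
        s += c * np.exp(1j * n * theta)
    return np.sqrt(2*np.pi * np.mean(np.abs(f - s)**2))  # L^2 norm of f - S_N f

for N in (4, 16, 64, 256):
    print(N, l2_error(N))   # the error decreases toward 0 as N grows
```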
Such methods tie in beautifully with how Dirac viewed an observable as some kind of composite number with lots of possible values determined from the singularities of $\frac{1}{\lambda -A}$, which he then used to generalize the Cauchy integral formula $$ p(A)=\frac{1}{2\pi i}\oint_{C} p(\lambda)\frac{1}{\lambda -A}\,d\lambda. $$ In this strange setting of Complex Analysis, the results of Quantum eigenvector expansion are compelling and almost natural, even though the proofs are not so simple.
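For a finite-dimensional taste of this formula, one can check it numerically for a small matrix (a sketch using numpy; the matrix, polynomial, and contour are arbitrary choices): the contour integral of $p(\lambda)(\lambda-A)^{-1}$ around the spectrum reproduces $p(A)$.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])              # eigenvalues 2 and 3
p = lambda lam: lam**2                  # a sample polynomial; p(A) should equal A @ A

# circle |lambda| = 5 enclosing the spectrum, discretized for the trapezoid rule
N, R = 2000, 5.0
theta = np.linspace(0, 2*np.pi, N, endpoint=False)
lam = R * np.exp(1j * theta)
dlam = 1j * lam * (2*np.pi / N)

I2 = np.eye(2)
total = sum(p(l) * np.linalg.inv(l*I2 - A) * dl for l, dl in zip(lam, dlam))
pA = total / (2j * np.pi)

print(np.round(pA.real, 6))   # ~ [[4. 5.] [0. 9.]]
print(A @ A)                  # the same matrix, computed directly
```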
If a function is analytic on the upper half-plane and goes to zero fast enough at infinity, then the Kramers-Kronig relations hold. So if $f(x)=f_1(x)+if_2(x)$ is our function on the real axis, then
$$f_1(x)=\frac{1}{\pi}\,\mathcal{P}\int_{-\infty}^\infty\frac{f_2(x')}{x'-x}\,dx'$$ and $$f_2(x)=-\frac{1}{\pi}\,\mathcal{P}\int_{-\infty}^\infty\frac{f_1(x')}{x'-x}\,dx'$$
so you can compute the real part of $f$ from its imaginary part and vice versa.
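To see the relations in action, here is a numerical sketch (using scipy's principal-value quadrature) for the test function $f(z)=\frac{1}{z+i}$, which is holomorphic in the upper half-plane and decays at infinity; on the real axis $f_1(x)=\frac{x}{1+x^2}$ and $f_2(x)=-\frac{1}{1+x^2}$.

```python
import numpy as np
from scipy.integrate import quad

# Test function holomorphic in the upper half-plane: f(z) = 1/(z + i).
# On the real axis f = (x - i)/(x^2 + 1), so
f1 = lambda x: x / (1 + x**2)     # real part
f2 = lambda x: -1 / (1 + x**2)    # imaginary part

def real_from_imag(x, cutoff=1e3):
    # (1/pi) * P.V. integral of f2(x')/(x' - x) dx', truncated at +/- cutoff;
    # the 'cauchy' weight makes quad compute the principal value at x' = x
    val, _ = quad(f2, -cutoff, cutoff, weight='cauchy', wvar=x)
    return val / np.pi

for x in (0.0, 0.5, 2.0, -3.0):
    print(x, real_from_imag(x), f1(x))   # the two columns should agree
```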
In physics the impulse response of a physical system frequently satisfies the preconditions for these relations to hold. For example, in optics the real part of the response function is related to the refractive index and the imaginary part to attenuation. This makes it possible to compute the refractive index at varying frequencies just from knowing the attenuation at those frequencies. It is almost magic that these quantities are connected so directly, but the connection is hard to see unless you go complex.
Other examples include the way audio engineers can read off the phase delay of a component (modulo certain preconditions) from its Bode plot, and similar phenomena in the study of oscillating mechanical systems and in particle physics.
One more example is two-dimensional electrostatics. The potential in a domain $D$ free of electric charges satisfies the Laplace equation $\varphi_{xx}+\varphi_{yy}=0$. From the 'real' point of view, its solutions for
- two infinite uniformly charged planes
- two infinite coaxial uniformly charged cylinders
have nothing to do with each other. However, from the 'complex' point of view one is obtained from the other by the conformal transformation $w(z)=e^z$, which maps the strip $a<\Re z<b$ onto the annulus $e^a<|w|<e^b$.
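A small numerical sketch of this (using numpy, and normalizing the plane-capacitor potential to $\varphi=\Re z$ on the strip and the coaxial-cylinder potential to $\ln|w|$ on the annulus): under $w=e^z$ the two potentials agree point by point.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.0, 1.0

# random points in the strip a < Re z < b (the region between two charged planes)
z = rng.uniform(a, b, 100) + 1j * rng.uniform(-5, 5, 100)
w = np.exp(z)                      # conformal map onto the annulus e^a < |w| < e^b

phi_strip   = z.real               # plane-capacitor potential: linear in Re z
phi_annulus = np.log(np.abs(w))    # coaxial-cylinder potential: logarithmic in |w|

print(np.max(np.abs(phi_strip - phi_annulus)))   # ~ 1e-16: the two solutions agree
```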