There are a few good answers to this question, depending on the audience. I've used all of these on occasion.

A way to solve polynomials

We came up with equations like "$x - 5 = 0$; what is $x$?", and the natural numbers solved them easily. Then we asked, "Wait, what about $x + 5 = 0$?" So we invented negative numbers. Then we asked, "Wait, what about $2x = 1$?" So we invented rational numbers. Then we asked, "Wait, what about $x^2 = 2$?" So we invented irrational numbers.

Finally, we asked, "Wait, what about $x^2 = -1$?" This was the only question left, so we invented the "imaginary" numbers to solve it. All the other numbers, at some point, didn't exist and didn't seem "real", but now they're fine. And now that we have imaginary numbers, we can solve every polynomial (this is the fundamental theorem of algebra), so it makes sense that this is the place to stop.

Pairs of numbers

This explanation goes the route of redefinition. Tell the listener to forget everything he or she knows about imaginary numbers. You're defining a new number system in which numbers always come in pairs. Why? For fun. Then go through how addition and multiplication work. Try to find a good "realistic" use of pairs of numbers (many exist).

Then show that in this system, with multiplication defined by $(a,b) \cdot (c,d) = (ac - bd,\ ad + bc)$, we get $(0,1) \cdot (0,1) = (-1,0)$. In other words, we've defined a new system under which it makes sense to say that $\sqrt{-1} = i$, where $i = (0,1)$. And that's really all there is to imaginary numbers: a definition of a new number system, one which turns out to make sense in most places. And under that system, there is an answer to $\sqrt{-1}$.
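If the listener likes to compute, the rule is easy to play with in a few lines of code. Here is a minimal sketch (the function names are mine, purely for illustration):

```python
# A toy number system: pairs of reals, with the multiplication rule above.

def add(p, q):
    """(a,b) + (c,d) = (a+c, b+d): componentwise addition."""
    return (p[0] + q[0], p[1] + q[1])

def mul(p, q):
    """(a,b) * (c,d) = (ac - bd, ad + bc)."""
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

i = (0, 1)
print(mul(i, i))  # (-1, 0): the pair playing the role of -1
```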

The historical explanation

Explain the history of the imaginary numbers. Showing that mathematicians also fought against them for a long time helps people understand the mathematical process, i.e., that it's all definitions in the end.

As I recall the story: when solving cubic equations with Cardano's formula, mathematicians kept obtaining intermediate expressions involving $\sqrt{-1}$, and they kept throwing the solutions out, since there was no such thing.

Then one mathematician, Rafael Bombelli, decided to just "roll with it": he kept calculating, and found that all those square roots of negatives cancelled each other out.

Amazingly, the answer that was left was the correct (real) root of the cubic. This led him to think that there was a valid reason to use $\sqrt{-1}$, even if it took a long time to understand it.
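To make this concrete, the standard example (Bombelli's, around 1572) is the cubic $x^3 = 15x + 4$, which has the perfectly real root $x = 4$. Cardano's formula, however, produces it via square roots of negative numbers:

$$x = \sqrt[3]{2 + \sqrt{-121}} + \sqrt[3]{2 - \sqrt{-121}} = \left(2 + \sqrt{-1}\right) + \left(2 - \sqrt{-1}\right) = 4,$$

since $\left(2 \pm \sqrt{-1}\right)^3 = 2 \pm 11\sqrt{-1} = 2 \pm \sqrt{-121}$. The "imaginary" parts cancel, leaving the correct real answer.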


The concept of mathematical numbers and "existing" is a tricky one. What actually "exists"?

Do negative numbers exist? Of course they do not. You can't have a negative number of apples.

Yet the beauty of negative numbers is that once we define them (rigorously), we can suddenly solve problems we were never able to solve before, or solve them in a much simpler way.

Imagine trying to do simple physics without the idea of negative numbers!

But are they "real"? Do they "exist"? No, they don't. But they are just tools that help us solve real-life problems.

To go back to your question about complex numbers: whether or not they "exist" has no bearing on whether they are useful in solving the problems of everyday life, or on making those problems many, many times easier to solve.

The math that makes your computer run involves the tool that is complex numbers, for instance (the Fourier analysis behind signal processing is full of them).


One need only consult the history of algebra to find many informal discussions on the existence and consistency of complex numbers. Any informal attempt to justify the existence of $\mathbb C$ will face the same obstacles that existed in earlier times. Namely, the lack of any rigorous (set-theoretic) foundation makes it difficult to be precise - both syntactically and semantically. Nowadays the set-theoretic foundation of algebraic structures is so subconscious that it is easy to overlook just how much power it provides for such purposes. But this oversight is easily remedied. One need only consult some of the older literature, where even leading mathematicians struggled immensely to rigorously define complex numbers. For example, see the quote below by Cauchy and Hankel's scathing critique - which is guaranteed to make your jaw drop! (Below is an excerpt from my post on the notion of formal polynomial rings and their quotients.)

A major accomplishment of the set-theoretical definition of algebraic structures was to eliminate imprecise syntax and semantics. Eliminating the syntactic polynomial term $\rm\ a+b\cdot x+c\cdot x^2\ $ and replacing it by its rigorous set-theoretic semantic reduction $\rm\:(a,b,c,0,0,\ldots)\:$ eliminates many ambiguities. There can no longer be any doubt about the precise denotation of the symbols $\rm\: x,\; +,\;\cdot\:,$ or about the meaning of equality of polynomials, since, by set theoretic definition, tuples are equal iff their components are equal. The set-theoretic representation ("implementation") of these algebraic objects gives them rigorous meaning, reducing their semantics to that of set-theory.

Similarly for complex numbers $\rm\,a + b\cdot {\it i}\ $ and their set-theoretic representation by Hamilton as pairs of reals $\rm\,(a,b).\,$ Before Hamilton gave this semantic reduction of $\,\mathbb C\,$ to $\Bbb R^2,\,$ prior syntactic constructions (e.g. by Cauchy) as formal expressions or terms $\rm\:a+b\cdot {\it i}\:$ were subject to heavy criticism regarding the precise denotation of their constituent symbols, e.g. precisely what is the meaning of the symbols $\rm\;{\it i},\, +,\, =\,?\, $ In modern language, Cauchy's construction of $\mathbb C$ is simply the quotient ring $\rm\:\mathbb R[x]/(x^2+1)\cong \Bbb R[{\it i}],\,$ which he described essentially as real polynomial expressions modulo $\rm\:x^2+1$. However, in Cauchy's time mathematics lacked the necessary (set-theoretic) foundations to rigorously define the syntactic expressions comprising the polynomial ring term-algebra $\rm\mathbb R[x]$, and its quotient ring of congruence classes $\rm\:(mod\ x^2+1)$. The best that Cauchy could do was to attempt to describe the constructions in terms of imprecise natural (human) language, e.g., in 1821 Cauchy wrote:

In analysis, we call a symbolic expression any combination of symbols or algebraic signs which means nothing by itself but which one attributes a value different from the one it should naturally be [...] Similarly, we call symbolic equations those that, taken literally and interpreted according to conventions generally established, are inaccurate or have no meaning, but from which can be deduced accurate results, by changing and altering, according to fixed rules, the equations or symbols within [...] Among the symbolic expressions and equations whose theory is of considerable importance in analysis, one distinguishes especially those that have been called imaginary. $\quad$ -- Cauchy, Cours d'analyse, 1821, S.7.1

While nowadays, using set theory, we can rigorously interpret such "symbolic expressions" as terms of formal languages or term algebras, it was far too imprecise in Cauchy's time to have any hope of making sense to his colleagues, e.g. Hankel replied scathingly:

If one were to give a critique of this reasoning, we can not actually see where to start. There must be something "which means nothing," or "which is assigned a different value than it should naturally be" something that has "no sense" or is "incorrect", coupled with another similar kind, producing something real. There must be "algebraic signs" - are these signs for quantities or what? as a sign must designate something - combined with each other in a way that has "a meaning." I do not think I'm exaggerating in calling this an unintelligible play on words, ill-becoming of mathematics, which is proud and rightly proud of the clarity and evidence of its concepts. $\quad$-- Hankel

Thus it comes as no surprise that Hamilton's elimination of such "meaningless" symbols - in favor of pairs of reals - served as a major step forward in placing complex numbers on a foundation more amenable to his contemporaries. Although there was not yet any theory of sets in which to rigorously axiomatize the notion of pairs, pairs were far easier to accept naively - especially given the closely associated geometric interpretation of complex numbers, which was already well known. Hamilton introduced pairs as 'couples' in 1837 [1]:

p. 6: The author acknowledges with pleasure that he agrees with M. Cauchy, in considering every (so-called) Imaginary Equation as a symbolic representation of two separate Real Equations: but he differs from that excellent mathematician in his method generally, and especially in not introducing the sign $\sqrt{-1}$ until he has provided for it, by his Theory of Couples, a possible and real meaning, as a symbol of the couple (0,1).

p. 111: But because Mr. Graves employed, in his reasoning, the usual principles respecting Imaginary Quantities, and was content to prove the symbolical necessity without showing the interpretation, or inner meaning, of his formulae, the present Theory of Couples is published to make manifest that hidden meaning: and to show, by this remarkable instance, that expressions which seem according to common views to be merely symbolical, and quite incapable of being interpreted, may pass into the world of thoughts, and acquire reality and significance, if Algebra be viewed as not a mere Art or Language, but as the Science of Pure Time. $\quad$ -- Hamilton, 1837
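To see just how close the two constructions are, one can carry out a multiplication explicitly in the quotient ring:

$$(a + b\,x)(c + d\,x)\ =\ ac + (ad + bc)\,x + bd\,x^2\ \equiv\ (ac - bd) + (ad + bc)\,x \pmod{x^2 + 1},$$

which is precisely Hamilton's rule for multiplying couples: $(a,b)\,(c,d) = (ac - bd,\ ad + bc)$.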

Not until the much later development of set-theory was it explicitly realized that ordered pairs and, more generally, n-tuples, play a fundamental foundational role, providing the raw materials needed to construct composite (sum/product) structures - such as the above constructions of polynomial rings and their quotients. Indeed, as Akihiro Kanamori wrote on p. 289 (17) of his very interesting paper [2] on the history of set theory:

In 1897 Peano explicitly formulated the ordered pair using $\rm\:(x, y)\:$ and moreover raised the two main points about the ordered pair: First, equation 18 of his Definitions stated the instrumental property which is all that is required of the ordered pair:

$$\rm (x,y) = (a,b) \ \ \iff \ \ x = a \ \ and\ \ y = b $$

Second, he broached the possibility of reducibility, writing: "The idea of a pair is fundamental, i.e., we do not know how to express it using the preceding symbols."
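As an aside: the reducibility that Peano broached was eventually achieved, most famously by Kuratowski in 1921, who defined the ordered pair purely set-theoretically as

$$ (x, y)\ :=\ \{\{x\},\ \{x, y\}\}, $$

a definition that can be checked to satisfy exactly the instrumental property above.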

Once set-theory was fully developed, one had the raw materials (syntax and semantics) to provide rigorous constructions of algebraic structures and precise languages for term algebras. The polynomial ring $\rm\:R[x]\:$ is nowadays just a special case of much more general constructions of free algebras. Such equationally axiomatized algebras and their genesis via so-called 'universal mapping properties' are topics discussed at length in any course on Universal Algebra - e.g. see Bergman [3] for a particularly lucid presentation.

[1] William Rowan Hamilton. Theory of conjugate functions, or algebraic couples; with a preliminary and elementary essay on algebra as the science of pure time.
Trans. Royal Irish Academy, v. 17, part 1 (1837), pp. 293-422.
http://www.maths.tcd.ie/pub/HistMath/People/Hamilton/PureTime/PureTime.pdf

[2] Akihiro Kanamori. The Empty Set, the Singleton, and the Ordered Pair
The Bulletin of Symbolic Logic, Vol. 9, No. 3. (Sep., 2003), pp. 273-298.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.95.9839
PS http://www.math.ucla.edu/~asl/bsl/0903/0903-001.ps
PDF http://ifile.it/b20c48j

[3] George M. Bergman. An Invitation to General Algebra and Universal Constructions.
PS http://math.berkeley.edu/~gbergman/245/
PDF http://ifile.it/yquj5w1


No number "really exists" the way trees or atoms exist. In physics, however, people have found use for complex numbers just as they have found use for real numbers.


In my opinion, the most natural way to view complex numbers is as a class of maps from the plane to itself. Specifically, let's define $(R, \theta)$ to be the map which scales every point of the plane by the factor $R$ and then rotates it by the angle $\theta$. We may call these maps "dilations with rotations."

Such maps can be added (pointwise) and composed (multiplied) in the obvious way, and it's not hard to work out that the sum and product of two such mappings is another dilation with rotation.

We can also identify the real number $x$ with the map $(x,0)$, i.e. the map which multiplies every point in the plane by $x$. Then we see that these maps have the magical property that $-1$ has a square root! Namely, if $P$ is the mapping $(1,\pi/2)$ (i.e. rotate every point by the angle $\pi/2$), then applying $P$ twice is the same as multiplying every point by $-1$, i.e. $P^2=-1$!
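Spelling out the composition rule makes this a one-line computation: composing two dilations with rotations multiplies the scale factors and adds the angles,

$$(R_1, \theta_1) \circ (R_2, \theta_2) = (R_1 R_2,\ \theta_1 + \theta_2), \qquad \text{so} \qquad P^2 = (1, \tfrac{\pi}{2}) \circ (1, \tfrac{\pi}{2}) = (1, \pi),$$

and rotation by the angle $\pi$ is exactly multiplication of every point by $-1$.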

As should be obvious by now, these maps are just complex numbers in disguise.

Unsurprisingly, they are singularly useful for solving polynomial equations. Indeed, the real number $x'$ is a root of the polynomial equation $a_0 x^n + a_1 x^{n-1} + \cdots + a_n = 0$ if and only if the mapping $(x',0)$ satisfies the same equation. So viewing polynomial equations over the set of these mappings loses no solutions, while at the same time giving us additional freedom to do operations such as taking square roots of negative numbers.