How can I find the error in a proof that $1=0$?

So I devised this proof that $1=0$. Of course the conclusion is false, but I can't see where the argument goes wrong. Where is the error?

$$\begin{align*}
x+1&=y\\
\frac{x+1}{y}&=1\\
\frac{x+1}{y}-1&=0\\
\frac{x+1}{y}-\frac{y}{y}&=0\\
\frac{x-y+1}{y}&=0\\
x-y+1&=0\\
x-y+1&=\frac{x-y+1}{y}\\
y(x-y+1)&=x-y+1\\
y&=1\\
x+1&=1\\
x&=0\qquad * * * *\\
y-1&=x\\
\frac{y-1}{x}&=1\\
\frac{y-1}{x}-1&=0\\
\frac{y-1}{x}-\frac{x}{x}&=0\\
\frac{y-x-1}{x}&=0\\
y-x-1&=0\\
y-x-1&=\frac{y-x-1}{x}\\
x(y-x-1)&=y-x-1\\
x&=1\qquad * * * *\\
1&=0\\
\end{align*}$$


When debugging proofs about abstract objects, the error often becomes easier to locate after specializing to more concrete objects. Your proof begins with the equation $\rm\:y = x\!+\!1,\:$ so you are working with a general point $\rm\:(x,y)\:$ on the line $\rm\:y = x\!+\!1.\:$ It is easy to find simple special points on the line, e.g. the integer points $\rm\:(x,y) = (n,n\!+\!1).\:$ In particular, it is easy to choose such special points that do not satisfy your inference that $\rm\:y = 1,\:$ e.g. the point $\rm\:(x,y) = (1,2).\:$ Now substitute these values into your proof, and find the first place where it yields an $\rm\color{#c00}{incorrect\ equality}$ between integers. Then the inference yielding that incorrect equation must be invalid. Let's do that, successively evaluating all equations in the proof at $\rm\,(x,y) = (1,2).\,$ Omitting some steps, we get

$$\begin{align*}
\rm x+1 &\rm = y & 1+1 &= 2 & 2 = 2\,\ \color{#0a0}\checkmark\\
\rm x-y+1&=0 & 1-2+1 &= 0 & 0 = 0\,\ \color{#0a0}\checkmark\\
\rm y\:\!(x-y+1)&\rm =x-y+1 & 2\,(1-2+1) &= 1-2+1 & 0 = 0\,\ \color{#0a0}\checkmark\\
\rm y&=1 & \color{#c00}2\ & \color{#c00}{= 1} & \color{#c00}{ 2 = 1}\phantom{\,\ \color{#0a0}\checkmark}
\end{align*}\qquad\qquad$$

Thus the final inference is invalid. Indeed, it was erroneously derived by dividing by (or cancelling) the expression $\rm\:x-y+1\ = 0.\:$ Note how this method allowed us to quickly pinpoint the location of the error using only knowledge of simpler objects (arithmetic of integers versus polynomials). For some similar examples see here.
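
If you like, this check can also be mechanized. Below is a minimal sketch in Python (the transcription of the equations into `lambda` pairs is my own, not part of the proof): it evaluates both sides of each equation in the first half of the proof at the concrete point $(x, y) = (1, 2)$ and stops at the first one whose sides disagree.

```python
# Check each equation of the (first half of the) proof at a concrete point.
# Each entry is (label, lhs, rhs), where lhs and rhs are functions of (x, y).
# The first equation whose two sides disagree marks the invalid inference.
equations = [
    ("x + 1 = y",                lambda x, y: x + 1,            lambda x, y: y),
    ("(x + 1)/y = 1",            lambda x, y: (x + 1) / y,      lambda x, y: 1),
    ("(x - y + 1)/y = 0",        lambda x, y: (x - y + 1) / y,  lambda x, y: 0),
    ("x - y + 1 = 0",            lambda x, y: x - y + 1,        lambda x, y: 0),
    ("y(x - y + 1) = x - y + 1", lambda x, y: y * (x - y + 1),  lambda x, y: x - y + 1),
    ("y = 1",                    lambda x, y: y,                lambda x, y: 1),
]

x, y = 1, 2
for label, lhs, rhs in equations:
    left, right = lhs(x, y), rhs(x, y)
    print(f"{label:26}  {left} = {right}  {'ok' if left == right else 'FAILS'}")
    if left != right:
        break
```

Running it, every line up to and including $\rm y(x-y+1)=x-y+1$ checks out, and the first failure is $\rm y=1$, matching the table above.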

Analogous methods prove helpful generally: when studying abstract objects, if something is not clear, look at concrete specializations to gain further insight into the general case. It is only by such back-and-forth journeys between the abstract and the concrete that we can ever hope to develop intuition about such abstract objects. Once we do, the abstract objects become more concrete and more intuitive. Then, with such mastery, we can understand these objects better by considering further abstractions, taking one step higher on the ladder (web) of abstraction.

For example, consider various abstractions of the notion of "number": integers, rationals, algebraics, (hyper)reals, (hyper)complexes, quaternions, octonions, surreals, polynomials, etc., all of which are abstracted in the algebraic structure known as a ring. When studying general rings, it proves quite helpful to build up a catalog of concrete prototypical (counter)examples exhibiting various properties, to help one develop better intuition for the abstract case from experience with these prototypical examples.


You already have a problem in going from $$x-y+1=0= y(x-y+1)$$ to $$y=1$$ because you are dividing by zero. The fact that you are deducing $y=1$ from a starting point $$x+1=y$$ that assumes nothing about $x$ or $y$ should have been a signal that something was wrong.
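
To see directly that the cancelled factor is zero, just substitute the starting equation $y=x+1$ into it:

$$x - y + 1 = x - (x+1) + 1 = 0,$$

so passing from $y(x-y+1)=x-y+1$ to $y=1$ divides both sides by $0$.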


Other answerers have already pinpointed the flaw in the argument, which is division by zero. However, your comments indicate that you are still not satisfied. Let me offer some thoughts on a few further issues, in the hope of addressing what you are dissatisfied about.

(1) What's wrong with division by zero anyway?

As Zev Chonoles and Bill Dubuque have both pointed out, the first mistake is in going from

$$x-y+1 = (x-y+1)y$$

to

$$1=y$$

since $x-y+1$ is zero, because of the starting assumption that $y=x+1$. Let me add some perspective about just why this deduction is not valid. I will avoid the idea that "you can't divide by zero" or any equivalent, in an attempt to clarify the root issue. The basic question is this:

If $A\cdot b = A\cdot c$, does it follow that $b=c$? In your case, $A$ is $x-y+1$ (which equals $0$), $b$ is $1$, and $c$ is $y$, but the issue at stake is broader. If you know that two numbers $b$ and $c$ end up being the same after multiplication by some number $A$, do you know that they were the same to begin with?

I encourage you to give this question some thought for a moment before reading on.

Here's the conclusion you'll come to:

Most of the time, the answer is yes. For most choices of $A$, for example $3$, two numbers that are the same after multiplication by $A$ must have been the same to begin with. To put it another way, if two numbers are different, then after multiplication by $A$ they will still be different. To make this concrete, take $A=3$. If $b$ is not $c$, then 3 times $b$ isn't 3 times $c$ either. A technical way to express this is to say that multiplication by $3$ is an injective function.

However, there is one case where the answer is no: $A=0$. Two different numbers can be made the same by multiplication by $0$. Multiplication by zero is not injective. (In fact, all numbers are made equal after multiplication by zero: multiplication by zero is extremely not injective.) So, from $0\cdot b = 0\cdot c$ it is not safe to conclude that $b=c$. $b$ and $c$ could be different and they would still end up the same after multiplication by zero, so the fact that they ended up the same doesn't tell you that they started the same.
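
For a concrete instance, take $b=2$ and $c=7$:

$$0\cdot 2 \;=\; 0 \;=\; 0\cdot 7, \qquad\text{yet}\qquad 2\neq 7,$$

so agreement after multiplying by $0$ tells you nothing about agreement beforehand.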

This discussion is the logical back-end behind the algebraic move of "canceling a factor", i.e. going from an equation of the form $Ab=Ac$ to $b=c$. As long as $A$ is different from zero, multiplication by $A$ is injective (i.e. different before multiplication by $A$ implies different after), and therefore this move is valid. But if $A$ is zero, many values are collapsed together by multiplication by $A$, so it is not valid. Thus the algebraic move from $Ab=Ac$ to $b=c$ requires the assumption that $A\neq 0$ in order to be valid.
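
One way to see why the nonzero case is safe is to write the cancellation as an explicit multiplication by the reciprocal $A^{-1}$ (this assumes we are working somewhere, such as the rationals or the reals, where every nonzero number has a reciprocal):

$$b \;=\; A^{-1}(Ab) \;=\; A^{-1}(Ac) \;=\; c \qquad (A\neq 0).$$

No such reciprocal exists for $A=0$, which is exactly where the move breaks down.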

To specialize to your case, you have $(x-y+1)\cdot 1 = (x-y+1)\cdot y$; in the language I've been using, "$1$ and $y$ end up the same after multiplication by $x-y+1$." But you can't conclude that $1$ and $y$ were originally the same, because $x-y+1$ is zero, so multiplication by it is not injective.

(2) Couldn't you have changed the numbers around so that the factors being canceled are not zero after all, thereby rescuing the proof?

I think it would be a productive exercise to actually attempt to rewrite the proof to avoid canceling zeros.

Here's what one attempt might look like, based on the comment discussion on Zev's answer.

Since we are trying to avoid having $x-y+1$ be zero, perhaps we should start with $x+2=y$ instead. Let's run through the proof starting with that:

$$ x+2 = y$$

$$ \frac{x+2}{y}=1$$

$$\frac{x+2}{y}-1 = 0$$

$$\frac{x+2}{y}-\frac{y}{y}=0$$

$$\frac{x-y+2}{y}=0$$

$$x-y+2=0$$

$$x-y+2=\frac{x-y+2}{y}$$

$$y(x-y+2) = x-y+2$$

... hmmm. To conclude from here that $y=1$, we would have to know (apropos of the above) that $x-y+2$ is not zero. But actually $x-y+2$ is definitely zero, as the proof itself showed two lines above. So the attempt to rescue the proof by tweaking the numbers was unsuccessful: changing the numbers also changed the troublesome factor in such a way that it was still zero. (This is what Zev was getting at in his comment about $x-y+a$.)
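
In fact, the same collapse happens for any constant we might try. If the proof starts from $x+a=y$ for some number $a$, then the factor that eventually has to be cancelled is

$$x - y + a = x - (x+a) + a = 0,$$

so every choice of $a$ leads back to cancelling a factor that the proof itself has already shown to be zero.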

Now because I know the conclusion $0=1$ is false, and I see that the flaws in the proof all involve cancellations of factors equal to zero, I believe that in any attempt to tweak the proof, factors that have to be canceled will still end up being zero. But you may get something out of actually trying to make it work and seeing what happens.


Here is the problem. You claim $x=0$. Then you say $\frac{y-1}{x}=1$. You can't divide by $0$.
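
And the same cancellation error from the first half reappears in the second: since $x+1=y$, we have

$$y - x - 1 = (x+1) - x - 1 = 0,$$

so the step from $x(y-x-1)=y-x-1$ to $x=1$ also cancels a factor equal to zero.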