Is it possible to apply a derivative to both sides of a given equation and maintain the equivalence of both sides? [duplicate]

This only applies if your equation is an identity, meaning it's true for all $x$.

This is why it makes sense to differentiate a Taylor series to find one for the derivative of a function.
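
For instance, differentiating the exponential series term by term, $$ \frac{{\rm d}}{{\rm d}x}\sum_{n=0}^\infty \frac{x^n}{n!} = \sum_{n=1}^\infty \frac{x^{n-1}}{(n-1)!} = \sum_{n=0}^\infty \frac{x^n}{n!}, $$ recovers the series for $(e^x)' = e^x$, precisely because the identity $e^x = \sum_{n=0}^\infty x^n/n!$ holds for every $x$.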

However, this does not work for an equation that holds only for some $x$. For example, you cannot differentiate $x^2 = 4$ to obtain $2x = 0$ and expect the solutions of the latter to be the solutions of the former.
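
To spell that example out: $x^2 = 4$ holds exactly for $x = \pm 2$, while the differentiated equation $2x = 0$ holds only for $x = 0$, which does not satisfy the original equation at all. Differentiating replaced the solution set $\{-2, 2\}$ with the unrelated set $\{0\}$.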


You have shown that starting from that equation (which holds where? in some interval I guess), you get an expression for $t_1.$ That looks absurd, doesn't it? We have the constant $t_1$ equal to a function of $x?$ Dubious to say the least.

What happened is that you assumed the equation holds in some interval. It doesn't. If you assume that a false assertion is true, then you can derive any weird conclusion you like. (I heard this story a while back: Bertrand Russell once told his philosophy students that assuming $0=1,$ you could prove anything. A student after class challenged him: "OK, assuming $0=1,$ prove that you are god." Russell: "OK, it follows that $1=2.$ Since god and I are two, it follows that we are one, therefore I am god." I hope that story is true.)


Generally, when stating an equation with symbols such as $x$ or $r$, you need to be very specific about what you actually mean by this equation. For example, when we write $$ \sec^2 x = 1 + \tan^2 x \qquad \text{for }-\frac\pi2 < x < \frac\pi2 $$ we mean that the expression on either side of the equation is a function of $x$ over the entire open interval $\left(-\frac\pi2, \frac\pi2\right)$. In that case it is completely appropriate to differentiate both sides, because the equation asserts that the left and right sides are the exact same function of $x$ over some interval of the real number line, so we can take the derivative of this function with respect to $x$ (provided we are inside that interval and provided the derivative exists). Since it's the same function on either side of the equation, all you get when you differentiate both sides is the derivative of that function, which of course is equal to itself even if it happens to be written a little differently in one place than in the other.
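
To see this in action, differentiate each side of the identity above: $$ \frac{{\rm d}}{{\rm d}x}\sec^2 x = 2\sec^2 x\tan x, \qquad \frac{{\rm d}}{{\rm d}x}\left(1+\tan^2 x\right) = 2\tan x\sec^2 x, $$ and the two results are the same function on $\left(-\frac\pi2, \frac\pi2\right)$, just as the argument predicts.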

On the other hand, if we assert that $r$ and $k$ are known constants and $x$ is unknown in $$ e^{rx} = kx^2 - 1, $$ we now have an equation that can be solved for $x$. This particular equation is true only at a few isolated values of $x$ (how many depends on the values of $k$ and $r$); other equations may have three solutions, $17$ solutions, or no solutions. In the introduction of this equation, however, there was never any implication that $e^{rx}$ and $kx^2 - 1$ are the same function of $x$ over any open interval. Since they are not the same function of $x$, there is no reason to think that their derivatives will be equal, so taking the derivatives of both sides and setting them equal is not a legitimate thing to do.
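
To make this concrete with one (arbitrary) choice of constants, take $r = 0$ and $k = 1$: the equation becomes $1 = x^2 - 1$, which is satisfied exactly at $x = \pm\sqrt2$, but differentiating both sides gives $0 = 2x$, whose only solution $x = 0$ does not satisfy the original equation.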

In your question, you assert the equation $$2^{(x+1)^2}=k+t_1(x^2+r),$$ then propose to "find $t_1$ in terms of $x$". It is not 100% clear what the equation was supposed to mean, even after clarifications in several comments, but it seems likely that the equation was supposed to represent the idea that $k$, $t_1$, and $r$ are known constants and that $x$ is an unknown value for which you wish to solve. In that interpretation, you have something much like the example $e^{rx} = kx^2 - 1$ above, that is, you do not have the same function on two sides of an equation, and there is no justification for setting the derivatives of both sides equal.

But let's suppose for a moment that you did actually mean that the things on each side of the equation are the same function. This clearly cannot be true if $k$, $r$, and $t_1$ are all constants, but let's suppose that only $k$ and $r$ are given as constants. Then it does make sense to say, let's solve for $t_1$ in terms of $x$, provided that you mean that $t_1$ is a function of $x$ such that the entire right-hand side of the equation actually is the same function of $x$ as the left-hand side of the equation.

But if you make that interpretation, then the assertion $$ \frac{{\rm d} }{{\rm d}x}( k+t_1(x^2+r)) = t_1 \cdot 2x $$ is simply wrong, because your interpretation of the problem was that $t_1$ is a function of $x$, and therefore you cannot treat $t_1$ as a constant when differentiating $t_1(x^2+r)$. Instead, you need to apply the product rule. The fact that going down your path ends up setting $t_1$ equal to some non-constant function of $x$ indicates that it was not OK to ignore the derivative of $t_1$ itself with respect to $x$.
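
Under that reading, with $t_1 = t_1(x)$, the product rule gives $$ \frac{{\rm d}}{{\rm d}x}\bigl( k+t_1(x)(x^2+r)\bigr) = t_1'(x)\,(x^2+r) + 2x\, t_1(x), $$ and the extra term $t_1'(x)\,(x^2+r)$ is exactly what the computation in the question drops.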

Of course, there is a much simpler way to solve for $t_1$ in terms of $x$, assuming $t_1$ is a function of $x$: just manipulate the original equation (without taking derivatives) to isolate $t_1$. Using standard techniques of high-school algebra to eliminate $k$ and then $x^2+r$ from the right-hand side of the equation, we get $$ \frac{2^{(x+1)^2} - k}{x^2+r}=t_1, $$ and that's $t_1$ expressed as a function of $x$ for any constants $k$ and $r$.
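
Substituting this expression back in confirms the point: $$ k + \frac{2^{(x+1)^2} - k}{x^2+r}\,(x^2+r) = 2^{(x+1)^2}, $$ so with this choice of $t_1(x)$ the two sides really are the same function wherever $x^2+r \neq 0$, and no differentiation was needed.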


EDIT: As pointed out in the comments, no solution would work for all $x$, since the left-hand side grows exponentially while the right-hand side is a polynomial in $x$. This method, however, would give a solution at a given point $x$.

That substitution is correct, but the argument is not fully complete. What you have found at this point are two functions $f$ and $g$ such that:

$$(f-g)' = 0$$

At this point, you can integrate the result and get that $f-g=c$, where $c\in\Bbb R$ is a constant. Since your equation allows you to choose that constant (i.e. to choose $c=k+t_1r$), any choice of $k$ and $r$ satisfying $c=k+t_1r$ would be a legitimate solution.

Also, note that the other so-called counterexamples, in the comments and the other post, work only because they do not allow you to choose the constant $c$. If they did, then they too would give a solution to the problem.


What is a derivative? Let $f$ be defined on an open interval $I \subset \mathbb{R}$ and fix $c \in I$. Then $f$ is differentiable at $c$ if and only if there exists a real number, denoted $f'(c)$, such that $f(x) = f(c) + (x-c)f'(c) + o(x-c)$ as $x \to c$. It is easy to see, therefore, that the derivative is a "local" property: it depends not only on the value of $f$ at $c$, but on the values of $f$ in a neighbourhood of $c$.
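
As a concrete instance of this definition, take $f(x) = x^2$ and any $c$: $$ x^2 = c^2 + 2c(x-c) + (x-c)^2, $$ and since $(x-c)^2 = o(x-c)$ as $x \to c$, the definition gives $f'(c) = 2c$. Checking the $o(x-c)$ condition requires looking at $x$ near $c$, not just at $c$ itself, which is exactly the local character described above.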

What does this have to do with your question, you ask?

Suppose $f$ and $g$ are differentiable. If $f(a) = g(a)$ for some $a \in I$, it need not follow that $f'(a) = g'(a)$. For example, the equation $x + 1 = 2$ has the solution $x=1$, but the derivatives of $f(x) := x+1$ and $g(x) := 2$, namely $f' \equiv 1$ and $g' \equiv 0$, never coincide.

On the other hand, if $f(a) = g(a)$ for some $a \in I$ and for each $a$ satisfying $f(a) = g(a)$ there is a $\delta$ such that $f(x) = g(x)$ for all $x \in (a - \delta, a + \delta)$, then we can definitely say $f'(a) = g'(a)$ for all such $a$.

Of course, if $f$ and $g$ are differentiable and equal everywhere, then the condition above clearly holds, and hence $f' = g'$ for all $x \in \mathbb{R}$.