When not to treat dy/dx as a fraction in single-variable calculus?

While I know that $\frac{dy}{dx}$ isn't a fraction and shouldn't be treated as one, in many situations things like multiplying both sides by $dx$ and integrating, cancelling terms, or writing $\frac{dy}{dx} = \frac{1}{\frac{dx}{dy}}$ work out just fine.

So I wanted to know: are there any particular cases (in single-variable calculus) to look out for, where treating $\frac{dy}{dx}$ as a fraction gives incorrect answers, particularly at an introductory level?

Note: Please provide specific instances and examples where treating $\frac{dy}{dx}$ as a fraction fails.


This works because of the extraordinary power of Leibniz's differential notation, which allows you to treat differentials as fractions while solving problems. The justification for this mechanical process comes from the following general result:

Let $y=h(x)$ be any solution of the separated differential equation
$$A(y)\frac{dy}{dx} = B(x) \tag{i}$$
such that $h'(x)$ is continuous on an open interval $I$, where $B(x)$ and $A(h(x))$ are assumed to be continuous on $I$. If $g$ is any primitive of $A$ (i.e. $g'=A$) on $I$, then $h$ satisfies the equation
$$g(y)=\int B(x)\,dx + c \tag{ii}$$
for some constant $c$. Conversely, if $y$ satisfies (ii), then $y$ is a solution of (i).
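For instance, here is a minimal worked example of the mechanics this theorem justifies, solving $y\,\frac{dy}{dx} = x$ by "multiplying through by $dx$":
$$y\,\frac{dy}{dx} = x \;\Longrightarrow\; \int y\,dy = \int x\,dx \;\Longrightarrow\; \frac{y^2}{2} = \frac{x^2}{2} + c.$$
Here $A(y)=y$, $B(x)=x$, and $g(y)=\frac{y^2}{2}$ is a primitive of $A$, so the fraction-style manipulation is exactly equation (ii) in disguise.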

Also, it would be advisable to write $\dfrac{dy}{dx}=\dfrac{1}{\dfrac{dx}{dy}}$ only when the function $y(x)$ is invertible (at least locally) and $\dfrac{dy}{dx}\neq 0$.
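A standard example (mine, added for illustration) of where the identity breaks down: take $y = x^3$. Then $\frac{dy}{dx} = 3x^2$, which vanishes at $x=0$; the inverse $x = y^{1/3}$ exists but is not differentiable at $y=0$, so $\frac{dx}{dy}$ does not exist there and
$$\frac{dy}{dx}\bigg|_{x=0} = 0 \neq \frac{1}{\frac{dx}{dy}}\bigg|_{y=0},$$
since the right-hand side is undefined.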

Say you are asked to find the equation of the normal to a curve $y(x)$ at a particular point $(x_1,y_1)$. In general you should write the slope of the normal as $-\dfrac{1}{\dfrac{dy}{dx}}\bigg|_{(x_1,y_1)}$ rather than simply writing it as $-\dfrac{dx}{dy}\big|_{(x_1,y_1)}$, since the latter implicitly assumes the invertibility of the function (a check that would be redundant here). The numerical calculations will, however, come out the same either way.
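A quick illustrative computation (my example): for $y = x^2$ at $(1,1)$,
$$\frac{dy}{dx}\bigg|_{(1,1)} = 2x\big|_{x=1} = 2, \qquad \text{slope of normal} = -\frac{1}{2},$$
so the normal line is $y - 1 = -\frac{1}{2}(x-1)$, even though $y = x^2$ is not invertible on all of $\mathbb{R}$.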

EDIT.

In single-variable calculus, Leibniz notation ensures that no problem will arise if one treats the differentials as fractions; the manipulations work out beautifully. But explicitly calling them 'fractions' in an exam or test could cost one the all-important marks: one could be criticised for not being formal enough in one's approach.

Also have a look at this answer, which explains the likely pitfalls of the fraction treatment.


In calculus we have this relationship between differentials: $dy = f^{\prime}(x)\,dx$, which could be written $dy = \frac{dy}{dx}\,dx$. If you have $\frac{dy}{dx} = \sin x$, then it's legal to multiply both sides by $dx$. On the left you have $\frac{dy}{dx}\,dx$. When you replace it with $dy$ using the above relationship, it looks just as if you had cancelled the $dx$'s. Such a replacement is so much like division that we can hardly tell the difference.
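Spelled out for the example above:
$$\frac{dy}{dx} = \sin x \;\Longrightarrow\; dy = \sin x \, dx \;\Longrightarrow\; \int dy = \int \sin x \, dx \;\Longrightarrow\; y = -\cos x + C.$$
Every step is a legitimate substitution using $dy = \frac{dy}{dx}\,dx$, even though it looks like fraction arithmetic.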

However, if you have an implicitly defined function $f(x,y) = 0$, the total differential is $f_x \;dx + f_y \;dy = 0$. "Solving" for $\frac{dy}{dx}$ gives $$\frac{dy}{dx} = -\frac{f_x}{f_y} = -\frac{\partial f / \partial x}{\partial f /\partial y}.$$ This is the correct formula for implicit differentiation, which we arrived at by treating $\frac{dy}{dx}$ as a fraction. But then look at the last fraction: if you "cancel" the $\partial f$'s as though they were numbers, it yields the equation $$\frac{dy}{dx} = -\frac{dy}{dx}.$$ That pesky minus sign sneaks in because we reversed the roles of $x$ and $y$ between the two partial derivatives. Maddening.
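To see both sides of this concretely, take the unit circle $f(x,y) = x^2 + y^2 - 1 = 0$. The formula gives
$$f_x = 2x, \quad f_y = 2y, \qquad \frac{dy}{dx} = -\frac{f_x}{f_y} = -\frac{x}{y},$$
which matches differentiating $x^2 + y^2 = 1$ directly ($2x + 2y\,\frac{dy}{dx} = 0$). But naively cancelling the $\partial f$'s in $-\frac{\partial f/\partial x}{\partial f/\partial y}$ would "give" $-\frac{dy}{dx} = \frac{x}{y}$, which has the wrong sign: the fraction treatment fails for partial derivatives.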