Physicists, but not mathematicians, can multiply both sides by $dx$ - why?
Solution 1:
Suppose you have the equation $$f(x) = g(x).$$ You could "multiply both sides by $dx$" $$f(x) \, dx = g(x) \, dx$$ and then integrate over some interval $$\int_a^b f(x) \, dx = \int_a^b g(x) \, dx.$$ However, unless you have learned about differential forms, the second equation above is meaningless. This can easily be avoided by simply integrating both sides of $f(x) = g(x)$ with respect to $x$ just as you learned in calculus.
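As a concrete illustration (the particular identity here is my own choice, not part of the original argument): take $f(x) = 2\sin x\cos x$ and $g(x) = \sin 2x$, which are equal for all $x$. Integrating both sides with respect to $x$ over $[0, \pi/2]$ gives $$\int_0^{\pi/2} 2\sin x\cos x \, dx = \Big[\sin^2 x\Big]_0^{\pi/2} = 1 \qquad \text{and} \qquad \int_0^{\pi/2} \sin 2x \, dx = \Big[-\tfrac{1}{2}\cos 2x\Big]_0^{\pi/2} = 1,$$ exactly the same statement the "$dx$ shorthand" would have produced, with no need to assign a meaning to $dx$ on its own.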
Now consider the differential equation $$f(y) \frac{dy}{dx} = g(x).$$ Here, many people will "multiply by $dx$" to get $$f(y) \, dy = g(x) \, dx$$ and then integrate both sides. Again, without the machinery of differential forms, this statement is meaningless. We can avoid this by integrating the original equation with respect to $x$: $$\int f(y) \frac{dy}{dx} \, dx = \int g(x) \, dx$$ or, in different notation, $$\int f(y(x))y'(x)\, dx = \int g(x) \, dx.$$ Then, as long as certain hypotheses are satisfied, the change-of-variables (substitution) theorem says that this is just $$\int f(y) \, dy = \int g(x) \, dx,$$ so we are back to where the first method led us.
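To see the substitution step on a specific equation (again, an example of my own choosing rather than one from the original answer), take $f(y) = y$ and $g(x) = x$, i.e. $y \, \frac{dy}{dx} = x$. Integrating with respect to $x$ and substituting $y = y(x)$ gives $$\int y(x)\, y'(x)\, dx = \int x \, dx \quad\Longrightarrow\quad \int y \, dy = \int x \, dx \quad\Longrightarrow\quad \frac{y^2}{2} = \frac{x^2}{2} + C,$$ which is precisely what the informal recipe $y \, dy = x \, dx$ would have given, now justified by the change-of-variables theorem rather than by treating $dx$ as a standalone object.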
The point of this is that manipulations involving "multiplying by $dx$" are usually shorthand for some more "rigorous" method. However, if you are aware of (and comfortable with) the mathematics going on behind the scenes, then there is no loss of rigor. "Real mathematicians" don't frown upon using these kinds of manipulations themselves; they just frown upon students using them without knowing why they work.
Solution 2:
Because it is considered non-rigorous. Calculating with infinitesimals is something physicists and mathematicians have done since at least the time of Fermat, and arguably long before that. But it always felt awkward to work with infinitesimals, as they seemed to lead to contradictory statements if not manipulated carefully.
In the 19th century, under the impulse of Cauchy, calculus began to be formalized, and the calculus of infinitesimals was replaced by the more precise and rigorous $\epsilon$-$\delta$ methodology. Physicists didn't take much notice, however, since calculating with differentials still worked and was more practical.
In the 1960s, however, Abraham Robinson developed a rigorous approach to manipulating infinitesimals, now called non-standard analysis. At the time it was controversial; nowadays, I think it is more accepted. That doesn't mean physicists really care much, but in a way Robinson vindicated their approach, although it still takes some work to make things rigorous.