If you're comfortable with differential forms, here's the story as best I understand it.

The essential point is that a function $f$ of a single variable can be viewed as defining a path $\gamma(t) = (t,f(t))$ in the plane $\mathbb{R}^2$, so that if you can nicely rewrite a differential equation for $f$ in terms of $\gamma$, you can now use tools from differential geometry to get a better handle on your original equation. Indeed, consider the general first order linear differential equation for an unknown function $f$: $$ a(t)f^\prime(t) + b(t)f(t) + c(t) = 0. $$ If you define the path $\gamma$ in $\mathbb{R}^2$ by $\gamma(t) = (x(t),y(t)) := (t,f(t))$, then you can rewrite your differential equation as $$ \gamma^\ast\omega = 0 $$ for the $1$-form $$ \omega := (b(x)y+c(x))dx+a(x)dy, $$ since $$ \gamma^\ast\omega = (b(x(t))y(t)+c(x(t)))x^\prime(t)dt + a(x(t))y^\prime(t)dt = (a(t)f^\prime(t) + b(t)f(t) + c(t))dt. $$
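For a quick sanity check, take the toy equation $tf^\prime(t) + f(t) = 0$, i.e., $a(x) = x$, $b(x) = 1$, $c(x) = 0$. Here $$ \omega = y\,dx + x\,dy, $$ and indeed $\gamma^\ast\omega = (f(t) + t f^\prime(t))\,dt$, which vanishes precisely when $f$ solves the equation.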

Now, if $\omega$ is already exact, i.e., $\omega = dF$ for some scalar function $F$, then $$ 0 = \gamma^\ast \omega = \gamma^\ast dF = d(F \circ \gamma), $$ so that $F \circ \gamma$ is constant, and hence $F(t,f(t)) = C$ gives an implicit solution to the original equation. However, for $\omega$ to be exact, it must be closed, i.e., $d\omega = 0$, which in this concrete case yields $$ 0 = d\omega = (a^\prime(x)-b(x)) dx \wedge dy, $$ or equivalently, $$ a^\prime(x) = b(x). $$
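In the toy example above, $a^\prime(x) = 1 = b(x)$, so $\omega = y\,dx + x\,dy$ is closed, and in fact exact: $\omega = d(xy)$. Taking $F(x,y) = xy$, the solutions are implicitly given by $tf(t) = C$, i.e., $f(t) = C/t$ on any interval avoiding $t = 0$, which indeed satisfies $tf^\prime(t) + f(t) = 0$.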

So, what do you do if $\omega$ isn't even closed, i.e., if $a^\prime(x) \neq b(x)$? Well, you can try to find a nowhere-vanishing function $\mu$, an integrating factor, such that $\mu\omega$ is closed, i.e., $$ 0 = d(\mu \omega ) = d\mu \wedge \omega + \mu\, d\omega. $$ If you can find such a function $\mu$, then $\gamma^\ast \omega = 0$ if and only if $\gamma^\ast (\mu \omega) = 0$ (since $\mu$ is nowhere vanishing), and now $\mu \omega$ is closed. If, moreover, $\mu \omega$ is defined on a region in the plane over which every closed $1$-form is exact (e.g., if the region is contractible), then $\mu \omega$ is exact, i.e., $\mu \omega = dG$ for some scalar function $G$, and $G(t,f(t))=C$ implicitly defines the solutions of your equation.

Now, in our concrete case, $$ 0 = d\mu \wedge \omega + \mu\, d\omega\\ = (\mu_x(x,y)\, dx + \mu_y(x,y)\, dy) \wedge ((b(x)y+c(x))dx+a(x)dy) + \mu(x,y)(a^\prime(x)-b(x))dx \wedge dy\\ = (a(x)\mu_x(x,y) - (b(x)y+c(x))\mu_y(x,y) + (a^\prime(x)-b(x))\mu(x,y))dx \wedge dy, $$ or equivalently, $$ a(x)\mu_x(x,y) - (b(x)y+c(x))\mu_y(x,y) + (a^\prime(x)-b(x))\mu(x,y) = 0. $$ In the special case where $a(x) = 1$, i.e., your original ODE was $f^\prime(t) + b(t)f(t) + c(t) = 0$, if you assume that $\mu = \mu(x)$, so that $\mu_y = 0$, then this reduces to $$ \mu^\prime(x) - b(x)\mu(x) = 0, $$ yielding the usual integrating factor $$ \mu(x) = \mu(x_0)e^{\int_{x_0}^x b(s)ds} $$ from the textbooks.
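To see the whole procedure in a concrete case, consider $f^\prime(t) + f(t) - 1 = 0$, i.e., $a(x) = 1$, $b(x) = 1$, $c(x) = -1$, so that $$ \omega = (y-1)\,dx + dy, \qquad d\omega = -\,dx \wedge dy \neq 0. $$ The recipe above gives $\mu(x) = e^x$, and indeed $$ \mu\omega = e^x(y-1)\,dx + e^x\,dy = d\bigl(e^x(y-1)\bigr), $$ so $G(x,y) = e^x(y-1)$ and the solutions are implicitly given by $e^t(f(t)-1) = C$, i.e., $f(t) = 1 + Ce^{-t}$.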

Finally, let me just point out that this story can be told in considerable generality. Let $M$ be a smooth manifold (e.g., $M$ an open subset of $\mathbb{R}^2$, as in the usual ODE textbooks), let $\omega \in \Omega^1(M)$ be a $1$-form, and consider the first order differential equation $$ \gamma^\ast \omega = 0 $$ for an unknown smooth path $\gamma : (a,b) \to M$. If $\omega$ is already exact, i.e., $\omega = dF$ for some $F \in C^\infty(M)$, then $$ 0 = \gamma^\ast \omega = \gamma^\ast dF = d(F \circ \gamma), $$ so that $F \circ \gamma$ is constant, and hence $F(\gamma(t)) = C$ gives an implicit equation for $\gamma$. For $\omega$ to be exact, it must be closed, i.e., $d\omega = 0$. If it isn't even closed, you can try to find a nowhere-vanishing scalar function $\mu \in C^\infty(M)$, your integrating factor for $\omega$, such that $\mu\omega$ is closed, i.e., $$ 0 = d(\mu \omega) = d\mu \wedge \omega + \mu\, d\omega. $$ Then, if $H^1(M,\mathbb{R}) = 0$, so that every closed $1$-form is exact, you can find a function $G \in C^\infty(M)$ such that $\mu\omega = dG$, in which case $G(\gamma(t)) = C$ defines an implicit equation for $\gamma$. In particular, finding $\mu$ involves solving the potentially very pesky PDE $$ d\mu \wedge \omega + \mu\, d\omega = 0, $$ and all the "voodoo" consists of various special cases (traditionally for $M$ an open subset of $\mathbb{R}^2$) where this PDE is actually tractable.
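The topological hypothesis really is doing work here: on $M = \mathbb{R}^2 \setminus \{(0,0)\}$, the standard example $$ \omega = \frac{x\,dy - y\,dx}{x^2+y^2} $$ is closed but not exact (its integral around the unit circle is $2\pi$), so without $H^1(M,\mathbb{R}) = 0$ you can only conclude that $\mu\omega$ is locally of the form $dG$.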


A first-order ODE (in the plane) may be viewed as an equation of the form $$ M\, dx + N\, dy = 0, $$ with $M$ and $N$ continuously-differentiable functions defined in some region $U$ of the plane. A solution of this equation is a curve $\bigl(x(t), y(t)\bigr)$ in $U$ such that $M x'(t) + N y'(t) = 0$ for all $t$.

One is "naturally led to ask" whether the equation can be written in the form $df = 0$ for some twice continuously-differentiable function $f$; if so, the ODE is said to be exact, and the solutions are (parametrizations of) level curves of $f$.

Exactness amounts to existence of an $f$ such that $M = f_{x}$ and $N = f_{y}$. By equality of mixed partial derivatives, the integrability condition $$ \frac{\partial M}{\partial y} = f_{xy} = f_{yx} = \frac{\partial N}{\partial x} $$ is necessary.
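For instance, the equation $2xy\,dx + x^2\,dy = 0$ satisfies the condition, since $\partial(2xy)/\partial y = 2x = \partial(x^2)/\partial x$; here $f(x,y) = x^{2}y$ works, and the solutions are the level curves $x^{2}y = C$.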

For general functions $M$ and $N$, the preceding condition is not satisfied. Undaunted, one is naturally led to look for an integrating factor, namely a function $\phi$ for which the equivalent ODE $$ (\phi M)\, dx + (\phi N)\, dy = 0 $$ is exact.

For example, $e^{\int a(x)\, dx}$ is an integrating factor for the linear equation $$ \frac{dy}{dx} + ay = b,\quad\text{or}\quad (ay - b)\, dx + dy = 0. $$
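To check this directly (with $a$ and $b$ functions of $x$): writing $\phi = e^{\int a\,dx}$, so that $\phi' = a\phi$, we have $$ \frac{\partial}{\partial y}\bigl[\phi\,(ay-b)\bigr] = a\phi = \frac{\partial}{\partial x}\bigl[\phi\bigr], $$ so $(\phi(ay-b))\,dx + \phi\,dy = 0$ is exact; in fact it is $d\Bigl(\phi y - \int \phi b\,dx\Bigr) = 0$, which recovers the usual formula $y = \phi^{-1}\bigl(\int \phi b\,dx + C\bigr)$.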

If memory serves, Differential Equations and Their Applications by Martin Braun contains a clear, detailed exposition of integrating factors in general.