Justifying the "Physicist's method" for ODEs using differential forms
Solution 1:
The equality in $dy = f\, dx$ is very misleading, because strictly speaking it's not true. To see why, note that $dy$ is a differential $1$-form defined on $\Bbb{R}^2$, which means for each $p \in \Bbb{R}^2$, $dy_p : T_p \Bbb{R}^2 \to \Bbb{R}$ is linear. Similarly, $dx$ is also a differential $1$-form on $\Bbb{R}^2$. Let's for the sake of concreteness say $f: \Bbb{R}^2 \to \Bbb{R}$ is defined on all of $\Bbb{R}^2$, so that $f \, dx$ is still a $1$-form on $\Bbb{R}^2$.
So, if we just write $dy = f \, dx$, this means that the $1$-form on the LHS must equal the $1$-form on the RHS. But this is just not the case, because it amounts to saying that $dy$ and $dx$ are linearly dependent over the module $C^{\infty}(\Bbb{R}^2)$. Just to really drive this point home, fix a point $p \in \Bbb{R}^2$; if that equality were true, it would mean $dy_p = f(p)\, dx_p$, where the equality is as elements of $T_p^*(\Bbb{R}^2)$ (the dual of the tangent space, i.e. the cotangent space). But this is of course absurd, because if you evaluate both sides on the tangent vector $\dfrac{\partial}{\partial y}\bigg|_{p} \in T_p\Bbb{R}^2$, you get the absurd equality $1 = 0$. Put differently, the statement $dy = f \, dx$ is like saying the row vector $(0 , 1)$ equals $\lambda \cdot (1,0)$ for some $\lambda \in \Bbb{R}$... which is plain wrong.
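Since covectors at a fixed point are just linear functionals, the $1 = 0$ contradiction can be checked in a few lines. Below is a minimal sketch (the helper `apply` and the value chosen for $f(p)$ are my own illustrative choices): covectors at $p$ are stored as coefficient pairs in the basis $(dx_p, dy_p)$, and tangent vectors as pairs in the basis $\left(\frac{\partial}{\partial x}\big|_p, \frac{\partial}{\partial y}\big|_p\right)$.

```python
# Covectors at a point p of R^2, written in the basis (dx_p, dy_p),
# act on tangent vectors written in the basis (d/dx|_p, d/dy|_p).
def apply(covector, vector):
    """Pair a covector a*dx + b*dy with a tangent vector u*d/dx + v*d/dy."""
    a, b = covector
    u, v = vector
    return a * u + b * v

f_p = 3.0            # any value of f(p); the contradiction is independent of it
dy_p = (0.0, 1.0)    # dy_p    = 0*dx + 1*dy
f_dx_p = (f_p, 0.0)  # (f dx)_p = f(p)*dx + 0*dy

d_dy = (0.0, 1.0)    # the tangent vector d/dy|_p

# Evaluating both sides of "dy = f dx" on d/dy|_p gives 1 vs 0.
print(apply(dy_p, d_dy), apply(f_dx_p, d_dy))  # 1.0 0.0
```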
Now that I've hopefully convinced you that the equation taken literally is false, how do we interpret it? Well, the last sentence of your question gives a clue; it says:
"... by showing that both differential forms agree in every point c(t) on the tangent space."
But the tangent space of what? $\Bbb{R}^2$? Clearly not, as I've just shown above. What is actually meant is that these two differential forms agree at every point $c(t) \in \Bbb{R}^2$, when restricted to the (one-dimensional) subspace $T_{c(t)}\left(\text{image}(c) \right) \subset T_{c(t)} \Bbb{R}^2$. But what is the tangent space to the image of $c$? It shouldn't be too hard to convince yourself that if you write $c(t) = (t, c_2(t))$, then the tangent space to the image equals the linear span of the (non-zero) vector \begin{align} \xi_{c(t)} :=\dfrac{\partial}{\partial x}\bigg|_{c(t)} + c_2'(t) \dfrac{\partial}{\partial y}\bigg|_{c(t)} \in T_{c(t)} \Bbb{R}^2 \end{align} (i.e. $c(t) = (t, c_2(t))$ implies $c'(t) = (1, c_2'(t))$, so the tangent space is just the span of this vector).
So, we have to show that for all $t \in [a,b]$ and for all $\zeta_{c(t)} \in T_{c(t)} \left( \text{image}(c)\right)$, \begin{align} dy_{c(t)}(\zeta_{c(t)}) &= f(c(t)) \cdot dx_{c(t)}(\zeta_{c(t)}) \end{align} But notice that since the tangent space to the image is one-dimensional, it suffices to verify equality when evaluated on the basis vector $\xi_{c(t)}$ defined above; i.e. it's enough to prove \begin{align} dy_{c(t)}(\xi_{c(t)}) &= f(c(t)) \cdot dx_{c(t)}(\xi_{c(t)}). \end{align} This is straightforward: \begin{align} dy_{c(t)}(\xi_{c(t)}) &= dy_{c(t)}\left( \dfrac{\partial}{\partial x}\bigg|_{c(t)} + c_2'(t) \dfrac{\partial}{\partial y}\bigg|_{c(t)}\right) \\ &= c_2'(t) \\ &= f(c(t)) \tag{$c$ solves the ODE} \\ &= f(c(t)) \cdot 1 \\ &= f(c(t))\cdot dx_{c(t)}(\xi_{c(t)}). \end{align}
So, this completes the proof.
Note that another way of stating the equality is that $c^*(dy) = c^*(f \, dx)$; i.e. when you pull back the two $1$-forms on $\Bbb{R}^2$ via the curve $c$, you get two $1$-forms, now defined on $[a,b]$, and it is these two forms that are equal.
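To make this concrete, here is a small sketch (the ODE $y' = y$ and the curve $c(t) = (t, e^t)$ are my own example, not from the answer above): the pullbacks $c^*(dy) = c_2'(t)\, dt$ and $c^*(f\, dx) = f(c(t))\, x'(t)\, dt$ are $1$-forms on the parameter interval, so it suffices to compare their $dt$-coefficients.

```python
# Sketch (my example): check c*(dy) = c*(f dx) pointwise for the
# ODE y' = y, i.e. f(x, y) = y, with the solution curve c(t) = (t, e^t).
import math

def f(x, y):
    return y

def c(t):
    return t, math.exp(t)     # a solution curve of y' = f

def dc(t):
    return 1.0, math.exp(t)   # c'(t) = (x'(t), y'(t)) = (1, c_2'(t))

def max_gap(ts=(0.0, 0.5, 1.0)):
    """Largest difference between the dt-coefficients of c*(dy),
    namely y'(t), and of c*(f dx), namely f(c(t)) * x'(t)."""
    return max(abs(dc(t)[1] - f(*c(t)) * dc(t)[0]) for t in ts)

print(max_gap())  # 0.0 -- the two pullbacks agree along the curve
```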
Solution 2:
I think I may be the 'physicist' in question, but I'll give it a go.
$dy$ is a one-form on $\mathbb{R}^2$, so $dy_p: T_p \mathbb{R}^2\to \mathbb{R}$, i.e. it takes $1$-simplexes in a space isomorphic to $T_p \mathbb{R}^2$ and gives real values (via integration). In what follows, $p\in\mathcal{D}$ is a fixed point and $\mathcal{D}\subseteq\mathbb{R}^2$ is the region where the solution of $y'=f$ is defined. The same definitions apply to $\left(f\,dx\right)_p$.
Next, since linear functionals can be linearly combined, we can define $dq_p=\left(dy-fdx\right)_p$. We want to prove that this new functional is zero.
Now, consider the integral along a sufficiently short line segment ($1$-simplex) $\sigma_p=(p_0,p_1)$ that contains $p$ and lies along the solution curve. Let $\phi_p:[0,1]\to\mathcal{D}$ be a parametrization such that (with slight abuse of notation) $\phi_p\left([0,1]\right)=\sigma_p$. Then:
$\int_{\sigma_p} dq_p=\int_{\phi_p[0,1]} dq_p=\int_0^1\, \phi_p^*dq_p=\int_0^1 \left(\frac{d\bar{y}}{ds}-f\left(\phi\left(s\right)\right)\frac{d\bar{x}}{ds}\right)ds $
where $\bar{y}\left(s\right)=y\left(\phi\left(s\right)\right)$ and similarly for $\bar{x}$. The good thing is that we are no longer dealing with forms, so the ordinary chain rule applies:
$f\left(\phi\left(s\right)\right)\frac{d\bar{x}}{ds}=\frac{dy}{dx}\bigg|_{\phi\left(s\right)}\frac{d}{ds}x\left(\phi\left(s\right)\right)=\frac{d}{ds}\bar{y}$

(the first equality uses the ODE $\frac{dy}{dx}=f$, which holds because $\sigma_p$ lies along a solution curve), so:
$\int_{\sigma_p} dq_p=0$
Thus we have a functional that gives zero for every 'vector' (simplex) we apply it to (via integration). It must be that $dq_p=0$ (how else would you define the zero functional?), which proves what you wanted.
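As a numerical sanity check of this argument, one can approximate $\int_{\sigma_p} (dy - f\,dx)$ along a segment of a solution curve. The sketch below uses my own example $y' = y$ with the solution $y = e^x$ through $(0,1)$ (the segment endpoints and step count are arbitrary choices); the Riemann sum over the graph of the solution comes out numerically zero, as claimed.

```python
# Sketch (my example): numerically check that the integral of
# dq = dy - f dx vanishes along a short piece of a solution curve.
# ODE: y' = y, so f(x, y) = y; the solution through (0, 1) is y = e^x.
import math

def f(x, y):
    return y

def solution(x):
    return math.exp(x)   # solves y' = y with y(0) = 1

def integral_dq(x0=0.0, x1=0.2, n=1000):
    """Midpoint Riemann sum of dy - f dx along the graph of the solution,
    split into n subintervals between x = x0 and x = x1."""
    h = (x1 - x0) / n
    total = 0.0
    for i in range(n):
        a = x0 + i * h
        b = a + h
        dy = solution(b) - solution(a)   # increment of y over the piece
        dx = b - a                       # increment of x over the piece
        m = (a + b) / 2                  # midpoint sample for f
        total += dy - f(m, solution(m)) * dx
    return total

print(abs(integral_dq()) < 1e-8)  # True: the integral is ~0
```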