Given an exact differential $df=yz\,dx+xz\,dy+(xy+a)\,dz$, why must we integrate each term independently to find the parent function $f\,$?

In other words, why can't I integrate the whole equation in one go, like this?

$$\begin{align}f=\int df&=\int yz\,dx +\int xz\,dy+\int xy\,dz+\int a\,dz\\&=xyz+xyz+xyz+az+C\\&=3xyz + az +C\end{align}$$

Strangely, this is remarkably close to the correct answer, which is

$$f=xyz+az+C$$


I know that the differential $$df=yz\,dx+xz\,dy+(xy+a)\,dz\tag{a}$$ can be written as $$df=\frac{\partial f}{\partial x}\,dx+\frac{\partial f}{\partial y}\,dy+\frac{\partial f}{\partial z}\,dz\tag{b}$$

Matching equations $(\mathrm{a})$ and $(\mathrm{b})$ leads to $3$ more equations, namely: $$\frac{\partial f}{\partial x}=yz\tag{1}$$ $$\frac{\partial f}{\partial y}=xz\tag{2}$$ $$\frac{\partial f}{\partial z}=xy+a\tag{3}$$
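For concreteness, here is a quick SymPy check (treating $a$ as a constant symbol; the variable names are my own) that the given differential really is exact, i.e. that its mixed partial derivatives agree:

```python
import sympy as sp

x, y, z, a = sp.symbols('x y z a')
fx, fy, fz = y*z, x*z, x*y + a   # the three components of df

# Exactness: the mixed second partials of f must agree.
print(sp.diff(fx, y) - sp.diff(fy, x))   # -> 0
print(sp.diff(fx, z) - sp.diff(fz, x))   # -> 0
print(sp.diff(fy, z) - sp.diff(fz, y))   # -> 0
```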

Now, integrating $(1)$, $(2)$ and $(3)$ each with respect to its own variable ($x$, $y$ and $z$ respectively) gives:

$$f=xyz + P\quad\text{from} \quad(1)\tag{A}$$ $$f=xyz + Q\quad\text{from} \quad(2)\tag{B}$$ $$f=xyz + az+R\quad\text{from} \quad(3)\tag{C}$$ where $\mathrm{P}$, $\mathrm{Q}$ and $\mathrm{R}$ are constants of integration.

Now 'somehow' we decide that equation $(\mathrm{C})$ best describes the parent function $f$ and is therefore the function we desire: $$f=xyz + az+C$$ with $C$ written in place of $R$, since they are both constants.


So, apart from the obvious "because it gives the correct answer", my question is this: why do we have to integrate each term separately (independently) rather than using the method I tried at the beginning of this question (integrating the whole equation in one go)?

Also, what is the precise logic behind choosing $(\mathrm{C})$ to represent $f$ instead of $(\mathrm{A})$ or $(\mathrm{B})$?

Many thanks.


Solution 1:

As SchrodingersCat explains in his answer, the reason your method fails is that it doesn’t take into account the interactions that the component functions might have with each other. You also make a key oversight: the constants of integration $P$, $Q$ and $R$ are actually functions. This means that you don’t simply integrate the individual components to find $f$; instead, you end up alternating integration and differentiation.

Taking your example, we know that ${\partial f\over\partial x}(x,y,z)=yz$, ${\partial f\over\partial y}(x,y,z)=xz$ and ${\partial f\over\partial z}(x,y,z)=xy+a$. Integrating the first with respect to $x$ gives $f(x,y,z)=xyz+P(y,z)$: the “constant” of integration isn’t simply a constant scalar; it’s some function that doesn’t depend on $x$, so that ${\partial P\over\partial x}=0$. Differentiating this with respect to $y$ gives $xz+{\partial P\over\partial y}(y,z)$. Comparing this to the known ${\partial f\over\partial y}$ shows that ${\partial P\over\partial y}(y,z)=0$, that is, that $P$ doesn’t depend on $y$ either. Writing what remains of $P$ as $R(z)$, we now have $f(x,y,z)=xyz+R(z)$. Differentiating that with respect to $z$ gives $xy+{dR\over dz}=xy+a$, so ${dR\over dz}=a$, and we can integrate that to find $R(z)=az+C$. Putting this all together, we have $f(x,y,z)=xyz+az+C$.
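If you want to check this mechanically, the alternating procedure above is easy to reproduce in a computer algebra system. Here is a rough SymPy sketch (the variable names are mine, and the steps simply mirror the hand computation):

```python
import sympy as sp

x, y, z, a, C = sp.symbols('x y z a C')
fx, fy, fz = y*z, x*z, x*y + a              # the known partial derivatives of f

# Integrate df/dx with respect to x; the "constant" is some P(y, z).
f_part = sp.integrate(fx, x)                # x*y*z

# Compare d/dy of the result with the known df/dy: dP/dy must vanish.
print(sp.simplify(fy - sp.diff(f_part, y))) # -> 0, so P does not depend on y

# Compare d/dz of the result with the known df/dz: this is dR/dz.
dR_dz = sp.simplify(fz - sp.diff(f_part, z))  # -> a, so R(z) = a*z + C

f = f_part + sp.integrate(dR_dz, z) + C
print(f)                                    # -> C + a*z + x*y*z

# Sanity check: the total differential of f reproduces the original form.
assert (sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)) == (fx, fy, fz)
```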

This process can get quite tedious as the number of variables increases. You’ve got the germ of a good idea here, however: It is possible to find an antiderivative of $df$ “in one go,” but you have to use the right integral.

Let $\omega$ be a closed differential form defined on a star-shaped region centered on the origin. Define $f(P)=\int_{\Gamma_P}\omega$, where $\Gamma_P$ is a differentiable path joining the origin to the point $P$. Since $\omega$ is closed, this integral’s value depends only on $P$ and not on the choice of path. It’s not hard to show that $df=\omega$, i.e., that this path integral is an antiderivative of $\omega$.

For convenience we can integrate along the line segment joining the origin to $P$, using the obvious parameterization. In $\mathbb R^3$, we can write $\omega =P(x,y,z)\,dx+Q(x,y,z)\,dy+R(x,y,z)\,dz$. Using the parameterization $\alpha:t\mapsto(tx,ty,tz)$ gives us $$f(x,y,z)=\int_\Gamma\omega=\int_0^1\left[xP(tx,ty,tz)+yQ(tx,ty,tz)+zR(tx,ty,tz)\right]dt$$ (this is just the pullback by $\alpha$). This suggests a procedure for computing $f$: for each variable $x_k$, make the substitutions $x_k\to tx_k$ and $dx_k\to x_k\,dt$ in $\omega$ and then integrate with respect to $t$ from $0$ to $1$. Taking your example again, we have $\omega=yz\,dx+xz\,dy+(xy+a)\,dz$. Making the above substitutions produces $t^2xyz\,dt + t^2xyz\,dt + z(t^2xy+a)\,dt = (3t^2xyz+az)\,dt$, and integrating this yields $xyz+az\,(+C)$.
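This substitution-and-integrate recipe is also easy to automate. A minimal SymPy sketch for this example (the helper name `antiderivative` is my own):

```python
import sympy as sp

x, y, z, a, t = sp.symbols('x y z a t')

def antiderivative(P, Q, R):
    """Potential f with df = P dx + Q dy + R dz, via the straight-line
    formula on a star-shaped region: x -> t*x, dx -> x*dt, and so on."""
    line = {x: t*x, y: t*y, z: t*z}
    integrand = x*P.subs(line) + y*Q.subs(line) + z*R.subs(line)
    return sp.integrate(integrand, (t, 0, 1))

# The example form: omega = yz dx + xz dy + (xy + a) dz
f = antiderivative(y*z, x*z, x*y + a)
print(f)   # -> a*z + x*y*z   (the additive constant is fixed by f(0,0,0) = 0)
```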

This method can be extended to compute antiderivatives of closed $k$-forms on star-shaped regions. Obviously, with a few minor changes you can have the region centered somewhere other than the origin, and with a bit more work it can be adapted to regions that can be mapped smoothly to star-shaped regions.

Solution 2:

Observe that $P$, $Q$ and $R$ are not mere constants but functions themselves, i.e. $P=P(y,z)$, $Q=Q(x,z)$ and $R=R(x,y)$. This is because anything independent of the variable of integration is annihilated by the corresponding partial derivative, so the 'constant' of integration may be an arbitrary function of the other two variables. This is why your method of integrating everything together does not work: in each term you treat the other two variables as constants, so each of the three integrals reproduces the same $xyz$, which then gets counted three times.

If you then use the third equation, you get the description that captures $f$ most completely from among the three options. This becomes clear once you write $P=P(y,z)$, $Q=Q(x,z)$ and $R=R(x,y)$: comparing $(\mathrm{A})$, $(\mathrm{B})$ and $(\mathrm{C})$ forces $P(y,z)=az+C$, $Q(x,z)=az+C$ and $R(x,y)=C$, so all three expressions reduce to the same $f=xyz+az+C$.
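As a small illustration of why the 'constant' in equation $(\mathrm{A})$ can be a whole function of $y$ and $z$ (a sketch only; the symbolic function $P$ below is just a placeholder), note that anything with no $x$-dependence is invisible to $\partial/\partial x$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P = sp.Function('P')(y, z)       # an arbitrary function of y and z only

f_candidate = x*y*z + P          # every such candidate satisfies df/dx = yz
print(sp.diff(f_candidate, x))   # -> y*z, the P(y, z) term leaves no trace
```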