Is there such a thing as partial integration?
Solution 1:
Your partial integral is roughly the same as your regular integral, with a caveat. If you have, say, $$\int \frac{d}{dx} f(x) \, dx$$ then integrating gives $f(x) + C$, since this is the antiderivative of $f'(x)$. The $C$ shows up because integration only knows 'so much': the derivative of any constant is zero, so from $f'(x)$ alone we cannot tell which constant (if any) belonged to $f(x)$. Similarly, when we take an integral over one variable, we get $$\int \frac{\partial}{\partial x} f(x,y) \, dx$$ The partial 'knocks out' any function of $y$ alone in $f(x,y)$; for example, if $f(x,y)=xy+y^2$, then the partial sends $y^2$ to zero. So, as before, when we integrate a multivariable function solely with respect to $x$, we get $$\int \frac{\partial}{\partial x} f(x,y) \, dx = g(x,y)+C(y)$$ where $C(y)$ denotes an arbitrary function of $y$, and $f(x,y)$ is recovered only up to that function. There's no way to define an integral that 'inverts' the partial operator while still knowing about $y$ - that information was lost when we took the partial in the first place.
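As a quick sanity check, here is a minimal sympy sketch (assuming sympy is available; the function is the one from the example above) showing that the $y^2$ term is unrecoverable:

```python
# Minimal sympy sketch: the y**2 term vanishes under d/dx and cannot
# be recovered by integrating back with respect to x.
from sympy import symbols, diff, integrate

x, y = symbols('x y')
f = x*y + y**2

fx = diff(f, x)          # partial w.r.t. x: y
g = integrate(fx, x)     # integrating back: x*y  (no function of y is added)
print(fx, '|', g)        # y | x*y  -- the y**2 term is gone
```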
Solution 2:
Such integration is indeed used for certain purposes, for example, when you are looking for the antiderivative (potential) of the vector field $\vec F(x,y)=(2xy,x^2)$. Then you need to find a scalar function $V(x,y)$ such that $\frac {\partial V}{\partial x}=2xy$ and $\frac {\partial V}{\partial y}=x^2$. Integrating the first equation with respect to $x$ gives $V=x^2y+C(y)$; differentiating this with respect to $y$ and comparing with $x^2$ forces $C'(y)=0$, so $V=x^2y+C$ for a constant $C$.
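As a hedged illustration, the same recovery can be mirrored in sympy (variable names are mine, not from the answer):

```python
# Sketch: recover the potential of F = (2xy, x**2) by partial integration.
from sympy import symbols, diff, integrate

x, y = symbols('x y')
P, Q = 2*x*y, x**2

V = integrate(P, x)          # x**2*y, still missing a possible C(y)
residual = Q - diff(V, y)    # x**2 - x**2 = 0, so C'(y) = 0
print(V, '|', residual)      # V = x**2*y + C for a genuine constant C
```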
However, this idea is already contained in the usual single-variable indefinite integration: we just treat the integration of $f(x,y)$ w.r.t. $x$ as the integration of the single-variable function $g_y(x):=f(x,y)$ for each fixed $y$. Therefore we don't need to define a separate notion of partial integration. The partial derivative can also be defined in this manner, but that concept earns its own name because of its connection with the total derivative.
Solution 3:
Seeing the other answers above, I wanted to make a quick clarification. When you take the derivative or the integral of some function, you do it with respect to a specific variable.
Consider $f: \mathbb{R}^2 \to \mathbb{R}$ such that $f(x,y) = z$. There are two input variables, so there are two partial derivatives, given by \begin{equation} {f_y}'(x) = \frac{\partial f}{\partial x} \mathrm{~~~and~~~} {f_x}'(y) = \frac{\partial f}{\partial y} \end{equation}
The gradient is, by definition, a vector which yields the direction and rate of the greatest increase. Here, the gradient is given by a two-dimensional vector along both the $x$-axis and the $y$-axis with \begin{equation} \nabla f(x,y) = \begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{bmatrix} \end{equation}
Consider the curve $\mathcal{C}$ parametrized over the closed interval $[a,b]$ by \begin{equation} \mathbf{r}(t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} \mathrm{~~~with~} t \in [a,b] \end{equation}
We then evaluate the line integral along $\mathcal{C}$ with \begin{align} f(\mathbf{r}(b)) - f(\mathbf{r}(a)) & = \int_a^b \left[f(\mathbf{r}(t))\right]' \; dt \\ & = \int_a^b \left[{f_y}'(x(t)) \cdot x'(t) + {f_x}'(y(t)) \cdot y'(t)\right] \; dt \\ & = \int_a^b \left[\frac{\partial f}{\partial x} \cdot \frac{dx}{dt} + \frac{\partial f}{\partial y} \cdot \frac{dy}{dt}\right] \; dt \\ & = \int_a^b \nabla f(\mathbf{r}(t)) \bullet {\mathbf{r}'}(t) \; dt \\ & = \int_\mathcal{C} \nabla f(\mathbf{u}) \bullet \mathbf{du} \end{align}
And with $\mathbf{p} = \mathbf{r}(a)$ and $\mathbf{q} = \mathbf{r}(b)$, the gradient theorem is given by \begin{equation} \int_\mathcal{C} \nabla f(\mathbf{u}) \bullet \mathbf{du} = f(\mathbf{q}) - f(\mathbf{p}) \end{equation}
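As an illustration, here is a small sympy sketch verifying the gradient theorem on one concrete case (the function and curve are my own choices, not from the answer):

```python
# Sketch: check the gradient theorem for f(x,y) = x**2*y along r(t) = (t, t**2).
from sympy import symbols, diff, integrate

x, y, t = symbols('x y t')
f = x**2 * y
rx, ry = t, t**2                                   # r(t) on [a, b] = [0, 1]

integrand = (diff(f, x).subs({x: rx, y: ry}) * diff(rx, t)
             + diff(f, y).subs({x: rx, y: ry}) * diff(ry, t))
lhs = integrate(integrand, (t, 0, 1))              # line integral of grad f
rhs = f.subs({x: 1, y: 1}) - f.subs({x: 0, y: 0})  # f(q) - f(p)
print(lhs, rhs)                                    # 1 1
```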
The total differential of $f(x,y)$ is given by \begin{equation} df = \frac{\partial f}{\partial x} \; dx + \frac{\partial f}{\partial y} \; dy \end{equation}
Note that integrating with respect to multiple variables at once has no meaning here; consider the following \begin{equation} \int df = \int \frac{\partial f}{\partial x} \; dx + \int \frac{\partial f}{\partial y} \; dy \end{equation}
This is invalid: the integral sign has no meaning on its own and must always be coupled with a dummy integration variable, and the left-hand side specifies none.
Instead, you may want to integrate the gradient along a linear path in $\mathbb{R}^2$ as shown in the gradient theorem above.
Consider a linear path $\gamma$ from $(0,0)$ to $(x_0,y_0)$ given by \begin{equation} \gamma(t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} = \begin{bmatrix} tx_0 \\ ty_0 \end{bmatrix} \mathrm{~~~with~} t \in [0,1] \mathrm{~and~} (x_0,y_0) \in \mathbb{R}^2 \end{equation} and clearly we have $f(\gamma(0)) = f(0,0)$ and $f(\gamma(1)) = f(x_0,y_0)$.
Thus, per the gradient theorem above, we have \begin{equation} f(x_0,y_0) - f(0,0) = \int_{\gamma} \nabla f(\mathbf{u}) \bullet \mathbf{du} \end{equation} so $f(x_0,y_0)$ is recovered up to the value $f(0,0)$.
The gradient theorem works in this case because the single straight path in $\mathbb{R}^2$ moves along both the $x$- and $y$-directions at once under the parametrization given above.
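Here is a hedged sympy sketch of this reconstruction, using the example $f(x,y) = xy + y$ that appears below (note $f(0,0) = 0$ there, so the integral alone recovers $f$):

```python
# Sketch: recover f(x0, y0) for f(x,y) = x*y + y via the linear path
# gamma(t) = (t*x0, t*y0); here f(0,0) = 0, so the integral alone gives f.
from sympy import symbols, diff, integrate, expand

x, y, t, x0, y0 = symbols('x y t x0 y0')
f = x*y + y

integrand = (diff(f, x).subs({x: t*x0, y: t*y0}) * x0
             + diff(f, y).subs({x: t*x0, y: t*y0}) * y0)
recovered = integrate(integrand, (t, 0, 1))
print(expand(recovered))     # x0*y0 + y0, i.e. f(x0, y0)
```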
There is also another method, which relies on solving for the remaining parts after integrating one of the partial derivatives.
Let us take a concrete example with $f(x,y) = xy + y$ so that we have \begin{equation} {f_y}'(x) = \frac{\partial f}{\partial x} = y \mathrm{~~~and~~~} {f_x}'(y) = \frac{\partial f}{\partial y} = x + 1 \end{equation}
In fact, we easily see that \begin{align} \int \frac{\partial f}{\partial x} \; dx + \int \frac{\partial f}{\partial y} \; dy & = xy + (xy + y) \\ & = 2xy + y \\ & \neq f(x,y) \end{align}
Because we integrate along both the $x$-axis and the $y$-axis, any term involving both variables (here the mixed term $xy$) is counted twice.
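A quick sympy check of the double counting (illustrative only):

```python
# Sketch: naively adding both antiderivatives double-counts the mixed term.
from sympy import symbols, diff, integrate

x, y = symbols('x y')
f = x*y + y

naive = integrate(diff(f, x), x) + integrate(diff(f, y), y)
print(naive)    # 2*x*y + y, not x*y + y
```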
The correct approach is either to use the gradient theorem as shown above or to integrate with respect to one chosen variable instead, and then solve for the remaining parts.
When integrating with respect to $x$, we have \begin{align} f(x,y) & = \int \frac{\partial f}{\partial x} \; dx + \psi(y) + K_1 \\ & = xy + \psi(y) + K_1 \end{align} where $\psi(y)$ is a function of $y$ only and $K_1$ is a constant.
Note carefully that the constant of integration here is any differentiable function of $y$ denoted by $\psi(y)$ since any such function would vanish upon partial differentiation with respect to $x$ (just as any pure constant $C$ would vanish upon ordinary differentiation).
On the other hand, when integrating with respect to $y$, we have \begin{align} f(x,y) & = \int \frac{\partial f}{\partial y} \; dy + \phi(x) + K_2 \\ & = xy + y + \phi(x) + K_2 \end{align} where $\phi(x)$ is a function of $x$ only and $K_2$ is a constant.
Equating the two results we obtain \begin{align} xy + \psi(y) + K_1 & = xy + y + \phi(x) + K_2 \\ \psi(y) + K_1 & = y + \phi(x) + K_2 \\ \psi(y) & = y + \underbrace{\phi(x) + K_2 - K_1}_{\mathrm{constant}} \end{align} Since the left-hand side depends on $y$ only, $\phi(x) + K_2 - K_1$ must be constant with respect to both variables, and absorbing that constant we see that $\psi(y) = y$, hence $f(x,y) = xy + y$ up to an additive constant.
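The matching method can be sketched in sympy as follows (a sketch under the assumption that only the two partials are given; names are mine):

```python
# Sketch: integrate f_x w.r.t. x, then read off psi(y) by comparing with
# the integral of f_y w.r.t. y (only the two partials are assumed known).
from sympy import symbols, integrate

x, y = symbols('x y')
fx, fy = y, x + 1                 # the given partial derivatives

from_x = integrate(fx, x)         # x*y        (missing psi(y))
from_y = integrate(fy, y)         # x*y + y    (missing phi(x))
psi = from_y - from_x             # y: depends on y only here, so psi(y) = y
print(from_x + psi)               # x*y + y, i.e. f up to an additive constant
```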
The same result is obtained by solving the first expression for $\psi(y)$ and differentiating with respect to $y$ \begin{equation} \frac{\partial}{\partial y} \psi(y) = \frac{\partial}{\partial y} \left[f(x,y) - \int \frac{\partial f}{\partial x} \; dx - K_1\right] \end{equation} so that we have \begin{align} \psi'(y) & = \frac{\partial f}{\partial y} - \frac{\partial}{\partial y} \int \frac{\partial f}{\partial x} \; dx \\ & = (x + 1) - x = 1 \end{align} and integrating with respect to $y$ we obtain \begin{align} \psi(y) & = \int \psi'(y) \; dy = \int dy \\ & = y + C \end{align}
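And a sketch of this derivative route in sympy (again assuming only the partials are known):

```python
# Sketch: the derivative route -- psi'(y) = f_y - d/dy Int(f_x dx), then
# integrate psi'(y) w.r.t. y.
from sympy import symbols, diff, integrate

x, y = symbols('x y')
fx, fy = y, x + 1

psi_prime = fy - diff(integrate(fx, x), y)   # (x + 1) - x = 1
psi = integrate(psi_prime, y)                # y (plus a constant of integration)
print(integrate(fx, x) + psi)                # x*y + y
```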