In calculus we are introduced first to the indefinite integral, then to the definite one. Later we meet the concept of the double (definite) integral and the multiple (definite) integral. Is there a concept of a double (or multiple) indefinite integral? If the answer is yes, what is its definition, and why don't we learn it? If the answer is no, why is that so?


The answer is affirmative. Assume in the sequel that all functions are sufficiently "neat"; then we have: $$ u(p,q) = \iint f(p,q)\, dp\, dq = \iint f(p,q)\, dq\, dp \\ \Longleftrightarrow \quad \frac{\partial^2}{\partial q \, \partial p} u(p,q) = \frac{\partial^2}{\partial p \, \partial q} u(p,q) = f(p,q) $$ In particular, suppose the mixed partial derivatives are zero: $$ \frac{\partial^2}{\partial q \, \partial p} u(p,q) = \frac{\partial^2}{\partial p \, \partial q} u(p,q) = 0 $$ Do the integration, keeping in mind that a "constant" of integration with respect to one variable may still depend on the other: $$ u(p,q) = \iint 0 \, dq\, dp = \int \left[ \int 0 \, dq \right] dp = \int f(p) \, dp + G(q) = F(p) + G(q) $$ Here the inner integral $\int 0 \, dq = f(p)$ is an arbitrary function of $p$ alone, because $\partial f(p)/\partial q = 0$: that's the meaning of "independent variables". Likewise, the outer integration with respect to $p$ contributes an arbitrary function $G(q)$ of $q$ alone. Integrating in the other order gives the same form: $$ u(p,q) = \iint 0 \, dp\, dq = \int \left[ \int 0 \, dp \right] dq = \int g(q) \, dq + F(p) = G(q) + F(p) $$
We conclude that the general solution of the PDE $\;\partial^2 u/\partial p \partial q = \partial^2 u/\partial q \partial p = 0\;$ is given by: $$ u(p,q) = F(p) + G(q) $$ This result is more interesting than it might seem at first sight.
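As a quick sanity check, here is a minimal SymPy sketch (the symbol and function names are my own, not part of the derivation above) confirming that any expression of the form $F(p)+G(q)$ has vanishing mixed partial derivatives:

```python
# Minimal check: the mixed partials of F(p) + G(q) vanish for arbitrary F, G.
import sympy as sp

p, q = sp.symbols('p q')
F = sp.Function('F')
G = sp.Function('G')

u = F(p) + G(q)

print(sp.diff(u, p, q))   # 0
print(sp.diff(u, q, p))   # 0
```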

Lemma. Let $a\ne 0$ and $b\ne 0$ be constants (possibly complex); then: $$ \frac{\partial}{\partial (ax+by)} = \frac{1}{a}\frac{\partial}{\partial x} + \frac{1}{b}\frac{\partial}{\partial y} = \frac{\partial}{\partial ax} + \frac{\partial}{\partial by} $$ Proof, using the well-known chain rule for partial derivatives (for every $u$): $$ \frac{\partial u}{\partial (ax+by)} = \frac{\partial u}{\partial x}\frac{\partial x}{\partial (ax+by)} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial (ax+by)} $$ where: $$ \frac{\partial x}{\partial (ax+by)} = \frac{1}{\partial (ax+by)/\partial x} = \frac{1}{a} \\ \frac{\partial y}{\partial (ax+by)} = \frac{1}{\partial (ax+by)/\partial y} = \frac{1}{b} $$ Now consider the following partial differential equation (the wave equation): $$ \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} = 0 $$ With a little bit of operator calculus, decompose it into factors: $$ \left[ \frac{\partial}{\partial c t} - \frac{\partial}{\partial x} \right] \left[ \frac{\partial}{\partial c t} + \frac{\partial}{\partial x} \right] u = \left[ \frac{\partial}{\partial c t} + \frac{\partial}{\partial x} \right] \left[ \frac{\partial}{\partial c t} - \frac{\partial}{\partial x} \right] u = 0 $$ With the above lemma, this is converted to: $$ \frac{\partial}{\partial (x-ct)}\frac{\partial}{\partial (x+ct)} u = \frac{\partial}{\partial (x+ct)}\frac{\partial}{\partial (x-ct)} u = 0 $$ with $p = x-ct$ and $q = x+ct$ as new independent variables. Now do the integration and find that the general solution of the wave equation is given by: $$ u(x,t) = F(p) + G(q) = F(x-ct) + G(x+ct) $$ interpreted as the superposition of a wave travelling forward and a wave travelling backward.
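For those who like to verify such manipulations with a computer algebra system, here is a small SymPy sketch (my own variable names, nothing canonical) checking that $u(x,t)=F(x-ct)+G(x+ct)$ satisfies the wave equation for arbitrary smooth $F$ and $G$:

```python
# Check that u = F(x - c*t) + G(x + c*t) solves (1/c^2) u_tt - u_xx = 0.
import sympy as sp

x, t = sp.symbols('x t')
c = sp.symbols('c', nonzero=True)
F = sp.Function('F')
G = sp.Function('G')

u = F(x - c*t) + G(x + c*t)
residual = sp.diff(u, t, 2) / c**2 - sp.diff(u, x, 2)

print(sp.simplify(residual))   # 0
```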

Very much the same can be done for the 2-D Laplace equation: $$ \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 $$ Decompose into factors (and beware of complex solutions): $$ \left[ \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right] \left[ \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right] u = \left[ \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right] \left[ \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right] u = 0 $$ This is converted to: $$ \frac{\partial}{\partial (x+iy)}\frac{\partial}{\partial (x-iy)} u = \frac{\partial}{\partial (x-iy)}\frac{\partial}{\partial (x+iy)} u = 0 $$ With $\;z=x+iy\;$ and $\;\overline{z}=x-iy\;$ as new, complex, independent variables.
Now do the integration: $$ u(x,y) = F(z) + G(\overline{z}) $$ The solutions are related to holomorphic functions in the complex plane.
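A similar check, again a SymPy sketch with names of my own choosing, confirms that $u = F(x+iy) + G(x-iy)$ satisfies the 2-D Laplace equation for arbitrary smooth $F$ and $G$:

```python
# Check that u = F(x + i*y) + G(x - i*y) solves u_xx + u_yy = 0.
import sympy as sp

x, y = sp.symbols('x y', real=True)
F = sp.Function('F')
G = sp.Function('G')

u = F(x + sp.I*y) + G(x - sp.I*y)
residual = sp.diff(u, x, 2) + sp.diff(u, y, 2)

print(sp.simplify(residual))   # 0
```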


IMHO, this question is rather deep, but admits a positive answer. Rather than attempting to answer it fully, though, I'll try to give some intuitions and point interested readers in the right direction.

One variable case.

An indefinite integral $\int f(x) \, dx$ is understood as a function $F$ which helps evaluate the definite integral over an interval $[a,b]$ in the following way: given the numbers $a$ and $b$, $$\int_a^b f(x) \, dx = F(b) - F(a).$$ The operation on the RHS of the last equation is significantly simpler than the one on the LHS (which is a limit operation). Thus, knowledge of the indefinite integral $F$ is of great help when evaluating integrals.
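For instance, here is a tiny SymPy illustration (the integrand $\cos x$ is just an arbitrary choice of mine) of how the indefinite integral reproduces the definite one via $F(b)-F(a)$:

```python
# The primitive F of f evaluates the definite integral as F(b) - F(a).
import sympy as sp

x, a, b = sp.symbols('x a b')
f = sp.cos(x)

F = sp.integrate(f, x)                         # indefinite integral: sin(x)
definite = sp.integrate(f, (x, a, b))          # definite integral over [a, b]
via_primitive = F.subs(x, b) - F.subs(x, a)    # F(b) - F(a)

print(sp.simplify(definite - via_primitive))   # 0
```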

Notions for generalization.

This concept admits a certain generalization to multivariate calculus in the context of Stokes' theorem. (I will be handwavy in this part, but I will point to a rigorous source at the end.)

This time, though, there won't be a function as magical as the one from before, which you could evaluate at two points to get an answer. Rather, the generalization attempts to imitate the following behavior: if $f=F',$ by the fundamental theorem of calculus, $$\int_a^b F'(x) \, dx = F(b) - F(a).$$ Notice that the points $a$ and $b$ form the border of the interval $[a,b]$, so you could say that integrating $F'$ over an interval amounts to evaluating $F$ over the border of that interval. Note also that the signs obey a rule: the border-point which lies towards the "positive" direction of the interval gets the plus sign, and the one at the other end gets the minus sign.

Now imagine a 3-D context, where you want to integrate a three-variable function $f(x,y,z)$ over the unit ball. Even if you find a "function" $F$ which in some way satisfies a proper generalization of "$f = F'$", you now have an infinite number of points on the border. How is $F$ used in this case?

This difficulty is overcome by somehow integrating the values of $F$ along the border of the ball (that is, the unit sphere). Special attention must be given to the "signs" which correspond to each point, much like in the 1-dimensional case. These should be accounted for inside the integral along the border.

The theorems.

So, with these ideas in mind, you can check the divergence theorem, a special case of Stokes' theorem for the three-variable case. Continuing with our 3-D example, if $B$ is the unit ball and $\partial B$ is its border: $$\int_B \nabla \cdot \mathbf{F}(\mathbf{x}) \, dx\, dy\, dz = \int_{\partial B} \mathbf{F}(\mathbf{x})\cdot\mathbf{n}(\mathbf{x})\, dS(\mathbf{x}).$$
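As a concrete, hand-picked illustration (the vector field is my own choice, not anything the theorem singles out), take $\mathbf{F}(x,y,z)=(x,y,z)$ on the unit ball: then $\nabla\cdot\mathbf{F}=3$, and on the unit sphere $\mathbf{n}=(x,y,z)$ so $\mathbf{F}\cdot\mathbf{n}=1$. A SymPy sketch computes both sides:

```python
# Verify the divergence theorem for F = (x, y, z) on the unit ball.
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', nonnegative=True)

# LHS: volume integral of div F = 3 over the unit ball,
# in spherical coordinates (volume element r^2 sin(theta) dr dtheta dphi).
lhs = sp.integrate(3 * r**2 * sp.sin(theta),
                   (r, 0, 1), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))

# RHS: surface integral of F . n = 1 over the unit sphere
# (area element sin(theta) dtheta dphi).
rhs = sp.integrate(sp.sin(theta), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))

print(lhs, rhs)   # 4*pi 4*pi
```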

Here, the right generalization of

realizing that $f$ is the derivative of some function $F$ (the indefinite integral from the 1-D case)

is

realizing that $f$ is the divergence of some vector field $\mathbf{F}$, that is, $f = \nabla \cdot \mathbf{F}$.

Similarly, the right analogues for the "signs" depending on "positive/negative ends of the interval" that weigh the points $\mathbf{x}$ turn out to be the "directions normal to the surface", denoted by $\mathbf{n}(\mathbf{x})$, which project the values of the vector field $\mathbf{F}(\mathbf{x})$, "weighing" them in the appropriate direction.

Important difference.

Now, this identity states that evaluating the triple integral on the LHS amounts to evaluating a surface integral (a double integral) on the RHS. However, nothing guarantees that the operation on the right will be easier to carry out. Whether or not this conversion is helpful or computationally convenient will depend on context, and you could even use it the other way round if that is more convenient.
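For example (again with a field I picked purely for illustration), the flux of $\mathbf{F}=(x^3,y^3,z^3)$ through the unit sphere is awkward to parametrize directly, but easy via the volume integral of $\nabla\cdot\mathbf{F}=3(x^2+y^2+z^2)$; a SymPy sketch:

```python
# Flux of F = (x^3, y^3, z^3) through the unit sphere, computed "the other
# way round" via the divergence theorem: div F = 3*(x^2+y^2+z^2) = 3*r^2.
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', nonnegative=True)

flux = sp.integrate(3 * r**2 * r**2 * sp.sin(theta),
                    (r, 0, 1), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))

print(flux)   # 12*pi/5
```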

I hope to have convinced you that there is much more to this than what can be covered in a single answer. If you want to learn about these topics in a rigorous way, I recommend reading a book on "calculus on manifolds", like Bachman's. You'll learn about integrating differential forms, and about exact differential forms, which are the forms that admit this kind of generalization of the "indefinite integral".


Now I realize that for an indefinite double integral we use the concept of partial differential equations (where $z=z(x,y)$)! When we find the solution of these equations, we do just the same as when we want to find the primitive of a one-variable function! Also, when we solve an indefinite integral of a one-variable function, the solution contains an arbitrary constant; by the same logic, when we solve a PDE for a two-variable function, we get two "constants" of integration, $c_1$ and $c_2$, which here are actually arbitrary functions of one variable each (the $F$ and $G$ above).