Zero partials imply a constant function: theorem or proof?

Obtain the general solution of the first equation: $$\frac{\partial f}{\partial x} = 0 \Rightarrow f(x,y) = k + g(y)$$

Substitute in the second equation: $$\frac{\partial }{\partial y}(k+g(y)) = g'(y) = 0$$

So?

It's not really something I thought about, but since you ask I'll try to give a rigorous explanation for the first step. What it comes down to is proving the existence and differentiability of $G(y) = k + g(y)$.

Start by asking yourself how you define $\frac{\partial f}{\partial x}(a,b)$. OK, I'll say that it's $f_b'(a)$, where $f_b$ is defined by $$f_b:\mathbb R \to \mathbb R, \qquad x\mapsto f(x,b)$$

for each $b\in\Bbb R$. Assume $f$ is differentiable in its entire domain (a sufficiently "nice" domain), so this defines a function of two variables $\frac{\partial f}{\partial x}(x,y) := f'_y(x)$ on $\mathrm{dom}\, f$. Now, fix $y\in\mathbb R$ and apply what we know from one-variable calculus: $f'_y(x) = 0\Leftrightarrow f_y(x) = k_y$, a constant. Here, $k_y$ is indexed to note that the constant depends on whichever $y$ you fixed.
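To spell out that one-variable step: for fixed $y$ and any $a < b$, the mean value theorem gives some $\xi\in(a,b)$ with $$f_y(b) - f_y(a) = f'_y(\xi)\,(b-a) = 0,$$ so $f_y$ takes the same value at any two points; call that common value $k_y$.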

So, existence comes from simply letting $G(y) = k_y$.

Differentiability is the tricky part. This may be overkill, and I'll spend some time thinking of a simpler way (for this whole discourse as well), but for now here goes. Note that $G(y)$ is related to the flow (I first wrote "flux"; I'm not sure of the term in English) of the differential equation: $$\left\{\begin{aligned} h'(t) &= 0 \\ h(t_0) &= x_0\end{aligned}\right.$$

If the solution is $\phi(t;t_0,x_0)$, then we see that $G(y) = \phi(0;0,k_y)$ (we can evaluate $\phi$ at $0$ and set $t_0$ to $0$ because we know the solution is constant, and is unique). Now we bring in a theorem, which applies in this case, about the differentiability of $\phi$ with respect to all three of its arguments, and obtain the differentiability of $G$.

I really, really hope that there's a simpler way and that I'm overcomplicating things. That's what happens when you study complicated things all day: you start seeing complication in everything.

Edit: (some things I thought along the way)

  • I assumed that $f$ was differentiable. If this is the case we can use the easier fact $Df = 0 \iff f\equiv\text{constant}$, which is proved using the generalized mean value theorem.

  • Actually, we supposedly just proved that $f$ is differentiable (we can drop "differentiable" where I used it and just say "partials exist"), because it is constant. So now we ask ourselves if it's easier to prove $\text{partials of $f$ are zero everywhere}\Rightarrow \text{$f$ differentiable}$, without going through this mess. If we can do this, again it's easier to take the approach in the above bullet.

  • Still (and JLA seems to have proven it unnecessary), the above proof serves the purpose of getting conclusions out of the more general situation: $$\frac{\partial f}{\partial x} = p(x,y), \qquad \frac{\partial f}{\partial y} = q(x,y)$$
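(A side remark on that more general situation, not in the original bullet: if a $C^2$ solution $f$ exists, equality of mixed partials forces the compatibility condition $$\frac{\partial p}{\partial y} = \frac{\partial^2 f}{\partial y\,\partial x} = \frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial q}{\partial x},$$ so not every pair $p, q$ admits a solution.)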

Edit 2: (final note)

Bill Cook reminded me of this. There's a theorem that goes: $$\text{partials of $f$ exist and are continuous}\Rightarrow \text{$f$ is differentiable}$$ Now apply this here, and then use the mean value theorem.
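Spelled out, the chain of implications is: $$\frac{\partial f}{\partial x}\equiv 0 \text{ and } \frac{\partial f}{\partial y}\equiv 0 \;\Rightarrow\; \text{partials continuous} \;\Rightarrow\; f \text{ differentiable with } Df\equiv 0 \;\Rightarrow\; f\equiv\text{constant},$$ the last step by the mean value theorem applied along segments.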


This should suffice: $\displaystyle f(b)-f(a)=\nabla f(\xi)\cdot(b-a)$ for some $\xi$ on the line connecting $a$ to $b\,,$ by the mean value theorem. But the right side is $0\,,$ so $f$ is constant.

Edit: One doesn't actually even have to use that the assumptions imply that $f$ is differentiable: Take two points $(a_x,a_y)\,,(b_x,b_y)\,.$ Then the assumptions on the partial derivatives and one dimensional calculus (usually by the mean value theorem, sometimes by the fundamental theorem of calculus) imply that $f(a_x,a_y)=f(b_x,a_y)=f(b_x,b_y)\,,$ so that $f$ is constant.
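Written out, with $\xi_1$ between $a_x$ and $b_x$ and $\xi_2$ between $a_y$ and $b_y$ supplied by the one-dimensional mean value theorem: $$f(b_x,a_y)-f(a_x,a_y) = \frac{\partial f}{\partial x}(\xi_1,a_y)\,(b_x-a_x) = 0, \qquad f(b_x,b_y)-f(b_x,a_y) = \frac{\partial f}{\partial y}(b_x,\xi_2)\,(b_y-a_y) = 0.$$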