I have a question on how to put a PDE into weak form and, more importantly, how to properly choose the space of test functions. I know that for an elliptic problem we start with a problem like $Lu = f$, multiply by a smooth test function $v$, typically integrate by parts, and end up with a bilinear form $a(u,v) = l(v)$, where $a$ is coercive and bounded and $l$ is a bounded linear functional on some Hilbert space $H$. Lax-Milgram then tells us there exists a unique $u \in H$ satisfying this equation for all $v \in H$. My question is: how do we properly choose the Hilbert space $H$?

An example from a book of mine: if we have $-\Delta u = f$ on $\Omega$ with the condition that $u = 0$ on $\partial \Omega$, then we multiply by a smooth function $v$, and integrate by parts to arrive at

$$\int_{\Omega} \nabla u \cdot \nabla v \, dx = \int_{\Omega} fv \, dx \text{ for all } v \in H^1_0(\Omega).$$ I certainly see that choosing $H^1_0(\Omega)$ sounds reasonable, as that condition ensures that $v$ satisfies the boundary condition and also that the bilinear form $a(u,v)$ makes sense (i.e., the integral of $\nabla u \cdot \nabla v$ is finite). Is this the only possible choice of Hilbert space of test functions? What if we know ahead of time that our solution $u$ is extremely smooth (say our data $f$ is $C^{\infty}$, for example)? Would it be permissible, albeit unnecessary, to choose our test function space to be $H^2_0(\Omega)$? What are the criteria for choosing this space of test functions? Do we choose the test function space to A: make sure the bilinear form $a(u,v)$ makes sense (i.e., we can differentiate the test functions enough times and the derivatives are still in $L^2$) and B: ensure they satisfy the boundary conditions?
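For concreteness, the integration-by-parts step I have in mind is Green's first identity: for $v$ smooth and vanishing on $\partial \Omega$,
$$\int_{\Omega} (-\Delta u)\, v \, dx = \int_{\Omega} \nabla u \cdot \nabla v \, dx - \int_{\partial \Omega} \frac{\partial u}{\partial n}\, v \, dS = \int_{\Omega} \nabla u \cdot \nabla v \, dx,$$
where the boundary term vanishes because $v = 0$ on $\partial \Omega$.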

I apologize if this question is unclear, and thanks for any help!


I agree with motivations A and B you gave in the text. Even more disappointingly, I would say that one simply chooses one Hilbert space rather than another because it works. So one chooses the least possible regularity and the simplest way to encode the boundary conditions.

In the case of the Dirichlet problem, for example: $$\tag{D} \begin{cases} -\Delta u = f & \Omega \\ u= 0 & \partial \Omega,\end{cases}$$ a solution $u$, whatever it is, must be something that realizes $$b(u, v)=0,\quad \forall v \in \text{some test function space}, $$ where $$b(u, v)=\int_\Omega\left(-\Delta u - f\right)v\, dx, $$ whenever this makes sense. It turns out that, if we require $u, v\in H^1_0(\Omega)$, then (after an integration by parts) $b$ takes on a super-nice form, the Lax-Milgram theorem kicks in, everything goes smoothly, and our lives are beautiful. What if we had taken $H^2_0$ instead? In this case we would have had trouble because, even in the simplest case $f=0$, the quadratic form $$b(u,u)=\int_\Omega \lvert \nabla u \rvert^2\, dx $$ is not coercive with respect to the $H^2$ norm, because it cannot control second derivatives. So the Lax-Milgram theorem doesn't apply and our lives are miserable.
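As a side remark, the $H^1_0$ weak form is precisely what Galerkin/finite element methods discretize: one restricts $a(u,v)=\int \nabla u \cdot \nabla v\, dx$ to a finite-dimensional subspace of $H^1_0$ and solves the resulting linear system. Here is a minimal 1D sketch (my own illustration, not from any reference: piecewise-linear "hat" basis functions on a uniform mesh of $(0,1)$, with a simple trapezoid-rule approximation of the load vector):

```python
import numpy as np

def solve_poisson_1d(f, n=200):
    """P1 Galerkin method for -u'' = f on (0,1) with u(0) = u(1) = 0.

    Discretizes the weak form  int u' v' dx = int f v dx  over the
    hat-function basis; this is exactly the H^1_0 bilinear form a(u, v)
    restricted to a finite-dimensional subspace.
    """
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix A_ij = int phi_i' phi_j' dx (interior hats only):
    # tridiagonal with 2/h on the diagonal and -1/h off it.
    main = np.full(n - 1, 2.0 / h)
    off = np.full(n - 2, -1.0 / h)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # Load vector b_i = int f phi_i dx, approximated by the trapezoid rule.
    b = h * f(x[1:-1])
    u = np.zeros(n + 1)            # Dirichlet values u(0) = u(1) = 0
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

# Usage: f(x) = pi^2 sin(pi x) has exact solution u(x) = sin(pi x).
x, u = solve_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The computed nodal values agree with the exact solution up to the quadrature error in the load vector, which is $O(h^2)$ here.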


(A last remark, which may possibly contradict everything above. As far as I know, there is an abstract theory of linear operators and quadratic forms on Hilbert spaces which, among other things, proclaims that $H^1_0$ is the "right" domain for the quadratic form $b(u,u)$ when $L$ is the Laplacian. If you are really interested in this, you could look for the keywords "form domain of a self-adjoint operator" or "Friedrichs extension". I am sure that those things are treated in Reed & Simon's Methods of Modern Mathematical Physics and in Zeidler's Applied Functional Analysis.)
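(For the record, the precise statement I am alluding to is: a self-adjoint operator $L \ge 0$ has form domain $D(L^{1/2})$, and for the Dirichlet Laplacian
$$ D((-\Delta)^{1/2}) = H^1_0(\Omega), \qquad \int_\Omega \lvert \nabla u \rvert^2\, dx = \lVert (-\Delta)^{1/2} u \rVert_{L^2(\Omega)}^2. $$)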

EDIT: This answer is related to the last remark. The book by Davies explains this abstract theory IMHO very clearly.

P.S.: The dichotomy "our lives are beautiful / our lives are miserable" is a quotation from J.L. Vázquez.