How to prove that $\,\,f\equiv 0,$ without using Weierstrass theorem?

Summary. In what follows, we show that if $f(x_0)=2a>0$ for some $x_0\in [0,1]$, then $\int_0^1 f(x)\,M\mathrm{e}^{-M^2 (x-x_0)^2}\,dx\ge\sqrt{\pi}\,a/2$ for sufficiently large $M$. We then approximate $M\mathrm{e}^{-M^2 (x-x_0)^2}$ uniformly on $[0,1]$ by the partial sums $p_N(x)$ of its Taylor series and obtain that $\int_0^1 f(x)\,p_N(x)\,dx>\sqrt{\pi}\,a/4$ for large $N$, which is a contradiction.

Proof. Assume that $f(x_0)\ne 0$, for some $x_0\in [0,1]$, and without loss of generality, that $f(x_0)>0$ and $x_0\in(0,1)$. Then there is an $\varepsilon>0$, with $[x_0-\varepsilon,x_0+\varepsilon]\subset(0,1)$, such that: $$ x\in [x_0-\varepsilon,x_0+\varepsilon]\quad\Longrightarrow\quad f(x)\ge \frac{f(x_0)}{2}=a>0. $$ Now, take $h_M(x)=M\exp(-M^2x^2)$. Then $\int_{\mathbb R}h_M=\sqrt{\pi}$ (substitute $u=Mx$), and for every $\delta>0$, $$ \lim_{M\to\infty} \int_{\lvert x\rvert>\delta}h_M(x)\,dx=0. $$ We have $$ \int_0^1 f(x)\,h_M(x-x_0)\,dx =\int_{x_0-\varepsilon}^{x_0+\varepsilon}f(x)\,h_M(x-x_0)\,dx+\int_{\substack{x\in[0,1]\\ \lvert x-x_0\rvert>\varepsilon}}f(x)\,h_M(x-x_0)\,dx. $$ Now $$ \int_{x_0-\varepsilon}^{x_0+\varepsilon}f(x)\,h_M(x-x_0)\,dx\ge a\int_{x_0-\varepsilon}^{x_0+\varepsilon}h_M(x-x_0)\,dx\to a\sqrt{\pi}, $$ as $M\to \infty$, while $$ \biggl\lvert\int_{\substack{x\in[0,1]\\ \lvert x-x_0\rvert>\varepsilon}}f(x)\,h_M(x-x_0)\,dx\biggr\rvert\le \sup_{x\in [0,1]}|f(x)|\int_{|x|\ge \varepsilon}h_M(x)\,dx\to 0 $$ as $M\to \infty$. Therefore $$ \liminf_{M\to\infty}\int_0^1 f(x)\,h_M(x-x_0)\,dx\ge a\sqrt{\pi}. $$ In particular, for some $M_0>0$, $$ \int_0^1 f(x)\,h_{M_0}(x-x_0)\,dx\ge \frac{a\sqrt{\pi}}{2}. $$ But $h_{M_0}(x-x_0)$ can be approximated uniformly on $[0,1]$ by polynomials. Simply using the Taylor expansion of the exponential we get $$ M_0\exp\bigl(-M^2_0(x-x_0)^2\bigr)=\sum_{n=0}^\infty \frac{(-1)^nM^{2n+1}_0(x-x_0)^{2n}}{n!} =\sum_{n=0}^N \frac{(-1)^nM^{2n+1}_0(x-x_0)^{2n}}{n!}+R_N(x), $$ where, since $\lvert x-x_0\rvert\le 1$ for $x\in[0,1]$, $$ \lvert R_N(x)\rvert\le\sum_{n=N+1}^\infty\frac{M^{2n+1}_0}{n!}\to 0, $$ as $N\to\infty$, that means $$ \lim_{N\to\infty}\sup_{x\in[0,1]} \lvert M_0\exp(-M^2_0(x-x_0)^2)-p_N(x)\rvert=0, $$ where $\displaystyle p_N(x)=\sum_{n=0}^N \frac{(-1)^nM^{2n+1}_0(x-x_0)^{2n}}{n!}$ is a polynomial in $x$, and hence $$ \lim_{N\to\infty}\int_0^1 f(x)\,p_N(x)\,dx= \int_0^1 f(x)\,h_{M_0}(x-x_0)\,dx\ge \frac{a\sqrt{\pi}}{2}, $$ which contradicts the fact that $\int_0^1 f(x)\,p(x)\,dx=0$, for every polynomial $p$.
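As a numeric sanity check (not part of the proof), the following Python sketch illustrates the two facts the argument uses: $\int_0^1 f(x)\,h_M(x-x_0)\,dx\to\sqrt{\pi}\,f(x_0)$ as $M\to\infty$, and the uniform convergence on $[0,1]$ of the Taylor polynomials $p_N$ to $h_{M_0}(x-x_0)$. The test function $f$, the point $x_0$, and all numerical parameters are illustrative choices, not taken from the problem.

```python
import math

def h(M, x):
    # Gaussian kernel h_M(x) = M * exp(-M^2 x^2); its integral over R is sqrt(pi)
    return M * math.exp(-(M * x) ** 2)

def integral(g, a, b, n=200_000):
    # composite midpoint rule (crude but adequate here)
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) for i in range(n)) * dx

# illustrative test function with f(x0) = 2, i.e. a = 1 in the proof's notation
x0 = 0.5
f = lambda x: 1.0 + math.cos(2.0 * math.pi * (x - x0))

# concentration: the integral tends to sqrt(pi) * f(x0) ~ 3.5449 as M grows
for M in (5.0, 20.0, 80.0):
    print(M, integral(lambda x: f(x) * h(M, x - x0), 0.0, 1.0))

# sup-norm error on [0,1] of the Taylor polynomial p_N of h_{M0}(x - x0)
M0 = 5.0
def p(N, x):
    return sum((-1) ** n * M0 ** (2 * n + 1) * (x - x0) ** (2 * n) / math.factorial(n)
               for n in range(N + 1))

err = max(abs(h(M0, x / 1000 - x0) - p(60, x / 1000)) for x in range(1001))
print("sup error of p_60:", err)
```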

Note. The idea above is based on the fact that, if $d_M(x)=M\mathrm{e}^{-M^2x^2}$ and $f$ is continuous and bounded on $\mathbb R$, then
$$ \lim_{M\to\infty}\int_{\mathbb R}d_M(x) f(x)\,dx=\sqrt{\pi}\,f(0), $$
which follows from the substitution $u=Mx$ and dominated convergence:
$$ \int_{\mathbb R}d_M(x)f(x)\,dx=\int_{\mathbb R}\mathrm{e}^{-u^2}f(u/M)\,du \longrightarrow f(0)\int_{\mathbb R}\mathrm{e}^{-u^2}\,du=\sqrt{\pi}\,f(0). $$


You could use the functional form of the Monotone Class Theorem. Let $\cal H$ be the collection of all bounded, Borel measurable functions $g$ on $[0,1]$ such that $\int g(x) f(x)\, dx=0$. Then $\cal H$ is a monotone vector space.

Let $\cal K$ be the set of functions $\{x^k: k\in\mathbb{N}\}$ (with $x^0=1$). Then $\cal K$ is a multiplicative class contained in $\cal H$ by hypothesis, so the Monotone Class Theorem says that $$b(\sigma({\cal K}))\subseteq {\cal H},$$ where $b(\sigma({\cal K}))$ is the space of all bounded functions measurable with respect to the $\sigma$-algebra generated by $\cal K$. Since $\cal K$ generates the Borel $\sigma$-algebra on $[0,1]$, and $f$ is bounded and Borel, we deduce that $f\in{\cal H}$ and hence that $\int (f(x))^2\, dx=0$; by continuity, $f\equiv 0$.


WLOG, suppose $f(x_0)>0$ for some $x_0\in(0,1)$. Since $f$ is continuous, there exists a small neighborhood of $x_0$, call it $N(x_0)$, such that $f(x)>0$ for all $x\in N(x_0)$. Then we can construct a small "pulse" $g(x)$ peaked at $x_0$: that is, $g(x)>0$ whenever $x\in N(x_0)$, and $|g(x)|<\varepsilon$ otherwise. We can write $g(x)=\sum_{k=0}^na_kx^k+h(x)$, where $|h(x)|<\varepsilon$ for all $x\in(0,1)$. Now consider $$\int_0^1g(x)f(x)\,dx=\sum_{k=0}^na_k\int_0^1x^kf(x)\,dx+\int_0^1h(x)f(x)\,dx=\sum_{k=0}^na_k\int_0^1x^kf(x)\,dx.$$ The last equality holds in the limit $\varepsilon\to0$, since then $|h(x)|\to0$ uniformly. On the other hand, $$\int_0^1g(x)f(x)\,dx=\int_{N(x_0)}g(x)f(x)\,dx+\int_{(0,1)\setminus N(x_0)}g(x)f(x)\,dx=\int_{N(x_0)}g(x)f(x)\,dx>0,$$ where the last equality again holds as $\varepsilon\to0$, since $g\to0$ outside $N(x_0)$. Combining the two results, $$\sum_{k=0}^na_k\int_0^1x^kf(x)\,dx>0,$$ which contradicts the condition $\int_0^1x^kf(x)\,dx=0$ for all $k\in\mathbb N$.

Note that if $f(x_0)>0$ for $x_0=0$ or $x_0=1$, then by continuity of $f$ we can find $x_0'\in(0,1)$ near $x_0$ such that $f(x_0')>0$, and the argument above applies.
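The decomposition $g(x)=\sum_{k=0}^n a_kx^k+h(x)$ can be made concrete. Below is a small Python sketch (illustrative, not from the answer): it builds a triangular pulse $g$ supported on $N(x_0)$ and approximates it by its Bernstein polynomials, whose sup-norm error on $[0,1]$ plays the role of $\varepsilon$. The pulse shape and the values of $x_0$ and $\delta$ are assumptions chosen for the demo.

```python
import math

x0, delta = 0.4, 0.1

def g(x):
    # triangular pulse: positive exactly on N(x0) = (x0 - delta, x0 + delta)
    return max(0.0, 1.0 - abs(x - x0) / delta)

def bernstein(func, n):
    # degree-n Bernstein polynomial of func on [0,1], returned as a callable;
    # for continuous func it converges uniformly to func as n grows
    coeffs = [func(k / n) * math.comb(n, k) for k in range(n + 1)]
    def poly(x):
        return sum(c * x**k * (1.0 - x) ** (n - k) for k, c in enumerate(coeffs))
    return poly

# sup-norm gap between g and its Bernstein polynomial shrinks as n grows
gaps = {}
for n in (50, 500):
    p = bernstein(g, n)
    gaps[n] = max(abs(g(x / 1000) - p(x / 1000)) for x in range(1001))
    print(n, gaps[n])
```

The Bernstein construction is one concrete way to produce the polynomial part of $g$; any uniform polynomial approximation would serve the argument equally well.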