Nonzero $f \in C([0, 1])$ for which $\int_0^1 f(x)x^n dx = 0$ for all $n$

As the title says, I'm wondering whether there is a continuous function $f$ on $[0, 1]$, not identically zero, for which $\int_0^1 f(x)x^n\,dx = 0$ for all $n \geq 1$. I am trying to solve a problem proving that if $f \in C([0, 1])$ satisfies $\int_0^1 f(x)x^n\,dx = 0$ for all $n \geq 0$, then $f$ must be identically zero. I presume the $n=0$ case really is needed, otherwise it wouldn't be part of the statement. Is there any function, not identically zero, which satisfies $\int_0^1 f(x)x^n\,dx = 0$ for all $n \geq 1$?

The statement I am attempting to prove is homework, but this is just idle curiosity (though I will tag it as homework anyway since it is related). Thank you!


Solution 1:

As an aside, the answer is yes if the interval is $(0,\infty)$ instead of $(0,1)$. For example, the "Stieltjes ghost function" $f(x) = \exp(-x^{1/4}) \sin x^{1/4}$ satisfies $\int_0^{\infty} f(x) x^n \,dx = 0$ for all integers $n \ge 0$.

Stieltjes gave this as an example of a case where the moment problem does not have a unique solution. It appears in Section 55 of his famous paper "Recherches sur les fractions continues" from 1894; see p. 506 in Œuvres Complètes, Vol. II. To compute the moments, use the substitution $x=u^4$ to write $I_n = \int_0^{\infty} f(x) x^n dx = 4 \int_0^{\infty} e^{-u} \sin(u) u^{4n+3} du$; then integrate by parts four times (differentiating the power of $u$, and integrating the rest) to show that $I_n$ is proportional to $I_{n-1}$, and finally check that $I_0=0$.
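As a sanity check (not part of the argument), the vanishing of these moments can be verified numerically. The sketch below, which assumes numpy and scipy are available, evaluates $I_n = 4 \int_0^{\infty} e^{-u} \sin(u)\, u^{4n+3}\, du$ after the substitution $x=u^4$, and compares each moment to the corresponding moment of $|f|$ to judge how close to zero it is:

```python
# Numerical check (not a proof) of the vanishing moments of the
# Stieltjes ghost function f(x) = exp(-x^{1/4}) sin(x^{1/4}).
# After the substitution x = u^4:  I_n = 4 * ∫_0^∞ e^{-u} sin(u) u^{4n+3} du.
import numpy as np
from scipy.integrate import quad

def moment(n):
    """n-th moment of f, in the u-variable after x = u^4."""
    integrand = lambda u: np.exp(-u) * np.sin(u) * u**(4 * n + 3)
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return 4 * val

def abs_moment(n):
    """Same integral with |sin|, i.e. the n-th moment of |f| — the natural scale."""
    integrand = lambda u: np.exp(-u) * abs(np.sin(u)) * u**(4 * n + 3)
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return 4 * val

for n in range(4):
    # Each moment is zero up to numerical noise, relative to the moment of |f|.
    print(n, abs(moment(n)) / abs_moment(n))
```

The ratios come out at the level of the quadrature error, consistent with every moment being exactly zero.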

Solution 2:

The answer is no. Actually I believe the following is a theorem whose name totally escapes me at the moment: assume that $f$ is continuous and let $(a_n)$ be an increasing sequence of positive integers such that $\int_0^1 f(x) x^{a_n} \, dx = 0$ for all $n \ge 1$. If $\sum \frac{1}{a_n}$ diverges, then $f$ is identically zero! (Edit: this is a corollary of the Müntz–Szász theorem - thanks, Moron!)

In other words, the problem isn't phrased the way it is because stronger statements are false; the stronger versions are just harder to prove.

Solution 3:

(I am turning this into Community wiki, since the original version made an obvious mistake).

The result follows, for example, from the Stone-Weierstrass theorem, once one justifies that the limit of the integrals is the integral of the limit. This can be done (overkill) using Lebesgue's dominated convergence theorem, or (more easily) using simple estimates based on the fact that $f$, being continuous on $[0,1]$, is bounded.

Below I give full details, which you should probably not read until after your homework is due, since this also solves your homework.


Spoilers:

There is a sequence of polynomials $p_n(x)$ that converges uniformly to $f(x)$ on ${}[0,1]$. We have $\int_0^1xf(x)p_n(x)\,dx=0$ for all $n$, by assumption, since $xp_n(x)$ is a linear combination of monomials $x^k$ with $k\ge1$, each of which integrates to $0$ against $f$. Now take the limit as $n\to\infty$ to conclude that $\int_0^1x(f(x))^2\,dx=0$.

This gives us that $f=0$: if $f(x_0)\ne 0$, continuity ensures an $\epsilon>0$ and an interval $(a,b)$ with $a>0$ such that $|f(x)|\ge\epsilon$ for all $x\in(a,b)$. But then $\int_0^1xf(x)^2\,dx\ge a\epsilon^2 l>0$, where $l=b-a$ is the length of the interval.

To see that the limit of the integrals is $\int_0^1xf(x)^2\,dx$ without using dominated convergence, let $M\ge|f(x)|$ for all $x\in[0,1]$. Then, for any $\delta>0$, if $n$ is large enough that $|p_n(x)-f(x)|\le\delta/M$ on $[0,1]$, we have $$\int_0^1xf(x)p_n(x)\,dx=\int_0^1xf\times(p_n-f+f)\,dx=\int_0^1xf(x)^2\,dx+\int_0^1xf\times(p_n-f)\,dx,$$ and the second integral is bounded in absolute value by $\int_0^1|f||p_n-f|\,dx\le M(\delta/M)=\delta$.
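This limiting step can be illustrated numerically (a sketch only, assuming numpy and scipy; the choice of $f$ and of degrees is arbitrary). Taking the Bernstein polynomials of $f$ as one concrete uniformly convergent sequence $p_n$, the integrals $\int_0^1 x f(x) p_n(x)\,dx$ approach $\int_0^1 x f(x)^2\,dx$:

```python
# Illustration (not a proof) of the limit step: the Bernstein polynomials
# p_n -> f uniformly on [0,1], hence ∫ x f p_n -> ∫ x f^2.
# (Here f is an arbitrary continuous function, not a counterexample,
# so these integrals need not be zero.)
import numpy as np
from scipy.integrate import quad
from scipy.special import comb

f = lambda x: np.cos(3 * x)  # any continuous f on [0,1] will do

def bernstein(n, x):
    """Bernstein polynomial of f of degree n, evaluated at scalar x."""
    k = np.arange(n + 1)
    return np.sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k))

# The limit the argument identifies: ∫_0^1 x f(x)^2 dx.
target, _ = quad(lambda x: x * f(x)**2, 0, 1)

def approx(n):
    """∫_0^1 x f(x) p_n(x) dx for the degree-n Bernstein polynomial p_n."""
    val, _ = quad(lambda x: x * f(x) * bernstein(n, x), 0, 1)
    return val

for n in (25, 100, 400):
    print(n, abs(approx(n) - target))  # error shrinks as n grows
```

Bernstein polynomials converge only at rate $O(1/n)$, so the error decreases slowly but steadily; any uniformly convergent polynomial sequence would serve equally well in the argument.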

In fact, even this approach is overkill. (For example, Müntz's theorem gives a more general fact, as already mentioned in another answer.)

(Apologies for the original mistake.)