Solution 1:

This question is known as the (indeterminate) moment problem and was first considered by Stieltjes and Hamburger. In general, the answer to your question is no: distributions are not uniquely determined by their moments.

The standard counterexample is the following (see e.g. Rick Durrett, Probability: Theory and Examples): the lognormal distribution with density

$$p(x) := \frac{1}{x\sqrt{2\pi}} \exp \left(- \frac{(\log x)^2}{2} \right)$$

and the "perturbed" lognormal distribution

$$q(x) := p(x)\left(1 + \sin(2\pi \log x)\right)$$

have the same moments.
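To see why, note first that $1 + \sin(2\pi \log x) \geq 0$, so $q$ is indeed nonnegative, and that for every integer $n \geq 0$ the perturbation integrates to zero against $x^n$: substituting $u = \log x$ and completing the square,

$$\int_0^\infty x^n p(x) \sin(2\pi \log x)\, dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{nu - u^2/2} \sin(2\pi u)\, du = \frac{e^{n^2/2}}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-t^2/2} \sin(2\pi t)\, dt = 0,$$

where $t = u - n$ and we used $\sin(2\pi t + 2\pi n) = \sin(2\pi t)$; the last integral vanishes because the integrand is odd. Hence $\int_0^\infty x^n q(x)\, dx = \int_0^\infty x^n p(x)\, dx$ for all $n$ (taking $n = 0$ shows in particular that $q$ is a probability density; in fact both moments equal $e^{n^2/2}$).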

Much more interesting is the question under which additional assumptions the moments are determining. @StefanHansen already mentioned the existence of exponential moments, but that is obviously a strong condition. Some years ago Christian Berg showed that so-called Hankel matrices, $H_N = (s_{i+j})_{0 \leq i,j \leq N}$ where $s_n$ denotes the $n$-th moment, are closely related to this problem; in fact, the moment problem is determinate if and only if the smallest eigenvalue of $H_N$ converges to $0$ as $N \to \infty$. For a more detailed discussion see e.g. this introduction or Christian Berg's paper.
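For a quick numerical illustration of this criterion (a minimal numpy sketch of my own, not from Berg's paper): the standard normal is moment-determinate, so the smallest eigenvalues of its Hankel matrices should decrease to $0$.

```python
import numpy as np

# Moments of the standard normal: s_0 = 1, s_1 = 0, s_n = (n - 1) * s_{n-2}.
max_N = 8
s = np.zeros(2 * max_N + 1)
s[0] = 1.0
for n in range(2, len(s)):
    s[n] = (n - 1) * s[n - 2]

# Smallest eigenvalue of the Hankel matrix H_N = (s_{i+j})_{0 <= i,j <= N}:
# it should shrink towards 0 as N grows, since the normal is determinate.
for N in range(1, max_N + 1):
    H = s[np.add.outer(np.arange(N + 1), np.arange(N + 1))]
    print(N, np.linalg.eigvalsh(H)[0])
```

(Trying the same with the lognormal moments $s_n = e^{n^2/2}$ is numerically delicate, since the matrix entries grow super-exponentially.)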

Solution 2:

A construction of two discrete probability measures on $\mathbb{R}$ having the same moments is given in Varadhan's 'Probability Theory', page 23.

For those without access: consider the partial products $$\prod_{n=0}^{N}\left(1 - \frac{z}{e^n} \right),$$ which converge uniformly on compact sets as $N \to \infty$ to an entire function, say $$A(z) = \sum\limits_{m=0}^{\infty} a_m z^m.$$ The partial products have real coefficients, so each $a_m$ is real, and since $A$ vanishes at every point $e^n$ we get, for each $n \geq 0$, $$0 = A(e^n) = \sum\limits_{m=0}^{\infty} a_m (e^n)^m = \sum\limits_{m=0}^{\infty} (e^m)^n a_m. \ \ \ \ \ (\star)$$

Define $f(m) = \max(0, a_m)$ and $g(m) = \max(0, -a_m)$. Both are nonnegative with $f(m) - g(m) = a_m$, and neither is identically zero (e.g. $a_0 = 1 > 0$ while $a_1 = -\sum_n e^{-n} < 0$). We define two probability measures supported on $\lbrace e^k : k = 0, 1, \dots \rbrace$ by $$\mu\left(\lbrace e^k \rbrace\right) = \frac{f(k)}{\sum_m f(m)}, \qquad \nu\left(\lbrace e^k \rbrace \right) = \frac{g(k)}{\sum_m g(m)}.$$ The sums in the denominators converge (since $A$ is entire, $\sum_m |a_m| < \infty$) and are positive, and they are in fact the same number, call it $S$: by the $n = 0$ case of $(\star)$, $\sum_m f(m) - \sum_m g(m) = \sum_m a_m = A(1) = 0$.

That these measures have well-defined (finite) moments also follows from the convergence of the power series of $A$: since $A$ is entire, $\sum_m |a_m| (e^n)^m < \infty$ for every $n$, and since $(e^m)^n = (e^n)^m = e^{mn}$ we get $\sum\limits_{m}(e^m)^n\mu\left(\lbrace e^m\rbrace\right) = \frac{1}{S}\sum\limits_{m}(e^n)^m f(m) < \infty$, and likewise for $\nu$.

And finally, by another application of $(\star)$: since $f(k) - g(k) = a_k$, we have $\sum_k (e^k)^n \left(f(k) - g(k)\right) = A(e^n) = 0$, and therefore $$\mathbb{E}_{\mu}[x^n] = \sum\limits_{k}(e^k)^n\mu\left(\lbrace e^k\rbrace\right) = \frac{1}{S}\sum\limits_{k}(e^k)^n f(k) = \frac{1}{S}\sum\limits_{k}(e^k)^n g(k) = \mathbb{E}_{\nu}[x^n].$$
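The whole construction is easy to check numerically (a minimal sketch of my own, not from Varadhan's book): truncating the product at a finite $N$ makes $(\star)$ hold exactly for $n = 0, \dots, N$, so the two resulting discrete measures share at least their first $N$ moments.

```python
import numpy as np

# Coefficients a_m (ascending powers of z) of the truncated product
# prod_{n=0}^{N} (1 - z / e^n), built by repeated polynomial multiplication.
N = 8
a = np.array([1.0])
for n in range(N + 1):
    a = np.convolve(a, [1.0, -np.exp(-n)])

f = np.maximum(a, 0.0)   # f(m) = max(0, a_m)
g = np.maximum(-a, 0.0)  # g(m) = max(0, -a_m)
S = f.sum()              # equals g.sum(), since sum_m a_m = A(1) = 0

k = np.arange(len(a))    # the measures put mass f(k)/S resp. g(k)/S at e^k
for n in range(4):       # compare the first few moments of mu and nu
    mu_n = np.sum(np.exp(n * k) * f) / S
    nu_n = np.sum(np.exp(n * k) * g) / S
    print(n, mu_n, nu_n)
```

The printed pairs agree up to floating-point roundoff; increasing $N$ extends the agreement to higher moments.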