Big $\mathcal{O}$ Notation question while estimating $\sum \frac{\log n}{n}$

I have a function $S(x)$ which is bounded above and below as follows:

$f(x) + C_1 + \mathcal{O}(g(x)) < S(x) < f(x) + C_2 + \mathcal{O}(g(x))$ as $x \rightarrow \infty$

Can I conclude that $$S(x) = f(x) + C + \mathcal{O}(g(x))$$ as $x \rightarrow \infty$? Can anyone give a short proof for this fact?

EDIT:

(To clear up the confusion, I am stating the problem below. I am wondering if what I did was right.)

Prove that as $x \rightarrow \infty$

$$\displaystyle \sum_{n \leq x} \frac{\log n}{n} = \frac{(\log x)^2}{2} + C + \mathcal{O}\left(\frac{\log x}{x}\right)$$

where $C$ is a constant.

This is how I did it.

Bound the sum above and below by the integral $\int \frac{\log t}{t} \, dt$ (the usual way of getting bounds for $p$-series; see the sketch below). This gives a lower bound of the form $\frac{\log([x])^2}{2} + C_1$ and an upper bound of the form $\frac{\log([x])^2}{2} + C_2$.
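To spell out the comparison I mean: $\frac{\log t}{t}$ is decreasing for $t \ge e$, so each term with $n \ge 4$ can be sandwiched between integrals (the first few terms get absorbed into the constants):

$$\int_{n}^{n+1} \frac{\log t}{t} \, dt \;\leq\; \frac{\log n}{n} \;\leq\; \int_{n-1}^{n} \frac{\log t}{t} \, dt, \qquad \int \frac{\log t}{t} \, dt = \frac{(\log t)^2}{2} + \text{const},$$

and summing over $4 \leq n \leq [x]$ gives the two bounds above, with the boundary terms landing in $C_1$ and $C_2$.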

Then, $\log([x]) = \log(x) + \log(\frac{[x]}{x}) = \log(x) + \log(1-\frac{\{x\}}{x}) = \log(x) + \mathcal{O}(\frac{\{x\}}{x}) = \log(x) + \mathcal{O}(\frac{1}{x})$.
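Squaring, the cross term produces exactly the error in the statement:

$$\frac{\log([x])^2}{2} = \frac{\left(\log x + \mathcal{O}\left(\frac{1}{x}\right)\right)^2}{2} = \frac{(\log x)^2}{2} + \mathcal{O}\left(\frac{\log x}{x}\right),$$

since the cross term is $\mathcal{O}\left(\frac{\log x}{x}\right)$ and the squared error term $\mathcal{O}\left(\frac{1}{x^2}\right)$ is even smaller.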

Plugging this into the bounds above, I get the situation in the question I asked.

I did this on a timed exam since I could not think of another way to get to the answer. Other answers and suggestions are welcome as always.


Solution 1:

A misconception needs to be cleared up here. Recall that $f(x) = O(g(x))$ means that there exists a constant $C$ such that $|f(x)| \le C |g(x)|$ in a neighborhood of whatever value of $x$ you're interested in. It is an abuse of notation that we use the equality symbol here at all; $O(g(x))$ is not a function, so it cannot be "equal" to $f(x)$ in the usual sense. See the discussion at the Wikipedia article.

Based on this definition, people sometimes say that $f(x) = h(x) + O(g(x))$ if $f(x) - h(x) = O(g(x))$. This is an additional abuse of notation, but it generally does not create problems and can be extremely convenient. I guess you could write $f(x) < h(x) + O(g(x))$ if you really wanted to, but this is redundant, since the inequality is already built into the definition of big-O notation.
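A concrete instance of this convention:

$$x^2 + 3x + \sin x = x^2 + \mathcal{O}(x) \quad \text{as } x \to \infty,$$

which unpacks to the statement that $|3x + \sin x| \le C|x|$ for all sufficiently large $x$ (any $C \ge 4$ works once $x \ge 1$).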

But I have no idea what $f(x) > h(x) + O(g(x))$ is supposed to mean. If you are trying to say that there is a positive constant $C$ such that $|f(x) - h(x)| \ge C |g(x)|$, the correct way to write this is with big-Omega, not big-O: $f(x) = h(x) + \Omega(g(x))$.

Solution 2:

Since you wanted a different proof approach, you can try Abel's identity (summation by parts), which is quite useful in analytic number theory.

For instance, see this: An estimate for sum of reciprocals of square roots of prime numbers.

To apply this to your question:

Since we know that

$\displaystyle \sum_{1 \le n \le x} \frac1{n} = \log x + \gamma + R(x)$, where $\displaystyle R(x) = \mathcal{O}\left(\frac1{x}\right)$
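Recall the form of Abel's identity (partial summation) being used here: if $A(x) = \sum_{n \le x} a_n$ and $f$ has a continuous derivative on $[1, x]$, then

$$\displaystyle \sum_{1 \le n \le x} a_n f(n) = A(x) f(x) - \int_{1}^{x} A(t) f'(t) \ \text dt.$$

Here take $a_n = \frac1{n}$, so that $A(x)$ is the harmonic sum estimated above, and $f(t) = \log t$, so that $f'(t) = \frac1{t}$.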

Applying Abel's identity with these choices, we have

$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = (\log x+ \gamma + R(x))\log x - \int_{1}^{x} \frac{\log t+ \gamma + R(t)}{t} \ \text dt$

i.e., expanding the integral (the $\gamma \log x$ terms cancel, and $\int_{1}^{x} \frac{\log t}{t} \ \text dt = \frac{\log^2 x}{2}$),

$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = \frac{\log^2 x}{2} + R(x)\log x - \int_{1}^{x} \frac{R(t)}{t} \ \text dt$

Since $\displaystyle R(x) = \mathcal{O}\left(\frac1{x}\right)$ we have that $\displaystyle \int_{1}^{\infty} \frac{R(t)}{t} \ \text dt = \eta$ exists. We also have that $\displaystyle R(x) \log x = \mathcal{O}\left(\frac{\log x}{x}\right)$, which in particular tends to $0$ as $x \to \infty$.
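To handle the remaining integral, split off the convergent tail:

$$\displaystyle \int_{1}^{x} \frac{R(t)}{t} \ \text dt = \eta - \int_{x}^{\infty} \frac{R(t)}{t} \ \text dt = \eta + \mathcal{O}\left(\frac1{x}\right),$$

since $\displaystyle \left|\int_{x}^{\infty} \frac{R(t)}{t} \ \text dt\right| \le \int_{x}^{\infty} \frac{C}{t^2} \ \text dt = \frac{C}{x}$ for the implied constant $C$.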

Thus we have that

$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = \frac{\log^2 x}{2} -\eta + \mathcal{O}\left(\frac{\log x}{x}\right) \ \ \text{as} \ \ x \to \infty$
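As a quick numerical sanity check (not part of the proof), the difference $\sum_{n \le x} \frac{\log n}{n} - \frac{\log^2 x}{2}$ should settle toward the constant $-\eta$, with the error shrinking roughly like $\frac{\log x}{x}$. A minimal sketch in Python:

```python
import math

# Partial sum of log(n)/n for n = 1 .. floor(x).
def partial_sum(x):
    return sum(math.log(n) / n for n in range(1, int(x) + 1))

# The printed differences should approach a constant (the -eta above),
# with the error shrinking roughly like log(x)/x.
for x in (10**2, 10**4, 10**6):
    print(x, partial_sum(x) - math.log(x) ** 2 / 2)
```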

Another useful approach is the Euler-Maclaurin summation formula; a sketch of how it applies here follows.
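In its first-order form (a sketch, with all the convergent pieces absorbed into constants), Euler-Maclaurin gives, for a continuously differentiable $f$,

$$\displaystyle \sum_{1 \le n \le N} f(n) = \int_{1}^{N} f(t) \ \text dt + \frac{f(1) + f(N)}{2} + \int_{1}^{N} \left(\{t\} - \tfrac12\right) f'(t) \ \text dt.$$

With $f(t) = \frac{\log t}{t}$ we have $f'(t) = \frac{1 - \log t}{t^2}$, so the last integral converges as $N \to \infty$ with a tail of size $\mathcal{O}\left(\frac{\log N}{N}\right)$, and $\frac{f(N)}{2} = \mathcal{O}\left(\frac{\log N}{N}\right)$ as well; collecting the constants recovers the stated estimate.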

Solution 3:

I don't think so, because $g(x)$ might be too small. Take $f(x) = 0$, $C_1 = -1.5$, $C_2 = 1.5$, $g(x) = e^{-x}$. Then $S(x) = \sin x$ satisfies the hypotheses but violates the conclusion.
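Spelled out: $-1.5 < \sin x < 1.5$ for all $x$, so both bounds hold, but for any constant $C$ the difference $\sin x - C$ keeps oscillating with amplitude $1$ while $e^{-x} \to 0$, so

$$\sin x \neq C + \mathcal{O}\left(e^{-x}\right) \quad \text{as } x \to \infty.$$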