How to prove $\log n < n$?

Sorry if this is a silly question, but most books claim $\log n < n$ for $n \geq 1$ without providing any proof, saying it's too obvious. Could someone give me a rigorous proof? Is there some trick or method I have forgotten about?


If we let $f(n) = \log n$ and $g(n) = n$, then $f'(n) = \frac{1}{n}$ and $g'(n)=1$. Since $f(1) = \log 1 = 0 < 1 = g(1)$ and $f'(n) \leq g'(n)$ for $n \geq 1$, we must have $f(n) < g(n)$ for all $n \geq 1$.

The idea in the last sentence is sometimes called the racetrack principle: if $f(n)$ and $g(n)$ denote the positions of horses $f$ and $g$ at time $n$, horse $f$ starts behind horse $g$ (i.e., $f(1) < g(1)$), and at any given time horse $f$ is never faster than horse $g$ (i.e., $f'(n) \leq g'(n)$), then horse $f$ will always stay behind horse $g$ (i.e., $f(n) < g(n)$ for $n \geq 1$).
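
As a quick numerical sanity check (an illustration only, not a proof), here is a short Python sketch that tabulates the two "horses" and their speeds at a few arbitrarily chosen times:

```python
import math

# Racetrack picture for f(n) = log n and g(n) = n:
# f starts behind g at n = 1, and its speed 1/n never exceeds g's speed 1,
# so f stays behind g.
for n in [1, 2, 5, 10, 100, 1000]:
    f, g = math.log(n), n          # positions of the two horses
    fp, gp = 1 / n, 1              # their speeds (derivatives)
    print(f"n={n:>4}  log n={f:8.4f}  n={g:>4}  f'={fp:.4f} <= g'={gp}  log n < n: {f < g}")
```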


OP asks for a proof of the racetrack principle.

The racetrack principle: If $f(a) < g(a)$ and $f'(n) \leq g'(n)$ for $n \geq a$, then $f(n) < g(n)$ for $n \geq a$.

Proof: Let $h(n) = f(n) - g(n)$. Then $h'(n) = f'(n) - g'(n) \leq 0$ for $n \geq a$. The claim holds at $n = a$ by assumption, so take $n > a$. The mean value theorem gives a point $x \in (a,n)$ such that $$h'(x) = \frac{h(n) - h(a)}{n-a}.$$ Since $h'(x) \leq 0$ and $n - a > 0$, we get $h(n) - h(a) \leq 0$, which means $f(n) - g(n) \leq f(a) - g(a) < 0$, and so $f(n) < g(n)$ for all $n \geq a$.
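
For the concrete pair in the question, the ingredients of this proof can be spot-checked symbolically; here is a minimal sketch using SymPy (an illustration only, with $f(x)=\log x$, $g(x)=x$, and $a=1$):

```python
import sympy as sp

# Racetrack setup for f(x) = log x and g(x) = x, i.e. h(x) = log x - x.
x = sp.symbols('x', positive=True)
h = sp.log(x) - x

print("h(1)  =", h.subs(x, 1))     # -1, so the claim holds at the start a = 1
h_prime = sp.diff(h, x)
print("h'(x) =", h_prime)          # equals 1/x - 1, which is <= 0 whenever x >= 1

# Spot-check the sign of h'(x) on [1, oo): every value should be <= 0.
for val in [1, 2, 10, 1000]:
    print(f"h'({val}) = {float(h_prime.subs(x, val)):.6f}")
```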


Simply look at the function $f(t)=t-\log t$. Its derivative is $f'(t)=1-\frac{1}{t}\ge 0$ for $t\ge 1$, so $f$ is increasing on $[1,\infty)$, and therefore $f(n)\ge f(1)=1>0$ for every $n\ge 1$, i.e. $\log n < n$.


For $n\geq 1$, $\log n < n$ iff $$n < e^n = 1 + n + \frac{n^2}{2!} + \frac{n^3}{3!} + \cdots,$$ which holds because every term of the series is positive and the partial sum $1+n$ already exceeds $n$.


How rigorous do you need it to be? If it's rigorous enough in your context, prove that $\log(1) < 1$ (which shouldn't be hard!), and then show that the derivative of $\log(x)$ is less than or equal to the derivative of $x$ for all $x \geq 1$.


In your comments you seem to ask about this in the context of big-O notation -- the concept frequently used in computer science when analyzing an algorithm's running time (or its consumption of a resource such as RAM). In big-O notation, the following ordering appears (with constant time as the best and exponential time as the worst); the sketch after the list illustrates how these classes compare numerically:

  • $O(1)$ - constant time
  • $O(\log N)$ - logarithmic
  • $O(N)$ - linear
  • $O(N \log N)$ - loglinear
  • $O(N^2)$ - quadratic (followed by other polynomial times, e.g., $O(N^3)$ - cubic, etc.)
  • $O(2^N)$ - exponential time
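
Here is a short, illustrative Python sketch that tabulates these growth rates side by side (the sample sizes are chosen arbitrarily; only the relative growth matters):

```python
import math

# Tabulate the common complexity classes for a few input sizes N.
# The absolute numbers are meaningless; only the relative growth matters.
# 2^N is shown as inf beyond N = 20 just to keep the table readable.
print(f"{'N':>6} {'1':>4} {'log N':>8} {'N':>8} {'N log N':>10} {'N^2':>10} {'2^N':>10}")
for N in [1, 10, 20, 100, 1000]:
    two_N = 2 ** N if N <= 20 else float('inf')
    print(f"{N:>6} {1:>4} {math.log2(N):>8.2f} {N:>8} "
          f"{N * math.log2(N):>10.2f} {N**2:>10} {two_N:>10}")
```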

You aren't going to come up with a mathematical proof for this ordering that shows, for every $N$, that an $O(\log N)$ function is smaller than every $O(N)$ function, because it simply isn't true. To demonstrate with a counterexample, let $f(N) = 10^{100} \log N$ (an $O(\log N)$ algorithm; the constant multiplier is ignored by the notation), and let $g(N) = N$ (an $O(N)$ algorithm). For every $N \lt 10^{98}$, the logarithmic function $f$ is larger (and hence slower, i.e. less optimal) than the linear function $g$, the opposite of what you would usually expect.
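
To get a feel for how far out the crossover actually is with this constant, here is a short Python sketch (the natural logarithm is an arbitrary choice here, since big-O notation is indifferent to the base):

```python
import math

# Locate approximately where f(N) = 10**100 * log N stops exceeding g(N) = N,
# by bisecting on the sign of the difference f - g.
def diff(N):
    return 1e100 * math.log(N) - N

lo, hi = 1e100, 1e110            # diff(lo) > 0, diff(hi) < 0
for _ in range(200):
    mid = (lo + hi) / 2
    if diff(mid) > 0:
        lo = mid
    else:
        hi = mid

print(f"crossover near N ~ {lo:.3e}")   # roughly 2.4e102
```

So with this constant, $f$ actually stays above $g$ not just for $N < 10^{98}$ but all the way up to roughly $N \approx 2.4 \times 10^{102}$.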

The point of big-O notation is that what usually matters most is how a function scales for large $N$. Comparing any logarithmic function with any linear function, the logarithmic one will be smaller for all values of $N$ larger than some finite threshold. You would say that an $O(\log N)$ function grows asymptotically slower than an $O(N)$ function.

Note that in many cases, such as comparing $f(N) = N$ and $g(N) = \log N$ directly, the inequality holds over the entire domain of $N$ (that is, for all $N \gt 0$).

The power of big-O notation is that if you know you have an $O(N)$ function and an $O(\log N)$ function that both take 2 seconds when $N = 100$, then when you need to process $N = 100000$ cases, the $O(N)$ function will take roughly 2000 seconds, but the $O(\log N)$ function will take only about 5 seconds.
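
That back-of-the-envelope extrapolation can be written out in a few lines of Python (a rough model only: it simply scales the measured 2 seconds by the ratio of the growth functions, reproducing the estimates above):

```python
import math

# Extrapolate a measured running time from N0 = 100 to N1 = 100_000,
# assuming the time scales like the complexity class (a rough model only).
N0, N1, t0 = 100, 100_000, 2.0   # measured: 2 seconds at N = 100

linear_estimate = t0 * (N1 / N0)                       # O(N):     2 * 1000 = 2000 s
log_estimate = t0 * (math.log(N1) / math.log(N0))      # O(log N): 2 * 2.5  = 5 s

print(f"O(N)     estimate at N={N1}: {linear_estimate:.0f} s")
print(f"O(log N) estimate at N={N1}: {log_estimate:.1f} s")
```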