Using Limits to Determine Big-O, Big-Omega, and Big-Theta

I am trying to get a concrete answer on using limits to determine whether, for two functions $f(n)$ and $g(n)$, we have $f(n) = O(g(n))$, $f(n) = \Omega(g(n))$, or $f(n) = \Theta(g(n))$. I have looked at my book, my lecture notes, and have even done some online research, but I still haven't found anything that gives me a solid yes or no. From what I understand about each of the three notations, I have come up with three rules:

$\displaystyle f(n) = O(g(n)) \implies \lim_{n \to \infty}\;\frac{f(n)}{g(n)} = 0$

$\displaystyle f(n) = \Omega(g(n)) \implies \lim_{n \to \infty}\;\frac{f(n)}{g(n)} = \infty$

$\displaystyle f(n) = \Theta(g(n)) \implies 0 < \; \lim_{n \to \infty} \; \frac{f(n)}{g(n)} \;< \infty$

The reason I am trying to get such a definite answer on this is that for a HW assignment we have to briefly explain why $\;f(n) = O(g(n))$, $\;f(n) = \Omega(g(n)),$ or $\;f(n) = \Theta(g(n))$. If I can just use those three rules above, my explanations will be short, sweet, and to the point.

Any help or advice would be greatly appreciated.


The big/little O/Ω/Θ notation is not defined or, indeed, properly definable in terms of limits. In particular, it's possible e.g. that $f(n) = \Theta(g(n))$ even though $f(n) \mathbin/ g(n)$ does not converge to a limit.

(For a simple counterexample, pick any function $g(n) > 0$ and let $f(n) = (2 + (-1)^n)\, g(n)$. In other words, $f(n) = g(n)$ for odd $n$ and $f(n) = 3g(n)$ for even $n$.)
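To see the counterexample in action, here is a minimal Python sketch (my own illustration, with the arbitrary choice $g(n) = n^2$): the ratio $f(n)/g(n)$ oscillates between $1$ and $3$ forever and so has no limit, yet $1 \cdot g(n) \le f(n) \le 3 \cdot g(n)$ for all $n$, which is exactly what $f(n) = \Theta(g(n))$ requires.

```python
# Counterexample with g(n) = n^2 (any positive g would do).
def g(n):
    return n * n

def f(n):
    # f(n) = (2 + (-1)^n) * g(n): equals g(n) for odd n, 3*g(n) for even n.
    return (2 + (-1) ** n) * g(n)

for n in range(1, 9):
    print(n, f(n) / g(n))  # prints 1.0, 3.0, 1.0, 3.0, ... -- no limit,
                           # yet 1*g(n) <= f(n) <= 3*g(n), so f = Theta(g).
```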

That said, when the ratio $f(n) \mathbin/ g(n)$ does converge (or tend to infinity), we can determine the asymptotic relationship between $f(n)$ and $g(n)$ from the value of the limit. In particular, as noted in the handy Asymptotics Cheat Sheet by Tom Leighton and Ronitt Rubinfeld, as linked by bigworld12 in the comments above,

The definitions of the various asymptotic notations are closely related to the definition of a limit. As a result, $\lim_{n→∞} f(n) \mathbin/ g(n)$ reveals a lot about the asymptotic relationship between $f$ and $g$, provided the limit exists. The table below translates facts about the limit of $f \mathbin/ g$ into facts about the asymptotic relationship between $f$ and $g$. $$\begin{array}{lllllll} \lim_{n→∞} f(n)\mathbin/g(n) \ne 0,∞ &\implies& f = Θ(g) && \lim_{n→∞} f(n)\mathbin/g(n) = 1 &\implies& f ∼ g \\ \lim_{n→∞} f(n)\mathbin/g(n) \ne ∞ &\implies& f = O(g) && \lim_{n→∞} f(n)\mathbin/g(n) = 0 &\implies& f = o(g) \\ \lim_{n→∞} f(n)\mathbin/g(n) \ne 0 &\implies& f = Ω(g) && \lim_{n→∞} f(n)\mathbin/g(n) = ∞ &\implies& f = ω(g) \\ \end{array}$$
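As a quick worked application of the first row of the table (my own example, not from the cheat sheet): take $f(n) = 3n^2 + 4n$ and $g(n) = n^2$. The ratio tends to $3$, which is neither $0$ nor $\infty$, so $f = \Theta(g)$. A numerical sanity check in Python:

```python
# Ratio f(n)/g(n) for f(n) = 3n^2 + 4n, g(n) = n^2 tends to 3,
# so by the first row of the table, f = Theta(g) (hence also O and Omega).
for n in (10, 100, 1000, 10_000):
    print(n, (3 * n * n + 4 * n) / (n * n))  # 3.4, 3.04, 3.004, 3.0004 -> 3
```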


Ilmari's answer is roughly correct, but I want to say that limits are actually the wrong way of thinking about asymptotic notation and expansions, not only because they cannot always be used (as Did and Ilmari already pointed out), but also because they fail to capture the true nature of asymptotic behaviour even when they can be used.

Note that, to be precise, one always has to specify the limiting variable with respect to which the asymptotic behaviour is taken. In computer science it is usually as some size parameter tends to infinity. In mathematics it is often as some threshold parameter tends to zero, or as some point tends to another point.

$f(n)∈O(g(n))$ as $n→∞$ iff for some $c∈\mathbb{R}$ we have $|f(n)| ≤ c·|g(n)|$ as $n→∞$.

$f(n)∈Ω(g(n))$ as $n→∞$ iff for some $c∈\mathbb{R}_{>0}$ we have $|f(n)| ≥ c·|g(n)|$ as $n→∞$.

$f(n)∈o(g(n))$ as $n→∞$ iff for every $c∈\mathbb{R}_{>0}$ we have $|f(n)| ≤ c·|g(n)|$ as $n→∞$.

$f(n)∈ω(g(n))$ as $n→∞$ iff for every $c∈\mathbb{R}$ we have $|f(n)| ≥ c·|g(n)|$ as $n→∞$.

$f(n)∈Θ(g(n))$ as $n→∞$ iff both $f(n)∈O(g(n))$ and $f(n)∈Ω(g(n))$ as $n→∞$.

To be fully precise one should also specify the type of the limiting variable ($n$ in this case), though it is usually omitted if clear from the context (and "$n$" is usually reserved for integers).

Likewise for statements of asymptotic behaviour "as $x→0$" instead of "as $n→∞$". Note that these definitions work even when $f,g$ are complex-valued.

To make the above definitions clear, a statement of the form "$A(n)≤B(n)$ as $n→∞$" with $n∈\mathbb{N}$ means "for some $m∈\mathbb{N}$ we have ( $A(n)≤B(n)$ for every $n∈\mathbb{N}_{>m}$ )".

For example, $3n^2+4n ∈ O(n^2)$ as $n→∞$ because $|3n^2+4n| ≤ 7·|n^2|$ for every $n∈\mathbb{N}_{>1}$ (indeed $4n ≤ 4n^2$ once $n ≥ 1$). And this is how you should think of asymptotic behaviour: it merely hides a constant (in this case $7$) in the inequality between the absolute values.
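If you want a quick empirical sanity check of such an inequality, a finite-range test like the following sketch can catch mistakes, though of course it proves nothing about all $n$:

```python
# Finite-range sanity check (not a proof!) that |3n^2 + 4n| <= 7*|n^2|
# for all tested n > 1, i.e. the constant c = 7 hidden by the O-notation.
assert all(abs(3 * n * n + 4 * n) <= 7 * abs(n * n) for n in range(2, 10_000))
print("inequality holds for all 2 <= n < 10000")
```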

Finally, many useful asymptotic results hold where the limit does not exist, and it is often far easier to use the original definition even when the limit does exist. Just for example, bubble-sort always takes $O(n^2)$ comparisons, because there are at most $n$ outer-loop iterations, each of which runs the inner loop for at most $n$ iterations. On a sorted list, bubble-sort (with the standard early-exit check) takes only $O(n)$ comparisons, so the ratio of comparisons to $n^2$ has no single limit over all inputs. Even if you restrict attention to the worst case, it is harder (and silly) to compute the limit of the ratio of the worst-case number of comparisons to $n^2$ than to just prove the bound directly.
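For concreteness, here is a sketch of the bubble-sort variant that argument assumes: the standard in-place version with a comparison counter and the usual early-exit flag (without the flag, even a sorted list would cost $\Theta(n^2)$ comparisons):

```python
def bubble_sort(a):
    """Sorts a in place; returns the number of comparisons made."""
    comparisons = 0
    n = len(a)
    for i in range(n - 1):          # at most n - 1 outer iterations
        swapped = False
        for j in range(n - 1 - i):  # at most n - 1 inner iterations each
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:             # early exit: list is already sorted
            break
    return comparisons

print(bubble_sort(list(range(100, 0, -1))))  # worst case: 4950 ~ n^2/2
print(bubble_sort(list(range(100))))         # sorted input: 99 = n - 1, O(n)
```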

Similarly, many numerical algorithms such as Newton-Raphson approximation guarantee a certain rate of convergence, that is, an upper bound on the error at each step, but by no means guarantee that any ratio of errors has a limit. After all, the algorithm may converge faster than guaranteed!
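A concrete instance (my own sketch, using Newton's method for $\sqrt{2}$): the standard analysis guarantees quadratic convergence, i.e. an upper bound of the form $e_{k+1} \le C\,e_k^2$ on successive errors; the actual errors may shrink faster, so nothing forces any ratio of errors to converge to a limit.

```python
# Newton's method for sqrt(2): x_{k+1} = (x_k + 2/x_k) / 2.
# The guarantee is an upper bound (quadratic convergence), not a limit:
# the true error may shrink faster than the bound at any given step.
import math

x = 1.0
for k in range(6):
    print(k, x, abs(x - math.sqrt(2)))  # error roughly squares each step
    x = (x + 2.0 / x) / 2.0
```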