Natural class of functions whose $\mathcal O$-sets are linearly ordered
In doing complexity analysis of algorithms, we often restrict attention to asymptotic complexity. A function $g: \mathbb N \to \mathbb R_+$ is asymptotically bounded by $f: \mathbb N \to \mathbb R_+$ if there are constants $c \in \mathbb R_+$ and $N \in \mathbb N$ such that for each $n \geq N$, $$ g(n) \leq c\cdot f(n). $$ The set of all such $g$ is denoted $\mathcal O(f)$. By letting $g \preceq f$ whenever $\mathcal O(g) \subseteq \mathcal O(f)$, we obtain a preordering on the set of functions $\mathbb N \to \mathbb R_+$.
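For a concrete instance of the definition: $g(n) = 2n + 7$ lies in $\mathcal O(n)$, since taking $c = 3$ and $N = 7$ gives $2n + 7 \leq 2n + n = 3n$ for all $n \geq 7$; conversely $n \leq 1 \cdot (2n + 7)$ for all $n$, so $\mathcal O(n) = \mathcal O(2n+7)$ and the two functions are equivalent under $\preceq$.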
Examples of functions that typically come up as complexities of algorithms are: $$\tag{*} \log(\log(n)), \log(n),\log(n)^2, \sqrt{n}, n, n\log(n),n\sqrt{n},n^3, 2^n, 3^n, 2^{2^n}. $$ Interestingly, these functions are all linearly ordered by $\preceq$. Note that it is not hard to find pairs of functions that are incomparable under $\preceq$, such as $|\sin(n)|$ and $|\cos(n)|$, and with some fiddling one could also construct incomparable pairs of strictly increasing (or even convex) functions. However, such examples all seem artificial, and in particular I would be surprised if they ever came up as complexities of algorithms.
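As a quick illustration of why $|\sin(n)|$ and $|\cos(n)|$ are incomparable: since $n \bmod 2\pi$ comes arbitrarily close both to multiples of $\pi$ and to odd multiples of $\pi/2$, each of the two functions gets arbitrarily close to $0$ at integers where the other is close to $1$. The following Python sketch (a numerical check only, not a proof) hunts for such integers:

```python
import math

# Numerical illustration (not a proof): find integers n where one of
# |sin n|, |cos n| is tiny while the other is close to 1.  Since this
# happens for arbitrarily large n (n mod 2*pi is equidistributed),
# neither function can be bounded by a constant multiple of the other.
for n in range(1, 100_000):
    s, c = abs(math.sin(n)), abs(math.cos(n))
    if s < 1e-4:  # n is very close to a multiple of pi
        print(f"n = {n}: |sin n| = {s:.2e}, |cos n| = {c:.6f}")
    if c < 1e-4:  # n is very close to an odd multiple of pi/2
        print(f"n = {n}: |sin n| = {s:.6f}, |cos n| = {c:.2e}")
```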
Can we explicitly give a nice class of functions that includes at least all the functions listed in (*), and for which $\preceq$ is a linear ordering? Is this class also closed under some operations, like multiplication, exponentiation by a constant, or taking logarithms? And, perhaps, could we even justify why any algorithm is likely to have a complexity in this class?
Solution 1:
The set of functions built from constants and the variable using field operations, $\exp$, and $\log$ is linearly ordered by big-O. They are called exp-log functions. Take a look at Hardy's "Orders of Infinity", page 24. For more about this, search for "Hardy fields".
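As a practical aside (an illustration, not part of the claim above): since the ratio of two eventually positive exp-log functions always tends to a limit in $[0,\infty]$, the ordering can be decided by a limit computation. SymPy's `limit` is based on the Gruntz algorithm, which is designed for exactly this exp-log setting; a minimal sketch:

```python
# A minimal sketch using SymPy; assumes symbolic limits are available.
from sympy import symbols, limit, log, sqrt, oo

n = symbols('n', positive=True)

def compare(f, g):
    """Decide the asymptotic relation of two eventually positive
    exp-log functions from the limit of their ratio."""
    L = limit(f / g, n, oo)
    if L == 0:
        return "f strictly below g"
    if L == oo:
        return "g strictly below f"
    return "O(f) = O(g)"        # ratio tends to a positive constant

print(compare(n * log(n), n * sqrt(n)))   # n log n grows slower than n^(3/2)
print(compare(2**n, 3**n))                # 2^n grows slower than 3^n
print(compare(log(n)**2, log(log(n))))    # log(log n) grows slower than log(n)^2
```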
This class covers most "typical" complexities. However, the world of complexity is much richer than any simple description that can be written down. Complexity can be:
- Extremely slow-growing: There are algorithms whose complexity involves the inverse Ackermann function (union-find) or $\log^{\ast} n$ (distributed 3-coloring of a tree); a minimal $\log^{\ast}$ sketch follows this list. These functions grow more slowly than $\log(\log(\dots\log(n)\dots))$ for any fixed number of logs.
- Extremely fast-growing: Some problems cannot be solved in time $\exp(\exp(\dots \exp(n) \dots))$ for any fixed number of exponentials; such problems are called nonelementary. The Ackermann function grows faster still, and according to this answer there are natural problems with Ackermannian complexity.
- Arbitrary: For most sensible functions $F$, it is possible to have an algorithm with complexity $F(n)$: for instance, one that takes a Turing machine as input and simulates it for $F(n)$ steps. In this way you can obtain an algorithm with complexity $n^{5+(-1)^n}$, and you can also define (artificially) a problem that can be solved in exactly this time.
- Ill-defined: Blum's speedup theorem shows that there exists a computable function $f$ such that, whenever $f$ can be computed in time $X$, it can also be computed in time $\log X$; there is no "best" algorithm computing $f$. This is not "natural", since the construction is a diagonalization against all possible algorithms, but it shows how weird things can get.
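For concreteness, here is a minimal sketch (in Python, purely as an illustration) of the iterated logarithm $\log^{\ast} n$ mentioned in the first bullet: the number of times $\log$ must be applied before the value drops to $1$ or below.

```python
import math

def log_star(x):
    """Iterated logarithm (base 2): how many times log2 must be
    applied to x before the result is at most 1."""
    count = 0
    while x > 1.0:
        x = math.log2(x)
        count += 1
    return count

# Grows absurdly slowly: log*(2^65536) is only 5.
print([log_star(n) for n in (2, 16, 65536, 10**80)])  # -> [1, 3, 4, 5]
```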