Can an algorithm be faster than O(1)?

Solution 1:

It is both reasonable and common to assume that any algorithm needs at least a positive constant amount of time for every input. For example, any useful algorithm should answer something (e.g. YES/NO or some number, string, etc.), and it is reasonable to assume that doing so takes at least some constant amount of time. Under this assumption, no algorithm can have a subconstant time complexity.

(In actual computers, this constant minimum amount of time may become smaller by advance in science and technology, but no matter how fast computers become, it is still there as a constant which does not depend on the input size.)

Vhailor comments that a hypothetical algorithm which waits 1/n seconds, where n is the input length, would satisfy the condition. The argument above assumes that no such algorithm exists. To justify this, I would argue that it is unreasonable to assume that a machine can wait 1/n seconds for arbitrarily large n, because that would require faster and faster information processing as n grows.

Sometimes you may hear of “subconstant-time operations,” but before it freaks you out, check what it really means. Usually it means that the required time divided by some other parameter (for example, the time per element or per bit) is subconstant, not that the time itself is subconstant.
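
For a concrete illustration (a sketch of my own, not something from the answer above): if n bits are packed into w-bit machine words, counting the set bits takes about n/w word operations, so the cost *per bit* is O(1/w), which people may call “subconstant per bit,” even though each individual operation still takes at least constant time.

```python
# Rough sketch of "subconstant time per bit" (my own example, not from the answer):
# the total time divided by n is subconstant, not any single step.

def count_bits_packed(words):
    """Count set bits across a list of machine words.

    n bits packed into w-bit words are processed with n/w word operations,
    so the cost per bit is O(1/w), while the total running time is,
    of course, still at least a constant.
    """
    total = 0
    for word in words:
        total += word.bit_count()  # one constant-time word operation (Python 3.10+)
    return total
```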

Solution 2:

Others have already argued that in the context of sequential algorithms and classical models of computation (e.g., Turing machines or the RAM model), a running time of $0$ makes little sense. However, other models exist.


In the context of distributed algorithms, we usually define that the running time of an algorithm equals the number of communication rounds.

With this definition, we have not only non-trivial graph algorithms with running time $O(1)$, but also some graph algorithms with running time $0$.

As a simple example, the randomised 2-approximation algorithm for maximum cut has running time $0$ in this model: there is no need to communicate with the neighbours; each node simply flips a coin, simultaneously and in parallel, and produces its own local output based on that coin flip.
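
To make this concrete, here is a small simulation sketch of my own (the function name is hypothetical, not from the original answer). No messages are exchanged: every node decides its side by a local coin flip, each edge is cut with probability $1/2$, so the expected cut size is $|E|/2 \geq \mathrm{OPT}/2$.

```python
import random

def zero_round_max_cut(nodes, edges, rng=None):
    """Zero communication rounds: each node picks a side by a local coin flip.

    Each edge is cut with probability 1/2, so the expected cut size is
    |E|/2 >= OPT/2 -- a randomised 2-approximation without any messages.
    """
    rng = rng or random.Random(0)
    side = {v: rng.random() < 0.5 for v in nodes}            # purely local choice
    cut = [(u, v) for (u, v) in edges if side[u] != side[v]]  # edges across the cut
    return side, cut

# Example: a 4-cycle, where the optimum cuts all 4 edges.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(zero_round_max_cut(nodes, edges))
```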


So it is not just that one could, hypothetically speaking, define a model of computation in which a running time of $0$ makes sense by abusing some contrived definition of "time".

We do have such models; they are actively studied, and the definition of "time" is natural for these purposes. In particular, with precisely this definition, time (the number of communication rounds) and distance (the shortest-path distance in the graph) become more or less equivalent concepts.

Solution 3:

The Big-O of an algorithm is the Big-O of the number of steps the corresponding Turing machine takes before halting. So consider the trivial Turing machine whose initial state is in the set of final states:

$ q_0 \in F $

The number of steps it takes before halting is $0$.
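
To make the step count concrete, here is a minimal simulator sketch (my own code and naming, not part of the answer): because the initial state is already accepting, the main loop never executes and the machine halts after zero steps.

```python
def run_tm(transitions, initial_state, final_states, tape):
    """Run a deterministic Turing machine and count steps until it halts."""
    state, head, steps = initial_state, 0, 0
    while state not in final_states:           # never entered if q0 is final
        state, write, move = transitions[(state, tape.get(head, "_"))]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return steps

# The trivial machine: the initial state q0 is already a final state.
print(run_tm(transitions={}, initial_state="q0", final_states={"q0"}, tape={}))  # -> 0
```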

Now consider the formal definition of Big-O:

$f(x) = O(g(x))$ if and only if there exist $M > 0$ and $x_0$ such that $|f(x)| \leq M|g(x)|$ for all $x > x_0$.

The Big-O of the trivial Turing machine is then $O(0)$, which is "faster" than $O(1)$ in some sense.
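
As a quick check against the definition (my own verification, not part of the original answer): take $f(x) = 0$, the step count of the trivial machine, and $g(x) = 0$. Then for any $M > 0$ and any $x_0$,

$$ |f(x)| = 0 \leq M \cdot 0 = M|g(x)| \quad \text{for all } x > x_0, $$

so $f(x) = O(0)$; and since $0 \leq M \cdot 1$ as well, every $O(0)$ machine is trivially also $O(1)$.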