Is this technically an O(1) algorithm for "Hello World"?

Would this be classified as an O(1) algorithm for "Hello, World!"?

public class Hello1
{
   public static void Main()
   {
      DateTime TwentyYearsLater = new DateTime(2035,01,01);
      while ( DateTime.Now < TwentyYearsLater )
      { 
          System.Console.WriteLine("It's still not time to print the hello ...");
      }
      System.Console.WriteLine("Hello, World!");
   }
}

I'm thinking of using the

DateTime TwentyYearsLater = new DateTime(2035,01,01);
while ( DateTime.Now < TwentyYearsLater )
{ 
   // ... 
}

snippet of code as a busy loop to put in as a joke whenever someone asks for an algorithm of a certain complexity. Would this be correct?


Big O notation in this context is being used to describe a relationship between the size of the input of a function and the number of operations that need to be performed to compute the result for that input.

Your operation has no input that the output can be related to, so using Big O notation is nonsensical. The time that the operation takes is independent of the inputs of the operation (of which there are none). Since there is no relationship between the input and the number of operations performed, you can't use Big O to describe that non-existent relationship.
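
To make the contrast concrete, here is a minimal sketch (the method names are made up for illustration) of a routine whose operation count does depend on its input, next to the question's version, which has no input for the running time to be a function of:

    using System;

    public class Complexity
    {
        // O(n): the number of WriteLine calls grows linearly with the input n.
        public static void PrintHelloNTimes(int n)
        {
            for (int i = 0; i < n; i++)
            {
                Console.WriteLine("Hello, World!");
            }
        }

        // The joke version: no parameters at all, so there is no "n" to relate
        // the running time to. The wall-clock delay depends only on the current
        // date, not on any argument.
        public static void PrintHelloEventually()
        {
            DateTime twentyYearsLater = new DateTime(2035, 01, 01);
            while (DateTime.Now < twentyYearsLater)
            {
                // busy-wait
            }
            Console.WriteLine("Hello, World!");
        }
    }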


Big-O notation roughly means: 'given an algorithm operating on an amount of work, N, how much computation time, as a function of N, does the algorithm take?' For example, sorting an array of size N can take O(N^2), O(N log N), and so on.

This has no amount of input data to act on. So it's not O(anything).
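
As a rough illustration of the sorting example above (a sketch of my own, not code from the question), a simple selection sort performs a number of comparisons that grows with the square of the array's length, which is exactly the relationship the O(N^2) classification describes:

    using System;

    public class SortCost
    {
        // Selection sort: for an array of length n this performs roughly
        // n*(n-1)/2 comparisons, so the work grows as O(n^2) in the input size.
        public static long SelectionSort(int[] values)
        {
            long comparisons = 0;
            for (int i = 0; i < values.Length - 1; i++)
            {
                int min = i;
                for (int j = i + 1; j < values.Length; j++)
                {
                    comparisons++;
                    if (values[j] < values[min]) min = j;
                }
                (values[i], values[min]) = (values[min], values[i]);
            }
            return comparisons;
        }

        public static void Main()
        {
            var random = new Random(42);
            foreach (int n in new[] { 100, 200, 400 })
            {
                int[] data = new int[n];
                for (int i = 0; i < n; i++) data[i] = random.Next();
                // Doubling n roughly quadruples the comparison count.
                Console.WriteLine($"n = {n}: {SelectionSort(data)} comparisons");
            }
        }
    }

The busy-wait loop in the question has no such measurable input, so there is nothing for the operation count to grow (or stay constant) with respect to.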

Even worse: this isn't technically an algorithm. An algorithm is a method for computing the value of a mathematical function, and a mathematical function is a mapping from an input to an output. Since this takes no input and returns nothing, it's not a function in the mathematical sense. From Wikipedia:

An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.

What this is, technically, is a control system. From Wikipedia:

A control system is a device, or set of devices, that manages, commands, directs or regulates the behavior of other devices or systems.

For people wanting a more in-depth answer about the difference between mathematical functions and algorithms, and about the more powerful things computers can do with side effects such as console output, displaying graphics, or controlling robots, have a read of this paper on the Strong Church-Turing Hypothesis.

Abstract

The classical view of computing positions computation as a closed-box transformation of inputs (rational numbers or finite strings) to outputs. According to the interactive view of computing, computation is an ongoing interactive process rather than a function-based transformation of an input to an output. Specifically, communication with the outside world happens during the computation, not before or after it. This approach radically changes our understanding of what is computation and how it is modeled.

The acceptance of interaction as a new paradigm is hindered by the Strong Church-Turing Thesis (SCT), the widespread belief that Turing Machines (TMs) capture all computation, so models of computation more expressive than TMs are impossible. In this paper, we show that SCT reinterprets the original Church-Turing Thesis (CTT) in a way that Turing never intended; its commonly assumed equivalence to the original is a myth. We identify and analyze the historical reasons for the widespread belief in SCT. Only by accepting that it is false can we begin to adopt interaction as an alternative paradigm of computation.