How to find time complexity of an algorithm?

The Question

How to find time complexity of an algorithm?

What have I done before posting a question on SO?

I have gone through this, this and many other links

But nowhere was I able to find a clear and straightforward explanation of how to calculate time complexity.

What do I know ?

Say for a code as simple as the one below:

char h = 'y'; // This will be executed 1 time
int abc = 0; // This will be executed 1 time

Say for a loop like the one below:

for (int i = 0; i < N; i++) {
    Console.Write("Hello World!");
}

int i=0; This will be executed only once. The time is actually spent on the assignment i = 0, not on the declaration itself.

i < N; This will be executed N+1 times (the final check is the one that fails and ends the loop)

i++; This will be executed N times

So the number of operations required by this loop are

{1+(N+1)+N} = 2N+2

Note: this may still be wrong, as I am not confident about my understanding of calculating time complexity
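(Counting the Console.Write in the loop body as well adds another N operations, giving {1+(N+1)+N+N} = 3N+2; as the answers below show, both counts simplify to the same O(N).)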

What I want to know ?

OK, so I think I know these small, basic calculations, but in most cases I have seen the time complexity given as

O(N), O(n²), O(log n), O(n!), ... and many others.

Can anyone help me understand how one calculates the time complexity of an algorithm? I am sure there are plenty of newbies like me who want to know this.


Solution 1:

How to find time complexity of an algorithm

You add up how many machine instructions it will execute as a function of the size of its input, then simplify the expression to the largest term (the one that dominates when N is very large) and drop any constant factor.

For example, let's see how we simplify 2N + 2 machine instructions to describe this as just O(N).

Why do we remove the two 2s?

We are interested in the performance of the algorithm as N becomes large.

Consider the two terms 2N and 2.

What is the relative influence of these two terms as N becomes large? Suppose N is a million.

Then the first term is 2 million and the second term is only 2.

For this reason, we drop all but the largest terms for large N.

So, now we have gone from 2N + 2 to 2N.

Traditionally, we are only interested in performance up to constant factors.

This means that we don't really care if there is some constant multiple of difference in performance when N is large. The unit of 2N is not well-defined in the first place anyway. So we can multiply or divide by a constant factor to get to the simplest expression.

So 2N becomes just N.
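To make this concrete, here is a rough sketch (my own example, not part of the original answer) applying the same count-then-simplify process to a nested loop, assuming N is defined as in the question:

int count = 0;                      // 1 operation
for (int i = 0; i < N; i++)         // 1 + (N+1) + N = 2N + 2 operations
{
    for (int j = 0; j < N; j++)     // 2N + 2 operations, repeated N times
    {
        count++;                    // executes N * N times
    }
}
// Total: 1 + (2N + 2) + N * (2N + 2) + N * N = 3N^2 + 4N + 3.
// Drop the smaller terms, then the constant factor of 3N^2: O(N^2).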

Solution 2:

This is an excellent article : http://www.daniweb.com/software-development/computer-science/threads/13488/time-complexity-of-algorithm

The answer below is copied from the link above (in case the excellent link goes dead).

The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:

statement;

Is constant. The running time of the statement will not change in relation to N.

for ( i = 0; i < N; i++ )
     statement;

Is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.

for ( i = 0; i < N; i++ ) {
  for ( j = 0; j < N; j++ )
    statement;
}

Is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases by a factor of four.

while ( low <= high ) {
  mid = ( low + high ) / 2;
  if ( target < list[mid] )
    high = mid - 1;
  else if ( target > list[mid] )
    low = mid + 1;
  else break;
}

Is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.
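To see the halving concretely: for N = 16 the interval shrinks at most 16 → 8 → 4 → 2 → 1, i.e. 4 halvings, and log2(16) = 4. Doubling N to 32 adds only one more iteration.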

void quicksort ( int list[], int left, int right )
{
  if ( left >= right )
    return; /* base case: a sublist of 0 or 1 elements is already sorted */

  int pivot = partition ( list, left, right );
  quicksort ( list, left, pivot - 1 );
  quicksort ( list, pivot + 1, right );
}

Is N * log ( N ). The running time consists of a linear amount of work repeated over a logarithmic number of recursion levels, thus the algorithm is a combination of linear and logarithmic.
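Another way to see it: if partition splits the list roughly in half on average, the running time satisfies the recurrence T(N) = 2 * T(N/2) + N, where the + N is the linear partitioning work at each level; this works out to T(N) being proportional to N * log(N).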

In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they're not nearly as common. Big O notation is described as O ( <type> ) where <type> is the measure. The quicksort algorithm would be described as O ( N * log ( N ) ).

Note that none of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course. ;)

Solution 3:

Taken from here - Introduction to Time Complexity of an Algorithm

1. Introduction

In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.

2. Big O notation

The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity.

For example, if the time required by an algorithm on all inputs of size n is at most 5n³ + 3n, the asymptotic time complexity is O(n³). More on that later.

A few more examples (note that big O gives an upper bound, so these hold even though the bounds are not tight):

  • 1 = O(n)
  • n = O(n²)
  • log(n) = O(n)
  • 2n + 1 = O(n)

3. O(1) Constant Time:

An algorithm is said to run in constant time if it requires the same amount of time regardless of the input size.

Examples:

  • array: accessing any element
  • fixed-size stack: push and pop methods
  • fixed-size queue: enqueue and dequeue methods
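For instance, a minimal C# sketch (my own illustration, not from the original article): accessing an element by index takes a single step whether the array holds nine elements or nine million.

var numbers = new int[] { 33, 435, 36, 37, 43, 45, 66, 656, 2232 };
int third = numbers[2]; // one indexing operation, regardless of array size: O(1)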

4. O(n) Linear Time

An algorithm is said to run in linear time if its execution time is directly proportional to the input size, i.e. time grows linearly as input size increases.

Consider the following example: below, I am linearly searching for an element, which has a time complexity of O(n).

int find = 66;
var numbers = new int[] { 33, 435, 36, 37, 43, 45, 66, 656, 2232 };
for (int i = 0; i < numbers.Length; i++) // visits every element once: N iterations in the worst case
{
    if(find == numbers[i])
    {
        return;
    }
}

More Examples:

  • Array: Linear Search, Traversing, Find minimum etc
  • ArrayList: contains method
  • Queue: contains method

5. O(log n) Logarithmic Time:

An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input size.

Example: Binary Search

Recall the "twenty questions" game - the task is to guess the value of a hidden number in an interval. Each time you make a guess, you are told whether your guess is too high or too low. The game implies a strategy that uses each guess to halve the interval size. This is an example of the general problem-solving method known as binary search.
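A minimal iterative binary search in C# might look like the sketch below (my own illustration; it assumes the input array is already sorted):

static int BinarySearch(int[] sorted, int target)
{
    int low = 0, high = sorted.Length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2; // midpoint, written this way to avoid integer overflow
        if (target < sorted[mid])
            high = mid - 1;               // discard the upper half
        else if (target > sorted[mid])
            low = mid + 1;                // discard the lower half
        else
            return mid;                   // found after at most about log2(N) halvings
    }
    return -1;                            // target is not in the array
}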

6. O(n²) Quadratic Time

An algorithm is said to run in quadratic time if its execution time is proportional to the square of the input size.

Examples:

  • Bubble Sort
  • Selection Sort
  • Insertion Sort
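For example, a straightforward bubble sort in C# (my own sketch): the nested loops perform on the order of N * N comparisons in the worst case, which is why it is O(n²).

static void BubbleSort(int[] a)
{
    for (int i = 0; i < a.Length - 1; i++)          // up to N - 1 passes
    {
        for (int j = 0; j < a.Length - 1 - i; j++)  // each pass scans the still-unsorted prefix
        {
            if (a[j] > a[j + 1])
            {
                (a[j], a[j + 1]) = (a[j + 1], a[j]); // swap out-of-order neighbours
            }
        }
    }
}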

7. Some Useful links

  • Big-O Misconceptions
  • Determining The Complexity Of Algorithm
  • Big O Cheat Sheet