1.2.2 Asymptotic notations


·       When we calculate the complexity of an algorithm, the result is not an exact amount of the resource required. Instead of working with exact amounts, we represent the complexity in a general form (a notation) that captures the basic growth behaviour of the algorithm, and we use that notation in the analysis process.

Definition: The asymptotic notation of an algorithm is a mathematical representation of its complexity.

·       The following are the commonly used asymptotic notations for describing the running-time complexity of an algorithm.

Ø  Ο Notation

Ø  Ω Notation

Ø  θ Notation

 

Big Oh Notation, Ο

·       The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time.

·       It measures the worst-case time complexity or the longest amount of time an algorithm can possibly take to complete.


·       For example, for a function f(n)

Ο(f(n)) = { g(n) : there exist constants c > 0 and n0 such that g(n) ≤ c·f(n) for all n > n0 }
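For instance, g(n) = 3n + 2 belongs to Ο(n), since 3n + 2 ≤ 4·n for all n ≥ 2 (take c = 4 and n0 = 2).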

Big Omega Notation, Ω

·       The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time.

·       That means Big-Omega notation indicates the minimum time required by an algorithm for all input values.

·       It measures the best-case time complexity or the best amount of time an algorithm can possibly take to complete.


·       For example, for a function f(n)

Ω(f(n)) = { g(n) : there exist constants c > 0 and n0 such that g(n) ≥ c·f(n) for all n > n0 }
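Continuing the same example, g(n) = 3n + 2 also belongs to Ω(n), since 3n + 2 ≥ 1·n for all n ≥ 1 (take c = 1 and n0 = 1).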

 

Theta Notation, θ

·       The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time.

·       That means Big-Theta notation indicates a tight bound: the running time grows at the same rate as f(n), up to constant factors. (In informal usage it is often associated with the average-case behaviour of an algorithm.)

·       It is represented as follows −


θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) }
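Since 3n + 2 is both Ο(n) and Ω(n), it is θ(n): it grows linearly, up to constant factors.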

 

Common Asymptotic Notations

Following is a list of some common asymptotic notations −

constant        Ο(1)
logarithmic     Ο(log n)
linear          Ο(n)
n log n         Ο(n log n)
quadratic       Ο(n^2)
cubic           Ο(n^3)
polynomial      n^Ο(1)
exponential     2^Ο(n)

Classification of Algorithms

·       Let ‘n’ be the number of data items to be processed, the degree of a polynomial, the size of the file to be sorted or searched, the number of nodes in a graph, etc. Algorithms can then be classified by how their running time grows as a function of n.

1.     O(1) Constant Time:

·       Most instructions of most programs are executed once or at most only a few times.

·       If all the instructions of a program have this property, we say that its running time is a constant.

·       This means that the algorithm requires the same fixed number of steps regardless of the size of the task.

·       Example:

1)     A statement involving basic operations

Here are some examples of basic operations:

- one arithmetic operation (e.g., +, *)
- one assignment
- one test (e.g., x == 0)
- one read (accessing an element from an array)

2) Sequence of statements involving basic operations:

statement 1;
statement 2;
..........
statement k;

Time for each statement is constant and the total time is also constant: O(1).
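A minimal C sketch of constant-time work (the array contents here are hypothetical, chosen only for illustration):

#include <stdio.h>

int main(void)
{
    int a[] = {5, 2, 9, 1};   /* hypothetical data */
    int x = a[0];             /* one read + one assignment */

    a[0] = a[3];              /* swap the first and last elements */
    a[3] = x;

    if (a[0] == 1)            /* one test */
        printf("first element is now 1\n");

    return 0;                 /* a fixed number of steps, so O(1) */
}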

 

2.     O(n) - Linear time:

·       When the running time of a program is linear, it is generally the case that a small amount of processing is done on each input element.

·       This is the optimal situation for an algorithm that must process n inputs.

·       This means that the algorithm requires a number of steps proportional to the size of the task.

·       Examples:

1. Traversing an array.

2. Sequential/Linear search in an array (a sketch appears after the loop analysis below).

3. Best-case time complexity of Bubble sort (i.e., when the elements of the array are already in sorted order).

Basic structure is:

for (i = 0; i < N; i++)
{
    /* sequence of statements of O(1) */
}

The loop executes N times, so the total time is N*O(1), which is O(N).
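A minimal sketch of example 2 above (sequential/linear search); the function name and return convention are illustrative choices:

int linear_search(const int a[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)   /* at most n iterations */
    {
        if (a[i] == key)      /* one O(1) comparison per element */
            return i;         /* found: return its index */
    }
    return -1;                /* not found */
}

In the worst case every element is examined once, so the running time is O(n).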

 

3.     O(n^2) - Quadratic time:

·       When the running time of an algorithm is quadratic, it is practical for use only on relatively small problems.

·       Quadratic running times typically arise in algorithms that process all pairs of data items (perhaps in a double nested loop). Whenever n doubles, the running time increases four-fold.

·       The number of operations is proportional to the size of the task squared.

·       Examples:

Worst-case time complexity of Bubble, Selection and Insertion sort (a selection-sort sketch appears after the loop analysis below).

Nested loops:

1.     for (i = 0; i < N; i++)
{
    for (j = 0; j < M; j++)
    {
        /* sequence of statements of O(1) */
    }
}

The outer loop executes N times and the inner loop executes M times, so the time complexity is O(N*M).

2.     for (i = 0; i < N; i++)
{
    for (j = 0; j < N; j++)
    {
        /* sequence of statements of O(1) */
    }
}

Now the time complexity is O(N^2).

3.     Let's consider nested loops where the number of iterations of the inner loop depends on the value of the outer loop's index.

for (i = 0; i < N; i++)
{
    for (j = i + 1; j < N; j++)
    {
        /* sequence of statements of O(1) */
    }
}

Let us see how many iterations the inner loop has:

Value of i    Number of iterations of inner loop
0             N-1
1             N-2
...           ...
N-3           2
N-2           1
N-1           0

So the total number of times the “sequence of statements” within the two loops executes is:

(N-1) + (N-2) + ... + 2 + 1 + 0, which is N*(N-1)/2 or (1/2)*N^2 - (1/2)*N,

and we can say that it is O(N^2) (we can ignore the multiplicative constant; for large problem sizes the dominant term determines the time complexity).
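The triangular loop pattern above is exactly the shape that appears in the simple sorts mentioned earlier; here is a minimal selection-sort sketch in C (the function name and in-place swap are conventional choices, not taken from the original text):

void selection_sort(int a[], int n)
{
    int i, j, min, tmp;
    for (i = 0; i < n - 1; i++)       /* outer loop: n-1 passes */
    {
        min = i;
        for (j = i + 1; j < n; j++)   /* inner loop: n-1-i comparisons */
        {
            if (a[j] < a[min])
                min = j;              /* remember smallest remaining element */
        }
        tmp = a[i];                   /* swap it into position i */
        a[i] = a[min];
        a[min] = tmp;
    }
}

The comparisons total (N-1)+(N-2)+...+1 = N*(N-1)/2, matching the sum derived above, so the running time is O(N^2).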

4.     O(n^3) - Cubic time:

·       Similarly, an algorithm that processes triples of data items (perhaps in a triple-nested loop) has a cubic running time and is practical for use only on small problems. Whenever n doubles, the running time increases eight-fold.

 

5.     O(log n) - Logarithmic time:

·       When the running time of a program is logarithmic, the program gets only slightly slower as n grows. This running time commonly occurs in programs that solve a big problem by transforming it into a smaller problem, cutting the size by some constant fraction.

·       When n is a million, log2 n is only about 20. Whenever n doubles, log n increases by a constant, but log n does not double until n increases to n^2.

·       Examples: Binary search in a sorted array of n elements.
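A minimal binary-search sketch (assuming the array is sorted in ascending order; the function name is illustrative):

int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   /* midpoint, written to avoid overflow */
        if (a[mid] == key)
            return mid;                 /* found */
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the left half */
        else
            hi = mid - 1;               /* discard the right half */
    }
    return -1;                          /* not found */
}

Each iteration halves the remaining range, so at most about log2(n) iterations are needed: O(log n).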

 

6.     O(n log n) - "n log n" time:

·       This running time arises in algorithms that solve a problem by breaking it up into smaller sub-problems, solving them independently, and then combining the solutions. When n doubles, the running time more than doubles.
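Merge sort is the classic example of this divide-and-conquer pattern; a minimal sketch (the caller supplies a scratch buffer tmp of the same length as a; names are illustrative):

/* merge the sorted halves a[lo..mid] and a[mid+1..hi] using buffer tmp */
static void merge(int a[], int tmp[], int lo, int mid, int hi)
{
    int i = lo, j = mid + 1, k;
    for (k = lo; k <= hi; k++)
        tmp[k] = a[k];
    for (k = lo; k <= hi; k++)
    {
        if (i > mid)                a[k] = tmp[j++];   /* left half exhausted */
        else if (j > hi)            a[k] = tmp[i++];   /* right half exhausted */
        else if (tmp[j] < tmp[i])   a[k] = tmp[j++];   /* take the smaller element */
        else                        a[k] = tmp[i++];
    }
}

void merge_sort(int a[], int tmp[], int lo, int hi)
{
    if (lo >= hi)
        return;                         /* sub-problem of size 1: already sorted */
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);        /* solve the left half */
    merge_sort(a, tmp, mid + 1, hi);    /* solve the right half */
    merge(a, tmp, lo, mid, hi);         /* combine the solutions in O(n) time */
}

There are about log2(n) levels of splitting, and each level does O(n) work in merge, giving O(n log n) overall.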

 

Analyzing Algorithms

·       Suppose ‘M’ is an algorithm, and suppose ‘n’ is the size of the input data. Clearly the complexity f(n) of M increases as n increases.

·       It is usually the rate of increase of f(n) we want to examine. This is usually done by comparing f(n) with some standard functions. The most common computing times are:

O(1), O(log2 n), O(n), O(n log2 n), O(n^2), O(n^3), O(2^n), O(n!) and O(n^n)

 

Numerical Comparison of Different Algorithms

The growth of the typical functions is compared below:

[Figure: graph of log n, n, n log n, n^2, n^3, 2^n, n! and n^n]
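As an illustrative numeric comparison (values computed here for a few sample sizes, not taken from the original figure):

n      log2 n    n log2 n    n^2      n^3       2^n
8      3         24          64       512       256
16     4         64          256      4096      65536
32     5         160         1024     32768     about 4.3 * 10^9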

 

Practice Questions on Time Complexity Analysis

1.     What is the time complexity of following code?

int i, j, k = 0;
for (i = n / 2; i <= n; i++)
{
    for (j = 2; j <= n; j = j * 2)
    {
        k = k + n / 2;
    }
}

     Answer:

In the above code, j keeps doubling as long as it is less than or equal to n. The number of times we can double a number before it exceeds n is about log2(n).

Let's take some examples:

for n = 16, j = 2, 4, 8, 16
for n = 32, j = 2, 4, 8, 16, 32

So, j runs for O(log n) steps.

i runs for about n/2 steps.

So, total steps = O(n/2 * log n) = O(n log n).

2.     What is the time complexity of following code?

int a = 0;
for (i = 0; i < N; i++)
{
    for (j = N; j > i; j--)
    {
        a = a + i + j;
    }
}

Answer:

The above code runs a total number of times

= N + (N - 1) + (N - 2) + ... + 1
= N * (N + 1) / 2
= (1/2) * N^2 + (1/2) * N

which is O(N^2).

 

3.     What is the time, space complexity of following code?

int a = 0, b = 0;
for (i = 0; i < N; i++)
{
    a = a + rand();
}
for (j = 0; j < M; j++)
{
    b = b + rand();
}

Answer:

The first loop is O(N) and the second loop is O(M). Since we don't know which is bigger, we say the time complexity is O(N + M). This can also be written as O(max(N, M)). The space complexity is O(1), since only a fixed number of variables is used regardless of N and M.

 

4.     What is the time, space complexity of following code?

int count = 0;
for (int i = 0; i < N; i++)
    for (int j = 0; j < i; j++)
        count++;

Answer: Let's see how many times count++ will run.

When i = 0, it will run 0 times.
When i = 1, it will run 1 time.
When i = 2, it will run 2 times, and so on.

The total number of times count++ runs is 0 + 1 + 2 + ... + (N-1) = N*(N-1)/2. So the time complexity is O(N^2). The space complexity is O(1), since only the loop variables and count are stored.

 
