Question

What is the worst-case time complexity T(n)? I'm reading this book about algorithms and, as an example, I want to see how to get T(n) for something like the selection sort algorithm.


Say I'm dealing with selectionSort(A[0..n-1]):

// Sorts a given array by selection sort
// Input: An array A[0..n-1] of orderable elements
// Output: Array A[0..n-1] sorted in ascending order

Let me write the pseudocode:

for i <- 0 to n-2 do
    min <- i
    for j <- i+1 to n-1 do
        if A[j] < A[min] then min <- j
    swap A[i] and A[min]

-------- I will write it in C# too --------

private int[] a = new int[100];

// number of elements in array
private int x;

// Selection Sort Algorithm
public void sortArray()
{
  int i, j;
  int min, temp;

  for( i = 0; i < x-1; i++ )
  {
    min = i;

    for( j = i+1; j < x; j++ )
    {
      if( a[j] < a[min] )
      {
        min = j;
      }
    }

    temp = a[i];
    a[i] = a[min];
    a[min] = temp;
  }
}
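
For context, here is a minimal usage sketch (my addition, not from the book). It assumes the members above all live in one class, so the hypothetical demo() method below can reach the private fields a and x directly, and that x is set to the number of elements actually stored in a before sortArray() is called:

public void demo()
{
    int[] data = { 29, 10, 14, 37, 13 };

    // load the input into the backing array and record how many elements are in use
    for (int k = 0; k < data.Length; k++)
        a[k] = data[k];
    x = data.Length;

    sortArray();

    // a[0..x-1] now holds 10 13 14 29 37
    for (int k = 0; k < x; k++)
        System.Console.Write(a[k] + " ");
    System.Console.WriteLine();
}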

==================

Now, how do I get the T(n), also known as the worst-case time complexity?


Solution

@sara jons: In the slide set that you've referenced - and the algorithm therein - the complexity is being measured for each primitive/atomic operation in the for loop:

for(j=0 ; j<n ; j++)
{
    //...    
}

The slides rate this loop as 2n+2 for the following reasons:

  • The initial assignment j = 0 (+1 op)
  • The comparison j < n (n ops)
  • The increment j++ (n ops)
  • The final check of j < n that ends the loop (+1 op)

Secondly, the comparison within the for loop,

    if(STudID == A[j])
        return true;

is rated as n ops. Thus, if you add it all up - +1 op, n ops, n ops, +1 op, n ops - you get 3n + 2. So T(n) = 3n + 2.

Recognize that T(n) is not the same as O(n).
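
To make that count concrete, here is a small C# sketch (my addition, not from the slides; the method name containsCounted, the parameter studId, and the out counter are made up for illustration). It instruments the same kind of linear-search loop and tallies one step per assignment, comparison, and increment, exactly as enumerated above; for an unsuccessful search over n elements it always reports 3n + 2:

public bool containsCounted(int[] A, int studId, out long ops)
{
    int n = A.Length;
    ops = 0;

    ops++;                      // j = 0                              (+1 op)
    for (int j = 0; j < n; j++)
    {
        ops++;                  // j < n while the loop keeps running (n ops)
        ops++;                  // studId == A[j]                     (n ops)
        if (studId == A[j])
            return true;        // early exit on a hit (not the worst case)
        ops++;                  // j++                                (n ops)
    }
    ops++;                      // the final j < n that ends the loop (+1 op)

    return false;               // unsuccessful search: ops == 3n + 2
}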

OTHER TIPS

That would be O(n^2).

The reason is that you have a single for loop nested in another for loop. The run time for the inner for loop, O(n), happens for each iteration of the outer for loop, which again is O(n). Each of these is individually O(n) because it takes a linear amount of time given the size of the input: the larger the input, the longer it takes, on a linear scale, n.

To work out the math, which in this case is trivial, just multiply the complexity of the inner loop by the complexity of the outer loop: n * n = n^2. Remember, for each n in the outer loop, you must again do n for the inner. To clarify: n times for each n.

O(n * n).

O(n^2)
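
Applied to the selection sort in the question, that multiplication can be made exact by counting the A[j] < A[min] comparisons directly (a worked count, my addition, written in LaTeX):

C(n) = \sum_{i=0}^{n-2} \sum_{j=i+1}^{n-1} 1
     = \sum_{i=0}^{n-2} (n - 1 - i)
     = (n-1) + (n-2) + \cdots + 1
     = \frac{n(n-1)}{2}

So selection sort performs exactly n(n-1)/2 comparisons for every input of size n, which is O(n^2); and because the inner loop always runs to completion, the best, average, and worst cases all have the same comparison count here.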

By the way, you shouldn't mix up complexity (denoted by big-O) and the T function. The T function is the number of steps the algorithm has to go through for a given input.

So, the value of T(n) is the actual number of steps, whereas O(something) denotes a complexity. By the conventional abuse of notation, T(n) = O( f(n) ) means that the function T(n) is of at most the same complexity as another function f(n), which will usually be the simplest possible function of its complexity class.

This is useful because it allows us to focus on the big picture: We can now easily compare two algorithms that may have very different-looking T(n) functions by looking at how they perform "in the long run".
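
As a small illustration of that "long run" comparison (my addition, using the linear count 3n + 2 from above and the quadratic comparison count n(n-1)/2 for selection sort):

public static void compareGrowth()
{
    foreach (int n in new[] { 1, 10, 100, 1000, 10000 })
    {
        long linear    = 3L * n + 2;              // T(n) for the linear-search loop, O(n)
        long quadratic = (long)n * (n - 1) / 2;   // T(n) for selection sort's comparisons, O(n^2)
        System.Console.WriteLine("n = " + n + ": " + linear + " vs " + quadratic);
    }
}

For n = 10 the two counts are still comparable (32 vs 45); by n = 10000 the quadratic count is already more than a thousand times larger, which is exactly what the big-O comparison is meant to capture.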

Another doctoral-comp flashback here.

First, the T function is simply the amount of time (usually in some number of steps, about which more below) an algorithm takes to perform a task. What a "step" is, is somewhat defined by the use; for example, it's conventional to count the number of comparisons in sorting algorithms, but the number of elements searched in search algorithms.

When we talk about the worst-case time of an algorithm, we usually express that with "big-O notation". Thus, for example, you hear that bubble sort takes O(n²) time. When we use big-O notation, what we're really saying is that the growth of some function -- in this case T -- is no faster than the growth of some other function times a constant. That is

T(n) = O(n²)

means that there is a constant k for which T(n) ≤ kn² for every sufficiently large n. A point of some confusion here is that we're using the "=" sign in an overloaded fashion: it doesn't mean the two are equal in the numerical sense, just that we are saying that T(n) is bounded by kn².

In the example in your extended question, it looks like they're counting the number of comparisons in the for loop and in the test; it would help to be able to see the context and the question they're answering. In any case, though, it shows why we like big-O notation: W(n) here is O(n). (Proof: there exists a constant k, namely 5, for which W(n) = 3n + 2 ≤ kn for all n ≥ 1. It follows by the definition of O(n).)
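
Written out with explicit quantifiers (the standard definition, my addition; the symbols c and n_0 are mine), the claim is:

T(n) = O(f(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 1 : \; T(n) \le c \cdot f(n) \ \text{for all } n \ge n_0

For W(n) = 3n + 2, take f(n) = n, c = 5 and n_0 = 1:

3n + 2 \le 5n \ \text{for all } n \ge 1, \quad \text{hence } W(n) = O(n).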

If you want to learn more about this, consult any good algorithms text, e.g., Introduction to Algorithms, by Cormen et al.

Write pseudocode to search, insert, and remove student information from the hash table. Calculate the best- and worst-case time complexities.

3n + 2 is the correct answer as far as the loop is concerned. At each step of the loop, 3 atomic operations are done: j++ is actually two operations, not one, and the test j < n is the third.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow