CMPS 144
Examples to illustrate Big-Oh notation

O(1) (Constant) time

A program (or, more generally, a code segment) runs in constant time if each of its statements runs in constant time, which is the case if it includes no loops or recursive subprograms. Even a loop runs in constant time if its body runs in constant time and it iterates a constant number of times.

Example 1.1: Loopless
/* Swaps the values in the specified 
** locations of the specified array.
*/
void swap(int[] a, int j, int k) {
   int temp = a[j];
   a[j] = a[k];
   a[k] = temp;
}
Example 1.2: Constant # of loop iterations
/* Computes sum of 1 through 100
*/
int sumSoFar = 0;
for (int i=1; i <=100; i++) {
   sumSoFar = sumSoFar + i;
}
System.out.println(sumSoFar);


O(log n) (Logarithmic) time

Examples 2.1 and 2.2 compute slightly different integer approximations to log_k n ("logarithm to the base k of n"), but in opposite ways. In Example 2.1, ⌊log_k n⌋ is computed by initializing m to n and then repeatedly dividing m by k until such time as m is no longer greater than 1. The number of divisions performed provides the result.

In Example 2.2, ⌈log_k n⌉ is computed by initializing kToCntr to 1 and then repeatedly multiplying it by k until such time as kToCntr is no longer less than n. The number of multiplications performed provides the result.

Generalizing, whenever you have a loop in which a variable is either repeatedly divided by some fixed value k until it is exceeded by some lower bound or repeatedly multiplied by some fixed value k until it exceeds some upper bound, what you have is a loop that iterates a number of times that is approximately equal to a logarithm (to the base k).

Examples 2.1, 2.2: Computing (an approximation to) log_k n
/* Computes ⌊log_k n⌋ by
** repeatedly dividing by k.
** pre: k > 1 && n > 0
*/
int floorOfLog(int k, int n) {
   int m = n;
   int cntr = 0;
   // loop invariant: m = ⌊n / k^cntr⌋
   while (m > 1) {
      m = m / k;
      cntr = cntr + 1;
   }
   return cntr;
}
/* Computes ⌈log_k n⌉ by
** repeatedly multiplying by k.
** pre: k > 1 && n > 0
*/
int ceilingOfLog(int k, int n) {
   int cntr = 0;
   int kToCntr = 1;
   // loop invariant: kToCntr = k^cntr
   while (kToCntr < n) {
      kToCntr = kToCntr * k;
      cntr = cntr + 1;
   }
   return cntr;
}
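As a sanity check (a hypothetical driver, not part of the original notes), the two methods should bracket the true logarithm, agreeing exactly when n is a power of k:

```java
// Hypothetical driver class; the method bodies mirror Examples 2.1 and 2.2.
public class LogDemo {
   public static int floorOfLog(int k, int n) {
      int m = n;
      int cntr = 0;
      while (m > 1) { m = m / k; cntr = cntr + 1; }
      return cntr;
   }
   public static int ceilingOfLog(int k, int n) {
      int cntr = 0;
      int kToCntr = 1;
      while (kToCntr < n) { kToCntr = kToCntr * k; cntr = cntr + 1; }
      return cntr;
   }
   public static void main(String[] args) {
      // lg 8 = 3 exactly, so floor and ceiling agree
      System.out.println(floorOfLog(2, 8));     // 3
      System.out.println(ceilingOfLog(2, 8));   // 3
      // lg 10 is strictly between 3 and 4, so they differ by one
      System.out.println(floorOfLog(2, 10));    // 3
      System.out.println(ceilingOfLog(2, 10));  // 4
   }
}
```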

Example 2.3 is binary search. During each iteration of the loop, the difference high - low is cut in half, either by increasing low half the way towards high or by decreasing high half the way towards low. Compare this to Example 2.1: the role of n is played by the initial value of high-low, the role of m by the changing value of high-low, and the role of k by 2. Because the loop continues iterating until high-low == 0 (rather than until it is no longer greater than 1, as in Example 2.1), the number of iterations is one "extra", so it comes out to ⌊log_2 n⌋+1.

Example 2.3: Binary Search
/* Returns the lowest-numbered location in the specified array segment 
** a[leftBound..rightBound) containing a value not less than the specified 
** search key.  (If the search key exceeds all array elements in the segment,
** the location following the segment (i.e., rightBound) is returned.)
** pre: array elements are in ascending order
*/
static int binarySearch(int[] a, int leftBound, int rightBound, int key)
{
   int low = leftBound; 
   int high = rightBound;  // Let N = rightBound - leftBound
   int c = 0;              // loop iteration counter

   /* loop invariant: 
   **   every element in a[leftBound..low) is < key &&
   **   every element in a[high..rightBound) is >= key (together implying 
   **   that the correct answer is in the range low..high) &&
   **   high - low ≤ N/2^c
   */
   while (high - low != 0) {
      c = c + 1;
      int mid = low + (high - low) / 2;  // equivalent to (low+high)/2, but avoids int overflow
      if (a[mid] < key) {
         low = mid+1;
      }
      else {  // a[mid] >= key
         high = mid;
      }
   }
   return low;
}
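Because binarySearch()'s contract is easy to get wrong, it is worth checking it against a straightforward linear scan implementing the same specification (a hypothetical cross-check harness, not from the notes; it bundles in a copy of the method so it is self-contained):

```java
public class SearchCheck {
   // Copy of Example 2.3's binarySearch (iteration counter omitted).
   public static int binarySearch(int[] a, int leftBound, int rightBound, int key) {
      int low = leftBound, high = rightBound;
      while (high - low != 0) {
         int mid = (low + high) / 2;
         if (a[mid] < key) { low = mid + 1; } else { high = mid; }
      }
      return low;
   }
   // Reference implementation of the same contract: lowest index in
   // [leftBound..rightBound) holding a value >= key, or rightBound if none.
   public static int linearSearch(int[] a, int leftBound, int rightBound, int key) {
      for (int i = leftBound; i != rightBound; i++) {
         if (a[i] >= key) { return i; }
      }
      return rightBound;
   }
   public static void main(String[] args) {
      int[] a = {2, 4, 4, 7, 9};   // ascending, per the precondition
      for (int key = 0; key <= 10; key++) {
         if (binarySearch(a, 0, a.length, key) != linearSearch(a, 0, a.length, key)) {
            throw new AssertionError("mismatch at key " + key);
         }
      }
      System.out.println("binary and linear search agree on all keys");
   }
}
```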


O(n) (Linear) time

That Example 3.1 runs in linear time (i.e., time proportional to the length of the array segment that concerns it) is obvious, as its loop iterates once for each element in the given array segment and each such iteration takes constant time.

Example 3.1
/** Returns the # of occurrences of value k in a[low..high)
**  pre: 0 <= low <= high <= a.length
*/
int numOccurrences(int[] a, int low, int high, int k)
{
   int cntr = 0;
   int i = low;
   /* loop invariant: cntr = # occurrences of k in a[low..i)
   */
   while (i != high)
   {
      if (a[i] == k) { cntr++; }
      i = i+1;
   }
   /* assert: cntr = # occurrences of k in a[low..high) */
   return cntr;
}

Example 3.2
/* Method with nested loops, where the total number 
** of iterations of the nested loop is not obvious.
*/
static void tricky(int N) {
   int n = N;
   int c = 0;       // loop iteration counter
   while (n != 1) {
      for (int i=0; i != n; i++) {
         // some constant time computation
         c = c+1;
      }
      n = n / 2;
   }
}
Example 3.2 (above) is much more tricky. A cursory analysis would lead us to observe that the outer loop iterates ⌊log_2 N⌋ times and that, during each such iteration, the nested loop iterates at most N times. (It iterates n times, and we note that n's value begins at N and decreases thereafter, so N is an upper bound on the number of iterations.) Note that it is customary to use lg as a shorthand notation for log_2, and we shall do that in what follows.

From these observations, we conclude that the nested loop iterates, in total, at most N×⌊lg N⌋ times, which means that the algorithm runs in O(N log N) time.

However, a more careful analysis reveals that it runs in O(N) time. Consider how the value of n changes during execution of the method. It begins with value N but its value is cut in half during each iteration of the outer loop. Thus, during successive iterations of the outer loop, its value is N, ⌊N/2⌋, ⌊N/4⌋, etc., etc., until it reaches 1. As pointed out above, during each iteration of the outer loop, the nested loop iterates n times, so the total number of iterations of that loop is

N + ⌊N/2⌋ + ⌊N/4⌋ + ... + ⌊N/2^c⌋

where c is ⌊lg N⌋. Clearly, this sum is bounded above by

N + N/2 + N/4 + N/8 + ... (forever)

Factoring N out of each term, we get

N(1 + 1/2 + 1/4 + 1/8 + ... )

But this is bounded above by 2N, because the sum 1 + 1/2 + 1/4 + 1/8 + ... converges to 2.
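The 2N bound can be confirmed empirically. The sketch below (hypothetical, not part of the notes) keeps Example 3.2's loop structure but returns the counter c so it can be inspected; for N a power of 2 the total is exactly N + N/2 + ... + 2 = 2N − 2:

```java
public class TrickyCount {
   // Same loops as Example 3.2, but the total number of inner-loop
   // iterations is returned so the O(N) claim can be checked.
   public static int tricky(int N) {
      int n = N;
      int c = 0;       // loop iteration counter
      while (n != 1) {
         for (int i = 0; i != n; i++) { c = c + 1; }
         n = n / 2;
      }
      return c;
   }
   public static void main(String[] args) {
      for (int N = 2; N <= (1 << 20); N *= 2) {
         if (tricky(N) >= 2 * N) {        // total must stay below 2N
            throw new AssertionError("bound violated at N = " + N);
         }
      }
      System.out.println(tricky(16));     // 16 + 8 + 4 + 2 = 30
   }
}
```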


O(n × log n) (Log-linear) time

Example 4.1: Bottom-up Mergesort (pseudocode)

To sort an array a[0..N) (where N is a power of 2):
segLen = 1;
/* loop invariant: For j satisfying 0 ≤ j < N/segLen, the elements in
**    segment a[j*segLen .. (j+1)*segLen) are in ascending order.
*/
while (segLen < N) {
  for (int i=0; i < N/segLen; i = i+2) {
     low = i*segLen;
     mid = (i+1)*segLen;
     high = (i+2)*segLen;
     merge segments a[low..mid) and a[mid..high) 
  }
  segLen = 2 * segLen;
}

Analysis: The while loop iterates lg N times, because segLen is initialized to 1, doubles during each loop iteration, and the loop terminates once segLen's value reaches N.

During each iteration, pairs of adjacent array segments are merged. Each merging operation takes time proportional to the sum of the lengths of the segments being merged. But the total lengths of all the segments is N, so the time required by all the iterations of the nested loop (during a single iteration of the outer loop, that is) is bounded above by cN, for some constant c.

Putting things together, we have that the outer loop iterates lg N times and that during each such iteration the nested loop takes at most cN time, so the total running time is at most the product of the two, which is cN×lg N, which is in the O(N×log N) complexity class.
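The pseudocode above can be fleshed out into runnable Java. This is a sketch, not from the notes: the class name and the auxiliary-array merge are illustrative choices, and, as in the pseudocode, it assumes N is a power of 2.

```java
import java.util.Arrays;

public class BottomUpMergesort {
   /* Sorts a[0..N) using bottom-up mergesort.
   ** pre: a.length is a power of 2 (as in the pseudocode above). */
   public static void sort(int[] a) {
      final int N = a.length;
      int[] aux = new int[N];
      int segLen = 1;
      while (segLen < N) {
         for (int i = 0; i < N / segLen; i = i + 2) {
            merge(a, aux, i * segLen, (i + 1) * segLen, (i + 2) * segLen);
         }
         segLen = 2 * segLen;
      }
   }
   // Merges ascending segments a[low..mid) and a[mid..high) into a[low..high).
   private static void merge(int[] a, int[] aux, int low, int mid, int high) {
      int j = low, k = mid;
      for (int i = low; i < high; i++) {
         if (k == high || (j != mid && a[j] <= a[k])) { aux[i] = a[j++]; }
         else { aux[i] = a[k++]; }
      }
      System.arraycopy(aux, low, a, low, high - low);
   }
   public static void main(String[] args) {
      int[] a = {5, 3, 8, 1, 9, 2, 7, 4};
      sort(a);
      System.out.println(Arrays.toString(a));  // [1, 2, 3, 4, 5, 7, 8, 9]
   }
}
```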

Example 4.2: Given are two arrays (of int's, say) a[] and b[]. We know that the elements in b[] are in ascending order, but a[] is not so constrained. Count the number of pairs (i,j) such that a[i] = b[j].

Example 4.2
/* Returns the number of pairs (i,j) such that a[i] == b[j].
** pre: the elements in b[] are in ascending order
*/
public static int numEqualPairs(int[] a, int[] b) {
   int numFoundSoFar = 0;
   /* loop invariant: numFoundSoFar = number of pairs (i',j), 
   **       where 0≤i'<i, such that a[i'] = b[j]
   */
   for (int i=0; i != a.length; i++) {
      int k = binarySearch(b, 0, b.length, a[i]);
      int m = binarySearch(b, k, b.length, a[i]+1);
      numFoundSoFar = numFoundSoFar + (m-k);
   }
   return numFoundSoFar;
}

The idea is that, for each i, we perform a binary search in b[] for both a[i] and a[i]+1. (Note: a[i]+1, not a[i+1].) We call the results k and m, respectively. Given that the binarySearch() method returns the lowest-numbered location containing a value equal to or greater than the search key, what this means is that the segment of b[] containing values equal to a[i] is b[k..m), which has length m-k. This explains why we add m−k to numFoundSoFar.

If the lengths of a[] and b[] are M and N, we have that the loop iterates M times and each iteration, which calls binarySearch() twice, takes time proportional to lg N. Thus, the total running time is O(M×lg N).
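To gain confidence in the counting argument, numEqualPairs() can be checked against a brute-force O(M×N) double loop (a hypothetical test harness, not part of the notes; it bundles in copies of the two methods so it is self-contained):

```java
public class PairCount {
   // Copy of Example 2.3's binarySearch.
   public static int binarySearch(int[] a, int leftBound, int rightBound, int key) {
      int low = leftBound, high = rightBound;
      while (high - low != 0) {
         int mid = (low + high) / 2;
         if (a[mid] < key) { low = mid + 1; } else { high = mid; }
      }
      return low;
   }
   // Copy of Example 4.2.
   public static int numEqualPairs(int[] a, int[] b) {
      int numFoundSoFar = 0;
      for (int i = 0; i != a.length; i++) {
         int k = binarySearch(b, 0, b.length, a[i]);
         int m = binarySearch(b, k, b.length, a[i] + 1);
         numFoundSoFar = numFoundSoFar + (m - k);
      }
      return numFoundSoFar;
   }
   // Brute-force O(M*N) version, used only to check the answer.
   public static int numEqualPairsSlow(int[] a, int[] b) {
      int cnt = 0;
      for (int x : a) { for (int y : b) { if (x == y) { cnt++; } } }
      return cnt;
   }
   public static void main(String[] args) {
      int[] a = {4, 2, 4, 7};
      int[] b = {2, 2, 4, 5, 7, 7};   // ascending, as the precondition requires
      System.out.println(numEqualPairs(a, b));      // 6
      System.out.println(numEqualPairsSlow(a, b));  // 6
   }
}
```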


O(n²) (Quadratic) time

Example 5.1
/* Returns the # of inversions in a[], where
** an inversion is defined to be any pair (i,j), 
** with i<j, such that a[i] > a[j].
*/
int numInversions(int[] a)
{
   final int N = a.length;
   int cntr = 0;
   for (int j=1; j != N; j++) {
      for (int i = 0; i != j; i++) {
         if (a[i] > a[j])
            { cntr = cntr + 1; }
      }
   }
   return cntr;
}
For an example of an algorithm that runs in O(n²) time, Example 5.1 is offered. It computes the number of "inversions" in an array. An inversion refers to any pair of locations satisfying the condition that the larger value is in the lower-numbered location.

Value of j    # iterations of nested loop
    1                     1
    2                     2
   ...                   ...
   N-2                   N-2
   N-1                   N-1
 Total               (N² − N)/2

The logic is straightforward: for each j in the range [1..N), we scan the elements in a[0..j) and count how many of them are greater than a[j]. For each value of j, the nested loop iterates with i assuming the values in the range [0..j), of which there are j.

To figure out how many iterations the nested loop completes, in total, we devise the table above, which shows, for each iteration of the outer loop, how many times the nested loop iterates. The total number of iterations of the nested loop is thus the sum of the values in the second column. But that is just the sum 1 + 2 + ... + (N−1), which any self-respecting student of computing knows comes out to (N² − N)/2, which is in the complexity class O(N²).
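As a quick check (a hypothetical driver, not from the notes), a strictly descending array makes every pair an inversion, so numInversions() should return exactly (N² − N)/2:

```java
public class InversionDemo {
   // Copy of Example 5.1.
   public static int numInversions(int[] a) {
      final int N = a.length;
      int cntr = 0;
      for (int j = 1; j != N; j++) {
         for (int i = 0; i != j; i++) {
            if (a[i] > a[j]) { cntr = cntr + 1; }
         }
      }
      return cntr;
   }
   public static void main(String[] args) {
      final int N = 10;
      int[] worst = new int[N];
      for (int i = 0; i < N; i++) { worst[i] = N - i; }  // 10, 9, ..., 1
      System.out.println(numInversions(worst));  // 45
      System.out.println((N * N - N) / 2);       // 45
   }
}
```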