Lecture notes for Week 3: Complexity
Topics
- Big-O notation
- Big-Omega notation
- Big-Theta notation
Textbook portions covered
- Introduction to Algorithms (Cormen et al.)
  - Chapters 3 and 4
- Engineering Algorithms... (Clowes “online book”)
  - Chapter 4
External links
Lecture 7 (Friday, 21 January 2005)
Announcements
- Lecture notes now available.
Remarks
- The nitty-gritty of rigorous mathematical analysis of algorithmic complexity can be intimidating.
- The formal mathematical definitions might obscure the basic simplicity and importance of this topic.
- Hence, we begin with some simple, informal examples of where we are going.
Sample objectives
- Once we have finished, you will be able to determine the complexity of many C functions by inspection. (i.e. You will be able to determine the BigTheta complexity in less than 30 seconds.)
- Here are the examples:
  - int f(int n) { return n + 5; }
    Constant time. i.e.: BigTheta(1)
  - int f(int n) { int i; if (n > 123) i = n*n*n*n + 6; else i = n; return i; }
    Constant time. i.e.: BigTheta(1)
  - int f(int n) { int i, t = 0; int vector[10000000]; for (i = 0; i < n; i++) t += vector[i]; return t; }
    Linear time. i.e.: BigTheta(n)
  - int f(int n) { int i, j, t = 0; int matrix[10000000][10000000]; for (i = 0; i < n; i++) for (j = 0; j < n; j++) t += matrix[i][j]; return t; }
    Quadratic time. i.e.: BigTheta(n^2)
  - int f(int n) { int i, t = 0; double x; int vector[10000000]; for (i = 0; i < n; i++) for (x = 1; x < n; x *= 1.1) t += vector[i]; return t; }
    Supralinear time. i.e.: BigTheta(n log n) (the inner loop runs BigTheta(log n) times per outer iteration, since x grows geometrically)
Program performance: an example
Suppose 5 programmers write software to solve some problem. All programs work. When the size of the problem is 1000, the following times (in microseconds) are measured:

| A | B | C | D | E |
|---|---|---|---|---|
| 1 | 10 | 600 | 1000 | 10000 |
Which is best (for, say, n=10,000,000)?
Answer: Insufficient data! We also need to know the algorithmic complexity of each implementation.
For example, if we know the complexity is linear, then increasing the problem size by a factor of x will increase the time by a factor of x. Similarly, if the problem size for a quadratic algorithm is increased by a factor of x, the time will be multiplied by x^2.
Knowing the complexity of an implementation, we can calculate the approximate times as summarized in the following table:
| n | A (k1 1.01^n, exponential) | B (k2 n^2, quadratic) | C (k3 n log n, supralinear) | D (k4 n, linear) | E (k5 n, linear) |
|---|---|---|---|---|---|
| 1000 | 1 | 10 | 600 | 1000 | 10000 |
| 10,000,000 | > 10^1000 years | 10^9 microseconds = 16 minutes | 14 seconds | 10^7 microseconds = 10 seconds | 10^8 microseconds = 100 seconds |
Big-O notation
- Summary of the Big-O definition (refer to the textbook/links for the formal definition):
  f(n) is O(g(n)) if and only if
  f(n) <= k g(n) for all n >= n_0,
  for some positive constants k and n_0 (i.e. the bound holds for all sufficiently large n).
Lectures 8/9 (Tuesday, 25 January 2005)
Announcements
- QUIZ: The quiz is next week: Friday, Feb 4. The quiz covers the first 4 lecture weeks (up to and including Pointers and Dynamic allocation) and the first 3 labs.
- My lecture notes now include all 13 weeks of the course. The notes for lectures that have not yet been delivered provide only preliminary outlines.
Examples: Big-O
- The values of k and n_0 are arbitrary (anything that proves the inequality will do).
- If f(n) = O(g(n)), then f(n) is also Big-O of any function that grows faster than g(n).
Examples: Big-Omega
- If f(n) = BigOmega(g(n)), then f(n) is also Big-Omega of any function that grows slower than g(n).
Examples: Big-Theta
- Tip: If f(n) is a sum of terms, then it is Big-Theta (and hence Big-O and Big-Omega) of the fastest-growing term (after eliminating multiplicative constants).
Math
You are responsible for:
- Floor and ceiling functions.
- log(n!) is BigTheta(n log n).
- Closed-form equations for arithmetic and geometric series.
- lg means log_2.
- log^k n means (log n)^k
- Note: log n grows more slowly than any polynomial. (That's why we say that BigTheta(n log n) is almost as good as linear complexity.)
- Note: Any exponential grows faster than any polynomial.
- Note: log log n grows more slowly than log^k n. (k ≥ 1)
Analyzing pseudocode or C
- Primitive statements take constant (BigTheta(1)) time.
- Any combination of primitive statements which are executed only once (or not at all!) is BigTheta(1).
- When loops are present, the problem is to determine the number of iterations as a function of problem size (i.e. linear, quadratic, logarithmic, etc.).
- Examples:
  - Refer to the Sample objectives section earlier in this week's notes.
  - See also Engineering Algorithms... (Clowes “online book”) Chapter 4, Section 5.
Solving recurrences
- Analysis of recursive algorithms (e.g. mergeSort) often requires finding the closed-form solution to a recurrence.
- Method 1:
  - Calculate values of T(n) by hand, starting from the base case and then applying the recurrence directly to obtain results for larger values of n.
  - Find a pattern and guess the closed-form solution.
  - Prove it by mathematical induction.
  - Example: the analysis of mergeSort done previously. (See also Engineering Algorithms... (Clowes “online book”) problem 4.6 and its solution, which was also done in class.)
- Other methods will be covered in the next lecture.
Suggested Problems
- Introduction to Algorithms (Cormen et al.)
  - 3-1
  - 3-3 (except iterated logarithm)
  - 3-4
- Engineering Algorithms... (Clowes “online book”)
  - 4.1
  - 4.2
  - 4.3
  - 4.4
  - 4.6
  - 4.7
by Ken Clowes
v1.0