Good source of learning Complexity of an Algorithm.


When we discuss an algorithm, we give its complexity in terms of $\mathcal{O}$. What is the difference between complexity, number of operations, and running time? I know how to compute the number of operations of an algorithm and relate it to $\mathcal{O}$ notation. For example, if $n$ is the dimension of a matrix, $n^3$ is the number of operations for determinant computation. Can someone explain what $n^\omega$ is? Also $n^{\omega+\epsilon}$? What are $\Theta$, $\mathcal{O}(n\log{n})$, $\Omega$, etc.? Can someone recommend a book or a good website to learn about these complexities? Thank you
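For concreteness, this is how I count operations for the determinant example (a rough sketch in Python, counting only the multiplications and divisions in Gaussian elimination; partial pivoting is omitted, so it assumes no zero pivots):

```python
# Sketch: determinant of an n x n matrix via Gaussian elimination,
# counting multiply/divide operations to see the ~n^3 growth.
# No partial pivoting, so it assumes every pivot is nonzero.

def det_with_op_count(a):
    n = len(a)
    a = [row[:] for row in a]  # work on a copy
    ops = 0
    det = 1.0
    for k in range(n):
        det *= a[k][k]  # determinant = product of pivots
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            ops += 1  # one division
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
                ops += 1  # one multiply-subtract
    return det, ops

# The operation count grows roughly like n^3 / 3:
for n in (4, 8, 16):
    m = [[2.0 if i == j else 1.0 for j in range(n)] for i in range(n)]
    _, ops = det_with_op_count(m)
    print(n, ops)  # prints 4 26, then 8 196, then 16 1480
```

Doubling $n$ roughly multiplies the count by 8, which is the $n^3$ behavior.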


There is 1 answer below

BEST ANSWER

I believe 'Introduction to Algorithms' (by Cormen, Leiserson, Rivest, and Stein) is the best book I have read on this particular topic. Apart from describing many different algorithms and data structures, the book also explains the different notations and how to use them. Several proofs are included, especially on using induction for complexity bounds and recurrence relations.

The book can be found here.

To answer some of your questions: the number of operations is not the same as the complexity, since the complexity of an algorithm (or a function, or whatever) describes the asymptotic growth of its running time, ignoring constant factors. So, iterating over an $n \times n$ matrix gives a running time of $\Theta(n^2)$. This estimate deliberately leaves out the exact number of operations: in a for loop, for instance, the initialization, the conditional check, and the variable update could each be counted as separate operations, but they only change the constant factor. So, the exact running time of an algorithm may vary with the programming language and, obviously, the computer, while its complexity stays the same.

Regarding $n^\omega$: $\omega$ denotes the exponent of matrix multiplication, i.e. the smallest constant such that two $n \times n$ matrices can be multiplied using $\mathcal{O}(n^{\omega+\epsilon})$ arithmetic operations for every $\epsilon > 0$ (which is why the $\epsilon$ appears). The schoolbook algorithm shows $\omega \le 3$; Strassen's algorithm and its successors have brought the known upper bound below $2.38$, and the same exponent governs determinant computation and other linear-algebra problems.
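To make the distinction concrete, here is a small sketch of my own (not from the book): two loops that both visit every entry of an $n \times n$ matrix perform different numbers of low-level "operations", yet both are $\Theta(n^2)$.

```python
# Sketch: two loops that both visit every entry of an n x n matrix.
# They execute different numbers of low-level operations (different
# constant factors), but both are Theta(n^2).

def count_entries_simple(matrix):
    ops = 0
    for row in matrix:           # n iterations
        for _ in row:            # n iterations each
            ops += 1             # 1 counted operation per entry
    return ops

def count_entries_indexed(matrix):
    ops = 0
    n = len(matrix)
    for i in range(n):
        for j in range(n):
            ops += 3             # count indexing i, indexing j, and the visit
    return ops

n = 100
m = [[0] * n for _ in range(n)]
print(count_entries_simple(m))   # 10000 = 1 * n^2
print(count_entries_indexed(m))  # 30000 = 3 * n^2, still Theta(n^2)
```

The constant (1 vs. 3) depends on what you choose to count, which is exactly why asymptotic notation drops it.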

Also, to explain the difference between $\Theta$, $O$ and $\Omega$: the $O$-notation gives an upper bound on the running time of a computation, meaning the running time grows no faster than its argument (up to a constant factor); the $\Omega$-notation is the opposite, a lower bound; and the $\Theta$-notation combines the two, giving a tight bound that matches the actual growth rate. For example, if I were to iterate over an array of size $n$, I would say it has a running time of $\Theta(n)$, since the running time grows exactly linearly. In this case, I could also write $O(n)$, since I am sure it runs no longer than linear time.
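As a sketch of the array example (my own illustration): linear search makes between 1 and $n$ comparisons, so $O(n)$ is a valid upper bound, $\Omega(1)$ a valid lower bound, and a full scan of the whole array is exactly $\Theta(n)$.

```python
# Sketch: linear search in an array of size n, counting comparisons.
# Best case: 1 comparison; worst case (absent or last element): n.
# A full scan is Theta(n); O(n) upper-bounds every case.

def linear_search(arr, target):
    comparisons = 0
    for i, x in enumerate(arr):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

arr = list(range(1000))
print(linear_search(arr, 0))    # (0, 1): best case, 1 comparison
print(linear_search(arr, 999))  # (999, 1000): worst case, n comparisons
print(linear_search(arr, -5))   # (-1, 1000): absent, full Theta(n) scan
```

Note that $O(n)$ alone would also be true of an algorithm that sometimes finishes faster; only $\Theta(n)$ says the full scan really takes linear time.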