Suppose A(.) is a subroutine that takes as input a number in binary, and takes linear time (that is, O(n), where n is the length (in bits) of the number). Consider the following piece of code, which starts with an n-bit number x.
while x > 1:
    call A(x)
    x = x - 1
Assume that the subtraction takes O(n) time on an n-bit number.
(a) How many times does the inner loop iterate (as a function of n)? Leave your answer in big-O form.
(b) What is the overall running time (as a function of n), in big-O form?
(a) O($n^2$)
(b) O($n^3$)
Is this correct? Can someone concur please? The way I think about it is that the loop has to execute two steps on each pass, and it cycles through x times, subtracting 1 from the n-bit number each time until x reaches 1. For part (b), since A(.) takes time O(n), we multiply that by the time it takes to execute the loop, and we then have the overall running time. If I reasoned about or did the problem wrong, can someone please correct me and tell me what I did wrong?
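It can help to just run it. Here is a minimal Python sketch (`loop_iterations` is my own stub, not something named in the problem) that counts loop passes for the largest n-bit input, x = 2^n − 1. The count comes out as 2^n − 2, i.e. exponential in the bit length n, not quadratic:

```python
# Count how many times the loop body runs for the largest n-bit input.
# A(x) is stubbed out; we only count passes through the loop.
def loop_iterations(x):
    count = 0
    while x > 1:
        # "call A(x)" would happen here
        count += 1
        x = x - 1
    return count

for n in range(2, 11):
    x = 2**n - 1  # largest n-bit number
    print(n, loop_iterations(x))  # the count roughly doubles each time n grows by 1
```

The key observation the experiment surfaces: n is the *length* of x in bits, while the loop runs once per *value* of x, and an n-bit value can be as large as 2^n − 1.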
Supposing your code is as follows, so that the loop body is both lines:

the statements A(x) and x = x - 1 each run in O(n) time, so each pass through the loop costs O(n) in total.

Here is the catch, though: the loop iterates about x times, and x is the *value* of an n-bit number, which can be as large as $2^n - 1$. So the loop iterates O($2^n$) times, not O(n) times; the iteration count is exponential in the bit length n.

The entire code therefore executes in O($2^n$) * O(n) = O($n \cdot 2^n$). Your answers of O($n^2$) and O($n^3$) come from confusing the value of x with its length in bits.
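A rough cost model in Python makes the total explicit. The charging scheme is my own assumption, not specified by the problem: charge `bit_length(x)` units for the call A(x) and the same for the subtraction. Summing over the whole run for the largest n-bit input gives a total that grows like $n \cdot 2^n$ (within a constant factor), not like $n^2$:

```python
# Total work for the worst-case n-bit input, charging x.bit_length()
# units for A(x) and x.bit_length() units for the subtraction x = x - 1.
def total_cost(n):
    x = 2**n - 1          # largest n-bit number
    cost = 0
    while x > 1:
        cost += 2 * x.bit_length()
        x -= 1
    return cost

for n in range(4, 12):
    print(n, total_cost(n), n * 2**n)  # measured cost grows like n * 2**n
```

Counting numbers by bit length shows the sum is exactly $2(n-1) \cdot 2^n$, which is $\Theta(n \cdot 2^n)$.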