Understanding the running time of an algorithm with respect to input size n


I understand that we mostly measure the running time of an algorithm by counting its loop iterations. For instance, the following algorithm runs in roughly O(n^2) time:

int i;
int j;
int sum = 0;
for i <- 1 to n
   for j <- 1 to n
       sum += (i + j);

However, I found a post with an algorithm like this:

for i <- 0 to 10
  for j <- 1 to 20
     if (i*10 + j*20 = n)
         c <- c+1

and the answer was:

The running time of this algorithm is O(1) with respect to n

I thought we would say O(10*20)! Or maybe I don't understand what "with respect to n" means.

Can someone explain why the second algorithm's running time is O(1)?

Best answer:

A function $f(x)$ is said to be $O(g(x))$ if there exist $x_0\in\mathbb R$ and $c\gt0$ such that $$|f(x)|\le c|g(x)|$$ for all $x\ge x_0$.

So, for any constant $k$, the function $f(x)=k$ for all $x\in\mathbb R$ is $O(1)$, since we can always find a $c\gt|k|$ such that $$|k|\le c\cdot|1|.$$ The second algorithm is exactly this situation: both loop bounds are constants, so the loop body runs a fixed number of times no matter how large $n$ is; $n$ only appears inside the comparison, which does not change the iteration count.