Big-O Notation of $\sum_{i=1}^{n} c \cdot x_i$ vs. $c \cdot \sum_{i=1}^{n} x_i$


We have the following two algorithms and need to determine their big-O notation:

Algorithm 1 $$ \sum_{i=1}^{n} c \cdot x_i $$

Algorithm 2 $$ c \cdot \sum_{i=1}^{n} x_i $$
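In code, the two sums might look like the following Python sketch (the function names `algorithm1` and `algorithm2` are my own; `xs` is the input array and `c` the constant):

```python
def algorithm1(xs, c):
    # Sum of c * x_i: multiply each element by c inside the loop,
    # then add it to the running total.
    total = 0
    for x in xs:
        total += c * x
    return total

def algorithm2(xs, c):
    # c times the sum of x_i: add up all elements first,
    # then multiply once by c at the end.
    total = 0
    for x in xs:
        total += x
    return c * total
```

By distributivity both return the same value; the difference is only in how many arithmetic operations each performs.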

Although I have studied the topic for some hours, I am not really comfortable with the analysis of algorithms, so I would be happy if you could give me some input on it:

I started with the second one, as it looks easier to me. If we have an array of length $n$, we sum over every element, which gives O(n). Then we multiply once by the constant c. Here I am struggling with the analysis: does that mean we add 1 to our big-O notation, giving O(n+1)? But I learned that we can drop constants, which would lead us back to O(n).

The first algorithm was more difficult for me. I think it takes longer to execute, because every element in the array is multiplied by c before it can be added to the sum. So intuitively I would say we double the effort, and the best bound I can come up with is O(2n), since each element is both multiplied and summed. But here too we can drop the constant, right? So it would also be O(n).

I would be happy if you could help me steer my thoughts in the right direction. The exercise suggests that the two algorithms yield the same result but differ in their complexity, so my analysis must somehow be wrong.

The exercise also suggests thinking about time and space complexity, which it denotes as T and O. Now suppose we find that both are O(n); what is T then? Or in general, what is the difference between the two in this specific example?

Thanks a lot for your help!

Best answer

You are generally correct, but off by one in some of your calculations.

  1. In the first algorithm, you have $n$ multiplications (you multiply each of the $n$ numbers by $c$), followed by $n-1$ additions (remember, to add $n$ numbers you only need $n-1$ additions: add the second to the first, then the third to the result, and so on). So in total you have $n + (n-1) = 2n-1$ operations. This is, of course, an element of $O(n)$.
  2. In the second algorithm, you have $n-1$ additions plus one multiplication, so you need $n$ operations again, and the whole job is again in $O(n)$.
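The counts above can be checked with a small sketch that tallies the arithmetic operations each algorithm performs (the `_ops` helper names are hypothetical, introduced just for this illustration):

```python
def algorithm1_ops(xs):
    # Sum of c * x_i: one multiplication per element,
    # plus n - 1 additions to fold n products into one sum.
    mults = len(xs)
    adds = max(len(xs) - 1, 0)
    return mults + adds   # = 2n - 1 for n >= 1

def algorithm2_ops(xs):
    # c * (sum of x_i): n - 1 additions for the sum,
    # then a single multiplication at the end.
    adds = max(len(xs) - 1, 0)
    mults = 1
    return adds + mults   # = n for n >= 1
```

For an array of 10 elements this gives 19 operations versus 10: both counts grow linearly, but the first is roughly twice the second.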

You are completely incorrect here though:

The exercise suggests that they yield the same result, but differ in their complexity, so my analysis must somehow be wrong.

That conclusion does not follow. Two algorithms can have the same big-O complexity, yet one can still be vastly faster than the other.
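To see this formally: a count $f(n)$ is in $O(n)$ if there are constants $C$ and $n_0$ with $f(n) \le C \cdot n$ for all $n \ge n_0$. Both operation counts derived above qualify, just with different constants:

$$ 2n - 1 \le 2n \quad (C = 2), \qquad n \le 1 \cdot n \quad (C = 1). $$

Big-O deliberately discards this constant factor, which is exactly why two algorithms in the same class can differ in actual running time.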