I want to compute the average of a set of numbers that is constantly growing. I guess it's simple, but I'm struggling here:
Let's say:
1st iteration:
m(i) = (a + b) / 2
2nd iteration:
m(i+1) = (a + b + c) / 3
etc.
Is there a simple way to take advantage of the computation done in the previous iteration in the next one?
If I've understood the problem correctly, you have a set of n numbers and you want its average; then your set grows to n+1 elements, and you'd like the new average.
One thing you could do is always keep a running sum (accumulator) of your numbers. That way, you'd only need to divide by n to get the mean. This doesn't buy you much for a small set, but if you had a million or more numbers it would be vastly quicker: each update is a single addition, and computing the mean is a single division.
Your storage requirements are O(1), not O(n). Again, whether it's worth it depends on your set size.
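A minimal sketch of the accumulator approach in Python (the class and method names are my own, just for illustration):

```python
class RunningMean:
    """Keeps a running sum and count so each new mean costs O(1)."""

    def __init__(self):
        self.total = 0.0  # accumulator: sum of all numbers seen so far
        self.count = 0    # how many numbers have been added

    def add(self, x):
        # One addition per new number; nothing is recomputed.
        self.total += x
        self.count += 1

    def mean(self):
        # One division to produce the current average.
        return self.total / self.count


rm = RunningMean()
for x in [2, 4, 6]:
    rm.add(x)
print(rm.mean())  # → 4.0
```

If the numbers are large or numerous enough that the accumulator might overflow or lose precision, an alternative with the same O(1) cost is the incremental update m_new = m_old + (x - m_old) / n, which never stores the full sum.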