I am trying to write a proof that a certain algorithm cannot run faster than a limit I have found experimentally (by plotting the time the algorithm takes to finish against input size).
From the runtime plots I have generated, it is evident what the computational complexity of the algorithm is. However, producing a proof appears to be very complicated. Nevertheless, I have a working idea:
Because the algorithm is of the divide-and-conquer type, I want to show that computing one part gives no information about the other part.
Is this kind of approach sound?
I thought this through with a simpler example than my actual algorithm: suppose I have a set of numbers, {2, 4, 6, 8}, that I want to sum. I can split the problem into two parts: compute the sums of {2, 4} and {6, 8} separately, then add the two results. Because the sum of {2, 4} tells us nothing about the sum of {6, 8}, the time spent on the two subproblems necessarily adds up.
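To make the toy example concrete, here is a minimal sketch (the function name `dc_sum` is my own) of the divide-and-conquer summation I have in mind:

```python
def dc_sum(xs):
    # Base case: zero or one element, nothing to split.
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    # Each half is summed independently; the result for one half
    # carries no information about the other half.
    left = dc_sum(xs[:mid])
    right = dc_sum(xs[mid:])
    return left + right

print(dc_sum([2, 4, 6, 8]))  # 20
```

The intuition I want to formalize is that `left` and `right` are computed from disjoint inputs, so neither call can be skipped or shortened using the other's result.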
Of course, this is still far from a formal proof. I would appreciate any help :)