I have an algorithm whose number of operations depends on both the length of the input array and its values. For example, one array of a given length might take 10,000 operations, while another array of the same length but with different values might take 1,000,000 operations. Of course, the array length can vary as well as its values.
I'm wondering what the right way is to express the time complexity of this kind of algorithm. Maybe I'm not looking at the right measure (operation counts) and should consider something else, or maybe the right approach is to focus on the minimum and maximum number of operations taken in the best/worst case for a single element and build up from there?
What is the best way to do this?
Jesus
Since time complexity conventionally describes the worst case, your asymptotic upper bound should cover whichever input causes the highest number of operations. As long as the algorithm's running time is always bounded by that behavior, the time complexity expression is valid.
Essentially, just assume everything is the "worst case" and determine the time complexity of the algorithm from there.
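To make this concrete, here is a minimal sketch of a hypothetical value-dependent algorithm (not your actual code): the inner work scales with each element's value, so two arrays of the same length can cost very different amounts. For an array of length n with maximum value M, the worst case is O(n * M), and that is the bound you would report.

```python
def process(arr):
    """Hypothetical algorithm whose cost depends on element values."""
    ops = 0
    for x in arr:
        # Inner work proportional to the value itself, so the operation
        # count is value-dependent, not just length-dependent.
        for _ in range(x):
            ops += 1
    return ops

# Same length, very different operation counts:
small = process([10, 10, 10])      # 30 operations
large = process([1000, 1000, 1000])  # 3000 operations
```

Here the worst-case analysis simply plugs in the largest value the input is allowed to contain; if values are unbounded, the complexity must be stated in terms of both parameters, e.g. O(n * M), rather than n alone.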