How to figure out the time complexity for an algorithm with dynamic input parameters?


I have an algorithm whose number of operations depends on both the length of the input array and its values. For example, for one input of a given length it might take 10,000 operations, while for another input of the same length but with different values it might take 1,000,000 operations. Of course, the array length can vary just as its values do.

I'm wondering what the right way is to express the time complexity of this kind of algorithm. Maybe I'm not looking at the right measure (operation counts) and should consider another one. Perhaps the right approach is to focus on the minimum and maximum operations taken in the best/worst case for a single element and continue from there?

What is the best way to do this?

Jesus


BEST ANSWER

Since (worst-case) time complexity deals with the worst case, your asymptotic upper bound should cover whichever input causes the highest number of computations. As long as the algorithm is always bounded by this behavior, the time complexity expression is valid.

Essentially, just assume everything is the "worst case" and determine the time complexity of the algorithm from there.
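To make this concrete, here is a small illustration (my own example, not from the answer) of exactly the situation in the question: two inputs of the same length where one triggers far more operations than the other. Insertion sort compares roughly $n$ times on already-sorted input but roughly $n(n-1)/2$ times on reversed input; the worst-case bound $O(n^2)$ covers both.

```python
def insertion_sort_ops(a):
    """Sort a copy of `a` with insertion sort, returning the comparison count."""
    a = list(a)
    ops = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            ops += 1                  # count one comparison
            if a[j] > key:
                a[j + 1] = a[j]       # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return ops

n = 1000
print(insertion_sort_ops(list(range(n))))       # sorted input: 999 comparisons (about n)
print(insertion_sort_ops(list(range(n, 0, -1))))  # reversed input: 499500 (n*(n-1)/2)
```

Both runs process an array of length 1000, yet the operation counts differ by a factor of about 500; the worst-case expression $O(n^2)$ is the one that bounds every input of that length.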


Time complexity is (usually) the worst-case time complexity, an upper bound, if no other measure is specified. Alternatives include "average time" complexity (which really only makes sense if the set of inputs has a known probability distribution) and even "best time" complexity, which is a lower bound.

In the simplest case, can you find a simple expression that depends on the details of the algorithm that bounds the runtime? For example, "the product of the length of the list and the largest coefficient in the list" (assuming list elements have "coefficients" or can be coerced into numerical representation of difficulty)? Then the right thing to do is express this relationship: "$O(n M)$, where $n$ is the length of the list and $M$ is the largest coefficient in the list." This is somewhat finer grained than just taking the worst possible outcome when you know more due to algorithm analysis.
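A minimal sketch of such a parameterized bound (a hypothetical toy algorithm, not anything from the answer): for each element the work done is proportional to the element's value, so the total is bounded by $n \cdot M$ rather than by a function of $n$ alone.

```python
def total_naive(values):
    """Do ~v units of work per element v.

    Total work is proportional to sum(values) <= n * M, where n = len(values)
    and M = max(values) -- an O(n*M) bound parameterized by the input values.
    """
    ops = 0
    for v in values:
        for _ in range(v):   # ~v operations for this element
            ops += 1
    return ops

print(total_naive([3, 1, 4]))  # 8 operations; the n*M bound gives 3 * 4 = 12
```

Reporting the bound as $O(nM)$ tells the reader exactly which properties of the input drive the runtime, which a plain $O(\cdot)$ in $n$ alone would hide.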

Sometimes one has a two-stage algorithm (to take a simple example; there are obvious extensions) with a parameter that makes one stage faster and the other slower, or vice versa. (The Number Field Sieve is such an algorithm: the size of the factor base trades off sieving time against relation-finding time.) There is clearly a trade-off between speed in the one stage and speed in the other. While it would be wonderful to know this trade-off exactly and use it to select the best value of the parameter, frequently the best one can do is sum the worst case of each stage and observe that the result in practice will frequently be much better.
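The trade-off shape can be sketched with a hypothetical cost model (invented here purely for illustration; the stage costs are assumptions, not the NFS's actual ones): stage one costs $nB$ and stage two costs $n^2/B$, so the sum is minimized near $B = \sqrt{n}$, and picking $B$ far from that value lets one stage dominate.

```python
def two_stage_cost(n, B):
    """Hypothetical two-stage cost model: stage 1 costs n*B, stage 2 costs n^2/B."""
    stage1 = n * B       # grows with the parameter B
    stage2 = n * n / B   # shrinks as B grows
    return stage1 + stage2

n = 10_000
# The total is minimized near B = sqrt(n) = 100; elsewhere one stage dominates.
for B in (10, 100, 1000):
    print(B, two_stage_cost(n, B))
```

When the optimal parameter value is hard to derive, summing the worst case of each stage (here, bounding both terms at their largest anticipated values) still yields a valid, if pessimistic, upper bound.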