Is there a "theoretical limit" on how much dynamic programming can speed up an algorithm?


Amdahl's law gives a clear mathematical upper bound on how much parallelization can speed up an algorithm. I'm wondering whether a similar upper bound exists for dynamic programming.
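For concreteness, Amdahl's law states that if a fraction $p$ of a program's work can be parallelized across $s$ processors, the overall speedup is

$$
S(s) = \frac{1}{(1 - p) + \dfrac{p}{s}},
$$

so even as $s \to \infty$ the speedup is capped at $1/(1-p)$. I'm asking whether an analogous bound, presumably in terms of some measure of subproblem overlap, exists for dynamic programming.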

A related question: in dynamic programming, we often hear the phrase "overlapping subproblems". In general, problems with overlapping subproblems are amenable to a dynamic programming solution. Is there a concrete way to quantify the extent to which subproblems overlap? This is related to the original question because any theoretical upper bound on how much dynamic programming can speed up an algorithm is likely a function of how much the subproblems overlap. But I don't know whether "how much the subproblems overlap" has a formal, mathematical quantification.
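To illustrate what such a quantification might look like (this is my own sketch, not an established measure): one candidate is the ratio of total subproblem invocations in the naive recursion to the number of *distinct* subproblems. For Fibonacci, the naive recursion makes exponentially many calls, but only $n+1$ distinct subproblems exist, so memoization yields an exponential speedup:

```python
def naive_calls(n, counter=None):
    """Count total recursive invocations of the naive Fibonacci recursion."""
    if counter is None:
        counter = [0]
    counter[0] += 1
    if n >= 2:
        naive_calls(n - 1, counter)
        naive_calls(n - 2, counter)
    return counter[0]

def distinct_subproblems(n):
    """Fibonacci has exactly n + 1 distinct subproblems: fib(0), ..., fib(n)."""
    return n + 1

n = 25
total = naive_calls(n)                      # 242785 calls for n = 25
overlap_ratio = total / distinct_subproblems(n)
print(total, distinct_subproblems(n), round(overlap_ratio, 1))
```

Under this (hypothetical) measure, the DP speedup would be exactly the overlap ratio, since memoization does each distinct subproblem's work once. Whether anything like this has been formalized in the literature is what I'm asking.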