If I understand correctly, the subset sum problem is NP-complete: given an array of n integers and a target sum t, you have to return the numbers from the array that sum to the target (if such a subset exists).
But can't this problem be solved in polynomial time by dynamic programming, where we build an n × t table and split into two cases: either the last number is included in the output, in which case the target becomes t - a[n], or it is not included, in which case the target stays t but the array shrinks to size n - 1? Either way we keep reducing the size of the problem.
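To make the idea concrete, here is a minimal sketch of the table I have in mind (Python; the name `subset_sum` and the exact layout are just for illustration, and I assume all the numbers and the target are non-negative):

```python
def subset_sum(a, t):
    """Return a sub-list of a summing to t, or None if no subset does.
    Assumes every entry of a and the target t are non-negative."""
    n = len(a)
    # reachable[i][s] is True iff some subset of a[:i] sums to s
    reachable = [[False] * (t + 1) for _ in range(n + 1)]
    reachable[0][0] = True
    for i in range(1, n + 1):
        for s in range(t + 1):
            # either a[i-1] is left out, or it is included (if it fits)
            reachable[i][s] = reachable[i - 1][s] or (
                s >= a[i - 1] and reachable[i - 1][s - a[i - 1]]
            )
    if not reachable[n][t]:
        return None
    # walk the table backwards to recover one subset
    subset, s = [], t
    for i in range(n, 0, -1):
        if not reachable[i - 1][s]:   # a[i-1] must have been included
            subset.append(a[i - 1])
            s -= a[i - 1]
    return subset

# e.g. subset_sum([3, 34, 4, 12, 5, 2], 9) -> [5, 4]
```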
If this approach is correct, isn't the complexity O(n · t), which is polynomial? And if this problem belongs to P and is also NP-complete (from what I hear), then P = NP.
Surely, I am missing something here. Where is the loophole in this reasoning?
Thanks,
This is one of those fine points that is sometimes neglected when learning about the subject. The efficiency of an algorithm is always measured with respect to the size of the representation of the input, that is, how many bits you need to encode it. In the case of numbers this distinction is crucial, since a number $n$ is usually represented with about $\lg n$ bits ($\lg$ being log base 2). Hence, a solution that runs in $O(n)$ time is exponential in the input size, and therefore extremely inefficient.
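To put a rough bound on that (my own arithmetic, not in the original answer): a value $n$ that fits in $b$ bits can be as large as

$$n = 2^b - 1,$$

so $O(n)$ work means on the order of $2^b$ steps, exponentially many in the length $b$ of the encoding.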
The classic example for this distinction is primality checking: even the most naive algorithm does only $O(n)$ work, yet we cannot call it truly efficient even from a practical standpoint. We can (and do) work with numbers with hundreds of digits on a daily basis, and ordinary arithmetic on such numbers is quite fast (polynomial in the number of digits), but naive primality testing will never finish in real life, even for numbers with a hundred digits or so.
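As a sketch of what "naive" means here (Python, my own illustration, not part of the original answer):

```python
def is_prime_naive(n):
    """Trial division: up to n - 2 divisions, i.e. exponential in the
    number of digits of n (a d-digit number can be close to 10**d)."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

print(is_prime_naive(97))   # True, returns instantly
# but for a 100-digit prime this loop would need ~10**100 iterations
```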
The same danger lurks in any problem involving numbers, in particular subset sum.
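Spelled out for subset sum (again my own back-of-the-envelope): if the target $t$ is encoded with $b$ bits, it can be as large as $2^b - 1$, so filling the $n \times t$ table costs

$$O(n \cdot t) = O(n \cdot 2^b)$$

time and space, which is exponential in the length of the encoding of $t$. Such algorithms are called pseudo-polynomial; they do not place subset sum in P, and so they do not imply P = NP.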