Let's assume that we're given num[N], an array of N positive integers in ascending order. For instance, let N=10, with num[N] as follows:
1 2 3 6 7 9 11 22 44 50
Let min[i][j] denote
min[i][j] = min_{i <= k <= j} (|num[k]-num[i]| + |num[k]-num[i+1]| + ... + |num[k]-num[j]|)
I thought the only way of computing min[i][j] was to evaluate all j-i+1 choices of k and take the minimum among them, but I found the following recursion:
base case: min[i][i] = 0, min[i][i+1] = num[i+1]-num[i]
min[i][j]=min[i][j-1] + num[j]-num[(i+j)/2]
where (i+j)/2 denotes floor division: the greatest integer not greater than the true quotient.
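If it helps, here is a quick brute-force check of that recursion against the definition on the sample array (a Python sketch; the function names `brute` and `rec` are mine):

```python
def brute(num, i, j):
    # definition: try every k in [i, j] and take the cheapest total distance
    return min(sum(abs(num[k] - num[l]) for l in range(i, j + 1))
               for k in range(i, j + 1))

def rec(num, i, j):
    # base cases: min[i][i] = 0, min[i][i+1] = num[i+1] - num[i]
    if j - i <= 1:
        return num[j] - num[i]
    # min[i][j] = min[i][j-1] + num[j] - num[(i+j)/2], with floor division
    return rec(num, i, j - 1) + num[j] - num[(i + j) // 2]

num = [1, 2, 3, 6, 7, 9, 11, 22, 44, 50]
assert all(brute(num, i, j) == rec(num, i, j)
           for i in range(len(num)) for j in range(i, len(num)))
```

On this array the two agree for every pair (i, j), which at least rules out small counterexamples.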
I tried to work out why this holds, but I couldn't. Or is it wrong? Are there test cases where it fails? Any help would be greatly appreciated.
That solution seems to be correct (I might add a proof later), but it's really strange. A simpler solution would be
min[i][i] = 0
min[i][i+1] = abs(num[i+1] - num[i])
min[i][j] = abs(num[j] - num[i]) + min[i+1][j-1]

This works because we know some $k$ that isn't the first or last element will be optimal for arrays with length greater than 2. And the distance from the first to the last element of an array is the same as the distance from the first element to some middle element plus the distance from that middle element to the last.
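To make the pairing argument concrete, here is a short Python sketch of that recurrence (the helper names `brute` and `pair_cost` are my own), checked against the definition on the array from the question:

```python
def brute(num, i, j):
    # definition: try every k in [i, j] and take the cheapest total distance
    return min(sum(abs(num[k] - num[l]) for l in range(i, j + 1))
               for k in range(i, j + 1))

def pair_cost(num, i, j):
    # peel off the outermost pair: min[i][j] = (num[j] - num[i]) + min[i+1][j-1]
    # j <= i covers both base cases (empty and one-element ranges cost 0)
    if j <= i:
        return 0
    return (num[j] - num[i]) + pair_cost(num, i + 1, j - 1)

num = [1, 2, 3, 6, 7, 9, 11, 22, 44, 50]
assert all(brute(num, i, j) == pair_cost(num, i, j)
           for i in range(len(num)) for j in range(i, len(num)))
```

Each recursive step removes the outermost pair, whose contribution num[j] - num[i] is the same no matter which middle element the optimal $k$ lands on.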